Feature selection in analogy-based software effort estimation (ASEE) is formulated as a multi-objective optimization problem: one objective maximizes effort estimation accuracy, while the other minimizes the number of selected features. Based on these two potentially conflicting objectives, a novel wrapper-based feature selection method, multi-objective feature selection for analogy-based software effort estimation (MASE), is proposed. In the empirical studies, 77 real-world projects from Desharnais and 62 from Maxwell are selected as evaluation objects, and MASE is compared with several baseline methods. The results show that the proposed method achieves better performance while selecting fewer features under the MMRE (mean magnitude of relative error), MdMRE (median magnitude of relative error), PRED(0.25), and SA (standardized accuracy) performance metrics.
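The four accuracy metrics named in this abstract have standard definitions in the effort estimation literature; a minimal sketch (not the authors' code; SA is computed here by exact enumeration of the random-guessing baseline rather than by sampling) is:

```python
import numpy as np

def mre(actual, predicted):
    """Magnitude of relative error per project: |y - y_hat| / y."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.abs(actual - predicted) / actual

def mmre(actual, predicted):
    """Mean magnitude of relative error."""
    return mre(actual, predicted).mean()

def mdmre(actual, predicted):
    """Median magnitude of relative error."""
    return np.median(mre(actual, predicted))

def pred(actual, predicted, level=0.25):
    """PRED(level): fraction of projects whose MRE is at most `level`."""
    return (mre(actual, predicted) <= level).mean()

def sa(actual, predicted):
    """Standardized accuracy: 1 - MAR / MAR_p0, where MAR_p0 is the mean
    absolute residual of guessing another project's actual effort
    (exact enumeration over all ordered pairs)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    mar = np.abs(actual - predicted).mean()
    diffs = np.abs(actual[:, None] - actual[None, :])
    n = len(actual)
    mar_p0 = diffs.sum() / (n * (n - 1))  # excludes the i == j zero terms
    return 1.0 - mar / mar_p0
```

Because MMRE and MdMRE penalize relative error while SA is scaled against a guessing baseline, a method can rank differently across the four metrics, which is why abstracts in this area typically report all of them.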
Activity now plays a vital role in software processes. To ensure the high efficiency of software processes, a key point is to locate those activities with higher resource-occupation probabilities relative to average execution time, called delayed activities, and then improve them. To this end, we first propose an approach to locating delayed activities in software processes. We then present a case study that concretely illustrates the approach and exhibits its high efficiency. Beneficial analysis and reasonable modifications are developed at the end.
Deep convolutional neural networks (CNNs) have demonstrated remarkable performance in video super-resolution (VSR). However, the ability of most existing methods to recover fine details in complex scenes is often hindered by the loss of shallow texture information during feature extraction. To address this limitation, we propose a 3D Convolutional Enhanced Residual Video Super-Resolution Network (3D-ERVSNet). This network employs a forward and backward bidirectional propagation module (FBBPM) that aligns features across frames using explicit optical flow through a lightweight SPyNet. By incorporating an enhanced residual structure (ERS) with skip connections, shallow and deep features are effectively integrated, enhancing texture restoration. Furthermore, a 3D convolution module (3DCM) is applied after the backward propagation module to implicitly capture spatio-temporal dependencies. The architecture synergizes these components: FBBPM extracts aligned features, ERS fuses hierarchical representations, and 3DCM refines temporal coherence. Finally, a deep feature aggregation module (DFAM) fuses the processed features, and a pixel-upsampling module (PUM) reconstructs the high-resolution (HR) video frames. Comprehensive evaluations on the REDS, Vid4, UDM10, and Vim4 benchmarks demonstrate strong performance, including 30.95 dB PSNR / 0.8822 SSIM on REDS and 32.78 dB / 0.8987 on Vim4. 3D-ERVSNet achieves significant gains over baselines while remaining efficient, with only 6.3M parameters and a 77 ms/frame runtime (about 20× faster than RBPN). The network's effectiveness stems from its task-specific asymmetric design, which balances explicit alignment and implicit fusion.
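PSNR, the headline metric reported above, is worth pinning down; a minimal sketch (the function name and the 8-bit `max_val` default are assumptions, not from the paper) is:

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a ground-truth frame
    and a super-resolved frame: 10 * log10(MAX^2 / MSE)."""
    reference = np.asarray(reference, dtype=np.float64)
    reconstructed = np.asarray(reconstructed, dtype=np.float64)
    mse = np.mean((reference - reconstructed) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)
```

VSR papers average this per-frame score over a whole test clip, which is how a single number like 30.95 dB summarizes a benchmark such as REDS.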
Point of interest (POI) recommendation analyses user preferences through historical check-in data. However, existing POI recommendation methods often overlook the influence of weather information and face the challenge of sparse historical data for individual users. To address these issues, this paper proposes a new paradigm, the temporal-weather-aware transition pattern for POI recommendation (TWTransNet), designed to capture user transition patterns under different times and weather conditions. Additionally, we construct a user-POI interaction graph to alleviate the sparsity of individual users' historical data. Furthermore, when predicting user interests by aggregating graph information, some POIs may not be suitable for visitation under the current weather; to account for this, we propose an attention mechanism that filters POI neighbours during graph aggregation, considering the impact of weather and time. Empirical results on two real-world datasets demonstrate the superior performance of the proposed method, with a substantial improvement of 6.91%-23.31% in prediction accuracy.
Cloud storage services cannot be completely trusted because of the separation of data management and ownership, which makes data privacy protection difficult. To protect the privacy of data on untrusted cloud storage servers, a novel multi-authority access control scheme without a trustworthy central authority is proposed based on CP-ABE, called non-centered multi-authority proxy re-encryption based on ciphertext-policy attribute-based encryption (NC-MACPABE). NC-MACPABE optimizes the weighted access structure (WAS), allowing different levels of operation on the same file in a cloud storage system. The concept of identity dyeing is introduced to further improve users' information privacy. The re-encryption algorithm is improved so that the data owner can revoke a user's access right more flexibly. The scheme is proved secure, and experimental results show that removing the central authority resolves the performance bottleneck of multi-authority architectures with a central authority, significantly improving the user experience when many users request access to the cloud storage system simultaneously.
To fight against malicious code in P2P networks, it is necessary to study the malicious code propagation model of P2P networks in depth. Epidemics of malicious code threatening P2P systems can be divided into active and passive propagation models, and a new passive propagation model of malicious code is proposed, which differentiates peers into four states and better fits actual P2P networks. The propagation model makes it clear that quickly getting peers patched and their anti-virus systems upgraded is the key to immunization and damage control. To distribute patches and immune modules efficiently, a new exponential tree plus (ET+) and a vaccine distribution algorithm based on ET+ are also proposed. Performance analysis and test results show that the ET+-based vaccine distribution algorithm is robust, efficient, and much more suitable for P2P networks.
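The abstract differentiates peers into four states without naming them here. Purely for orientation, a discrete-time mean-field sketch with assumed susceptible/exposed/infected/recovered states and assumed rate parameters (none of which come from the paper) looks like:

```python
def simulate(n=10000, beta=0.3, sigma=0.2, gamma=0.1, steps=200, i0=10):
    """Illustrative four-state propagation dynamics for a peer population.
    State names and transition rules are assumptions, not the paper's model.
    beta:  contact rate of infected peers spreading malicious files
    sigma: activation rate of downloaded-but-dormant malicious code
    gamma: patching/immunization rate (e.g. via vaccine distribution)"""
    S, E, I, R = float(n - i0), 0.0, float(i0), 0.0
    history = []
    for _ in range(steps):
        new_e = beta * S * I / n   # susceptible peers fetching infected content
        new_i = sigma * E          # dormant code activating on a peer
        new_r = gamma * I          # peers patched and immunized
        S, E, I, R = S - new_e, E + new_e - new_i, I + new_i - new_r, R + new_r
        history.append((S, E, I, R))
    return history
```

Raising `gamma` in such a simulation flattens the infected curve, which matches the abstract's point that fast patch distribution is the key lever for damage control.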
As a new computing mode, cloud computing can provide users with virtualized and scalable web services, which, however, face serious security challenges. Access control is one of the most important measures to ensure the security of cloud computing, but directly applying traditional access control models to the Cloud cannot resolve the uncertainty and vulnerability caused by its open conditions. In a cloud computing environment, data security during interactions between users and the Cloud can be effectively guaranteed only when the security and reliability of both interacting parties are ensured. Therefore, building a mutual trust relationship between users and the cloud platform is the key to implementing new access control methods in the cloud computing environment. Combining this with Trust Management (TM), a mutual trust based access control (MTBAC) model is proposed in this paper. The MTBAC model takes both the user's behavior trust and the cloud service node's credibility into consideration. Trust relationships between users and cloud service nodes are established by a mutual trust mechanism, and the security problems of access control are solved by implementing the MTBAC model in a cloud computing environment. Simulation experiments show that the MTBAC model can guarantee reliable interaction between users and cloud service nodes.
The Cloud is increasingly being used to store and process big data for its tenants, yet classical security mechanisms based on encryption are neither sufficiently efficient nor well suited to protecting big data in the Cloud. In this paper, we present an alternative approach that divides big data into sequenced parts and stores them among multiple Cloud storage service providers. Instead of protecting the big data itself, the proposed scheme protects the mapping of the various data elements to each provider using a trapdoor function. Analysis, comparison, and simulation show that the proposed scheme is efficient and secure for the big data of Cloud tenants.
Community detection is an important methodology for understanding the intrinsic structure and function of a real-world network. In this paper, we propose an effective and efficient algorithm, called the Dominant Label Propagation Algorithm (DLPA), to detect communities in complex networks. The algorithm simulates a special voting process to detect overlapping and non-overlapping community structures in complex networks simultaneously. Our algorithm is very efficient, since its computational complexity is almost linear in the number of edges in the network. Experimental results on both real-world and synthetic networks show that our algorithm also achieves high accuracy in detecting community structure.
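DLPA adds a dominant-label voting process on top of plain label propagation. For orientation, the classic near-linear-time baseline it extends can be sketched as follows (a generic sketch, not the authors' algorithm):

```python
import random
from collections import Counter

def label_propagation(adj, max_iter=100, seed=0):
    """Asynchronous label propagation: each node repeatedly adopts the most
    frequent label among its neighbours, breaking ties at random. Each pass
    touches every edge a constant number of times, hence the near-linear cost.
    adj: dict mapping node -> set of neighbour nodes."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}          # every node starts as its own community
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)                # random update order each pass
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            choice = rng.choice([l for l, c in counts.items() if c == best])
            if choice != labels[v]:
                labels[v] = choice
                changed = True
        if not changed:                   # converged: no node wants to switch
            break
    return labels
```

Nodes sharing a final label form one community; the overlapping detection described in the abstract requires DLPA's richer per-node voting state, which this baseline deliberately omits.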
In modern terminology, “organoids” refer to cells that grow in a specific three-dimensional (3D) environment in vitro, sharing similar structures with their source organs or tissues. Observing the morphology or growth characteristics of organoids through a microscope is a commonly used method of organoid analysis. However, screening and analyzing organoids manually is difficult, time-consuming, and inaccurate, a problem that cannot be easily solved with traditional technology. Artificial intelligence (AI) has proven effective in many biological and medical research fields, especially in the analysis of single cells or hematoxylin/eosin-stained tissue slices. When used to analyze organoids, AI should likewise provide more efficient, quantitative, accurate, and fast solutions. In this review, we first briefly outline the application areas of organoids and discuss the shortcomings of traditional organoid measurement and analysis methods. Second, we summarize the development from machine learning to deep learning and the advantages of the latter, and then describe how a convolutional neural network can address the challenges in organoid observation and analysis. Finally, we discuss the limitations of current AI in organoid research, as well as opportunities and future research directions.
Attribute reduction in rough set theory is an important feature selection method, but finding a minimum attribute reduction has been proven to be a non-deterministic polynomial (NP)-hard problem. It is therefore necessary to investigate fast and effective approximate algorithms. A novel enhanced quantum-inspired shuffled frog leaping based minimum attribute reduction algorithm (QSFLAR) is proposed. Evolutionary frogs are represented by multi-state quantum bits, and both quantum rotation gate and quantum mutation operators are used to exploit the mechanisms of frog population diversity and convergence to the global optimum. The decomposed attribute subsets are co-evolved by the elitist frogs with a quantum-inspired shuffled frog leaping algorithm. The experimental results validate the feasibility and effectiveness of QSFLAR compared with some representative algorithms. QSFLAR can therefore be considered a more competitive algorithm in terms of efficiency and accuracy for minimum attribute reduction.
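Reduction searches like QSFLAR score candidate attribute subsets with a rough-set consistency measure; the standard one is the dependency degree gamma_B(D) = |POS_B(D)| / |U|. A minimal sketch of that score (an assumed fitness core, not the paper's exact fitness function) is:

```python
from collections import defaultdict

def positive_region(rows, attrs, decision):
    """POS_B(D): objects whose equivalence class under the condition
    attributes `attrs` is consistent (all members share one decision value).
    rows: list of dicts mapping attribute name -> value."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    pos = set()
    for members in blocks.values():
        if len({rows[i][decision] for i in members}) == 1:
            pos.update(members)
    return pos

def dependency(rows, attrs, decision):
    """gamma_B(D) = |POS_B(D)| / |U|; a subset with gamma equal to that of
    the full attribute set is a candidate reduct."""
    return len(positive_region(rows, attrs, decision)) / len(rows)
```

A subset that preserves the full-set dependency while being as small as possible is exactly the minimum reduction the abstract's NP-hard problem asks for, which is why heuristics optimize this quantity jointly with subset size.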
In this paper, we propose a new attribute-based proxy re-encryption scheme, in which a semi-trusted proxy, given some additional information, can transform a ciphertext under one set of attributes into a new ciphertext under another set of attributes on the same message, but not vice versa. Its security is proved in the standard model based on the decisional bilinear Diffie-Hellman assumption. This scheme can be used to realize fine-grained, selective sharing of encrypted data, which general proxy re-encryption schemes cannot do, so the proposed scheme can be regarded as an improvement over traditional proxy re-encryption.
In a measurement system, new representation methods are necessary to maintain uncertainty and to supply more powerful reasoning and transformation between numerical and symbolic systems. A grey measurement system is discussed from the point of view of intelligent sensors and incomplete information processing, compared with numerical and symbolized measurement systems. Methods of grey representation and information processing are proposed for data collection and reasoning. As a case study, multi-ultrasonic-sensor systems are demonstrated to verify the effectiveness of the proposed methods.
Image recognition is an important field of artificial intelligence. Combined with the development of machine learning in recent years, it has great research and commercial value. In fact, a single recognition function can no longer meet people's needs, and accurate image prediction is the trend people pursue. Based on Long Short-Term Memory (LSTM) and Deep Convolutional Generative Adversarial Networks (DCGAN), this paper studies and implements a prediction model using radar image data. We adopt a stacked cascading strategy in designing the network connections, which gives better control of parameter convergence. This new method enables effective learning of image features and gives the predictive model greater generalization capability. Experiments demonstrate that our network model is more robust and efficient in timing prediction than 3DCNN and traditional ConvLSTM. The sequential image prediction architecture proposed in this paper is theoretically applicable to all sequential images.
Image feature optimization is an important means of dealing with high-dimensional image data in image semantic understanding and its applications. We formulate image feature optimization as the establishment of a mapping between high- and low-dimensional spaces via a five-tuple model. Nonlinear dimensionality reduction based on manifold learning provides a feasible way to solve such a problem. We propose a novel globular neighborhood based locally linear embedding (GNLLE) algorithm using neighborhood update and an incremental neighbor search scheme, which not only can handle sparse datasets but also has strong anti-noise capability and good topological stability. Given that the distance measure adopted in nonlinear dimensionality reduction is usually based on pairwise similarity calculation, we also present a globular neighborhood and path clustering based locally linear embedding (GNPCLLE) algorithm built on path-based clustering. Owing to its full consideration of correlations between image data, GNPCLLE can eliminate distortion of the overall topological structure of the dataset on the manifold. Experimental results on two image sets show the effectiveness and efficiency of the proposed algorithms.
How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data centers. A winner tree is introduced, with the data nodes as its leaf nodes, and the final winner is selected with the aim of reducing energy consumption. The complexity of large-scale cloud data centers is fully considered, and a task comparison coefficient is defined to make the task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve node utilization and reduce the overall power consumption of the cloud data center.
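The abstract does not specify how the winner tree is keyed; assuming each leaf holds a data node's current power cost and the "winner" is the minimum, a sketch of the structure is:

```python
class WinnerTree:
    """Tournament tree over leaf keys; the root always holds the index of the
    leaf with the smallest key (here assumed to be a data node's power cost,
    which is a guess at L3SA's keying, not taken from the paper)."""
    def __init__(self, keys):
        self.n = len(keys)
        self.keys = list(keys)
        size = 1
        while size < self.n:
            size *= 2
        self.size = size                    # leaves padded to a power of two
        self.tree = [0] * (2 * size)        # internal nodes store leaf indices
        for i in range(size):
            self.tree[size + i] = i if i < self.n else -1   # -1 = empty leaf
        for i in range(size - 1, 0, -1):
            self.tree[i] = self._winner(self.tree[2 * i], self.tree[2 * i + 1])

    def _winner(self, a, b):
        if a == -1: return b
        if b == -1: return a
        return a if self.keys[a] <= self.keys[b] else b

    def winner(self):
        """Index of the current best (lowest-cost) node, in O(1)."""
        return self.tree[1]

    def update(self, i, key):
        """Change leaf i's key and replay only the log(n) matches on the
        path from that leaf to the root."""
        self.keys[i] = key
        node = (self.size + i) // 2
        while node >= 1:
            self.tree[node] = self._winner(self.tree[2 * node], self.tree[2 * node + 1])
            node //= 2
```

The appeal for scheduling is that after assigning a task to the winning node, refreshing that one leaf costs O(log n) rather than rescanning every node.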
Federated learning has been used extensively in business innovation scenarios in various industries. This research adopts the federated learning approach for the first time to address the issue of bank-enterprise information asymmetry in the credit assessment scenario. First, this research designs a credit risk assessment model based on federated learning and feature selection for micro and small enterprises (MSEs), using multi-dimensional enterprise data and multi-perspective enterprise information. The proposed model comprises four main processes: encrypted entity alignment, hybrid feature selection, secure multi-party computation, and global model updating. Second, a two-step feature selection algorithm based on wrapper and filter methods is designed to construct the optimal feature set from multi-source heterogeneous data, providing excellent accuracy and interpretability. In addition, a local update screening strategy is proposed to select trustworthy model parameters for each aggregation, ensuring the quality of the global model. The results show that, compared with algorithms commonly used in credit risk research, the model error rate is reduced by 6.22% and the recall rate is improved by 11.03%, significantly improving the ability to identify defaulters. Finally, the business operations of commercial banks are used to confirm the potential of the proposed model for real-world implementation.
Private clouds and public clouds are converging into an open, integrated cloud computing environment, which can fully aggregate and utilize the computing, storage, information, and other hardware and software resources of WANs and LANs, but which also brings a series of security, reliability, and credibility problems. To solve these problems, a novel secure-agent-based trustworthy virtual private cloud model named SATVPC is proposed for the integrated, open cloud computing environment. Through the introduction of secure-agent technology, SATVPC provides an independent, safe, and trustworthy virtual private computing platform for multi-tenant systems. To meet SATVPC's credibility needs and mandate trust relationships between each task execution agent and task executor node suitable to their security policies, a new dynamic composite credibility evaluation mechanism is presented, including a credit index computing algorithm and a credibility differentiation strategy. The experimental system shows that SATVPC and the credibility evaluation mechanism can feasibly ensure the security of open computing environments. Experimental results and performance analysis also show that the credit index computing algorithm can evaluate the credibility of task execution agents and task executor nodes quantitatively, correctly, and operationally.
By pushing computation, cache, and network control to the edge, mobile edge computing (MEC) is expected to play a leading role in fifth generation (5G) and future sixth generation (6G) networks. Nevertheless, facing ubiquitous, fast-growing computational demands, a single MEC paradigm cannot effectively support high-quality intelligent services at end user equipments (UEs). To address this issue, we propose an air-ground collaborative MEC (AGC-MEC) architecture in this article. The proposed AGC-MEC integrates all potentially available MEC servers in the air and on the ground in the envisioned 6G, using a variety of collaborative methods to provide the best possible computation services to UEs. First, we introduce the AGC-MEC architecture and elaborate three typical use cases. Then, we discuss four main challenges in AGC-MEC and their potential solutions. Next, we conduct a case study of collaborative service placement for AGC-MEC to validate the effectiveness of the proposed collaborative service placement strategy. Finally, we highlight several potential research directions for AGC-MEC.
Anti-detection is becoming an emerging challenge for anti-phishing. This paper addresses anti-detection threats through the threshold-setting condition: enough webpages are considered to complicate the threshold-setting condition when the threshold is set. Exploiting the common visual behavior of being drawn to the salient region of a webpage, image retrieval methods based on the texton correlation descriptor (TCD) are improved to obtain enough webpages whose page images are similar in their salient regions. TCD, which has the advantage of recognizing the salient region of images, is improved in two steps: (1) we propose a weighted Euclidean distance based on neighborhood location (NLW-Euclidean distance) and double cross windows, and combine them to solve the problems in TCD; (2) a space structure is introduced to map the image set into Euclidean space so that similarity relations among images can be used to complicate the threshold-setting conditions. Experimental results show that the proposed method improves the effectiveness of anti-phishing, makes the system more stable, and significantly reduces the possibility of the system being hijacked for use as a blockchain mining platform.
基金The National Natural Science Foundation of China(No.61602267,61202006)the Open Project of State Key Laboratory for Novel Software Technology at Nanjing University(No.KFKT2016B18)
文摘The feature selection in analogy-based software effort estimation (ASEE) is formulized as a multi-objective optimization problem. One objective is designed to maximize the effort estimation accuracy and the other objective is designed to minimize the number of selected features. Based on these two potential conflict objectives, a novel wrapper- based feature selection method, multi-objective feature selection for analogy-based software effort estimation (MASE), is proposed. In the empirical studies, 77 projects in Desharnais and 62 projects in Maxwell from the real world are selected as the evaluation objects and the proposed method MASE is compared with some baseline methods. Final results show that the proposed method can achieve better performance by selecting fewer features when considering MMRE (mean magnitude of relative error), MdMRE (median magnitude of relative error), PRED ( 0. 25 ), and SA ( standardized accuracy) performance metrics.
基金supported by National Natural Science Foundation of China(No.61462091)High-tech Industrial Development Program of Yunnan Province(No.1956,in 2012)+2 种基金New Academic Researcher Award for Doctoral Candidates of Yunnan Province of China(No.ynu201414)Natural Science Youth Foundation of Yunnan Province of China(No.2014FD006)the Postgraduates Science Foundation of Yunnan University(No.ynuy201424)
文摘Activity is now playing a vital role in software processes. To ensure the high-level efficiency of software processes, a key point is to locate those activities that own bigger resource occupation probabilities with respect to average execution time, called delayed activities, and then improve them. To this end, we firstly propose an approach to locating delayed activities in software processes. Furthermore, we present a case study, which exhibits the high-level efficiency of the approach, to concretely illustrate this new solution. Some beneficial analysis and reasonable modification are developed in the end.
基金supported in part by the Basic and Applied Basic Research Foundation of Guangdong Province[2025A1515011566]in part by the State Key Laboratory for Novel Software Technology,Nanjing University[KFKT2024B08]+1 种基金in part by Leading Talents in Gusu Innovation and Entrepreneurship[ZXL2023170]in part by the Basic Research Programs of Taicang 2024,[TC2024JC32].
文摘Deep convolutional neural networks(CNNs)have demonstrated remarkable performance in video super-resolution(VSR).However,the ability of most existing methods to recover fine details in complex scenes is often hindered by the loss of shallow texture information during feature extraction.To address this limitation,we propose a 3D Convolutional Enhanced Residual Video Super-Resolution Network(3D-ERVSNet).This network employs a forward and backward bidirectional propagation module(FBBPM)that aligns features across frames using explicit optical flow through lightweight SPyNet.By incorporating an enhanced residual structure(ERS)with skip connections,shallow and deep features are effectively integrated,enhancing texture restoration capabilities.Furthermore,3D convolution module(3DCM)is applied after the backward propagation module to implicitly capture spatio-temporal dependencies.The architecture synergizes these components where FBBPM extracts aligned features,ERS fuses hierarchical representations,and 3DCM refines temporal coherence.Finally,a deep feature aggregation module(DFAM)fuses the processed features,and a pixel-upsampling module(PUM)reconstructs the high-resolution(HR)video frames.Comprehensive evaluations on REDS,Vid4,UDM10,and Vim4 benchmarks demonstrate well performance including 30.95 dB PSNR/0.8822 SSIM on REDS and 32.78 dB/0.8987 on Vim4.3D-ERVSNet achieves significant gains over baselines while maintaining high efficiency with only 6.3M parameters and 77ms/frame runtime(i.e.,20×faster than RBPN).The network’s effectiveness stems from its task-specific asymmetric design that balances explicit alignment and implicit fusion.
基金supported by Stable Support Project of Shenzhen(20231120161634002)Shenzhen Science and Technology Programme(JCYJ20240813141417023)+5 种基金Natural Science Foundation of Guangdong Province of China(2025A1515010233)Guangdong Provincial Department of Education(2024KTSCX060)Tencent‘Rhinoceros Birds’—Scientific Research Foundation for Young Teachers of Shenzhen University,Open Project of State Key Laboratory for Novel Software Technology of Nanjing University(KFKT2025B22)Hong Kong RGC General Research Fund(No.152211/23E and 15216424/24E)PolyU Internal Fund(No.P0043932,P0048988)NVIDIA AI Technology Centre.
文摘Point of interest(POI)recommendation analyses user preferences through historical check-in data.However,existing POI recommendation methods often overlook the influence of weather information and face the challenge of sparse historical data for individual users.To address these issues,this paper proposes a new paradigm,namely temporal-weather-aware transition pattern for POI recommendation(TWTransNet).This paradigm is designed to capture user transition patterns under different times and weather conditions.Additionally,we introduce the construction of a user-POI interaction graph to alleviate the problem of sparse historical data for individual users.Furthermore,when predicting user interests by aggregating graph information,some POIs may not be suitable for visitation under current weather conditions.To account for this,we propose an attention mechanism to filter POI neighbours when aggregating information from the graph,considering the impact of weather and time.Empirical results on two real-world datasets demonstrate the superior performance of our proposed method,showing a substantial improvement of 6.91%-23.31% in terms of prediction accuracy.
基金Projects(61472192,61202004)supported by the National Natural Science Foundation of ChinaProject(14KJB520014)supported by the Natural Science Fund of Higher Education of Jiangsu Province,China
文摘The cloud storage service cannot be completely trusted because of the separation of data management and ownership, leading to the difficulty of data privacy protection. In order to protect the privacy of data on untrusted servers of cloud storage, a novel multi-authority access control scheme without a trustworthy central authority has been proposed based on CP-ABE for cloud storage systems, called non-centered multi-authority proxy re-encryption based on the cipher-text policy attribute-based encryption(NC-MACPABE). NC-MACPABE optimizes the weighted access structure(WAS) allowing different levels of operation on the same file in cloud storage system. The concept of identity dyeing is introduced to improve the users' information privacy further. The re-encryption algorithm is improved in the scheme so that the data owner can revoke user's access right in a more flexible way. The scheme is proved to be secure. And the experimental results also show that removing the central authority can resolve the existing performance bottleneck in the multi-authority architecture with a central authority, which significantly improves user experience when a large number of users apply for accesses to the cloud storage system at the same time.
Funding: Supported by the National Natural Science Foundation of China (60573141, 60773041); the National High Technology Research and Development Program of China (863 Program) (2006AA01Z439, 2007AA01Z404, 2007AA01Z478); the Natural Science Foundation of Jiangsu Province (BK2008451); the Science & Technology Project of Jiangsu Province (BE2009158); the Natural Science Foundation of Higher Education Institutions of Jiangsu Province (09KJB520010, 09KJB520009); the Research Fund for the Doctoral Program of Higher Education (20093223120001); the Specialized Research Fund of the Ministry of Education (2009117); the High Technology Research Program of Nanjing (2007RZ127); the Foundation of the National Laboratory for Modern Communications (9140C1105040805); the Postdoctoral Foundation of Jiangsu Province (0801019C); and the Science & Technology Innovation Fund for Higher Education Institutions of Jiangsu Province (CX08B-085Z, CX08B-086Z).
Abstract: To fight against malicious code in P2P networks, it is necessary to study the malicious code propagation model of P2P networks in depth. The epidemics of malicious code threatening P2P systems can be divided into active and passive propagation models, and a new passive propagation model of malicious code is proposed, which differentiates peers into four kinds of states and fits actual P2P networks better. From the propagation model it is easy to find that quickly getting peers patched and their anti-virus systems upgraded is the key to immunization and damage control. To distribute patches and immune modules efficiently, a new exponential tree plus (ET+) and a vaccine distribution algorithm based on ET+ are also proposed. The performance analysis and test results show that the vaccine distribution algorithm based on ET+ is robust, efficient, and much more suitable for P2P networks.
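The exponential-tree idea can be illustrated with a minimal simulation (a generic doubling broadcast, not the paper's ET+ structure): once a peer is patched, it forwards the vaccine onward each round, so coverage grows exponentially and n peers are covered in roughly log2(n) rounds.

```python
def distribute_vaccine(num_peers):
    """Simulate an exponential-tree-style push: in each round every
    already-patched peer forwards the patch to one unpatched peer,
    so the patched set doubles until all peers are covered.
    Returns the number of rounds needed."""
    patched = 1  # the seed peer that first receives the patch
    rounds = 0
    while patched < num_peers:
        patched = min(num_peers, patched * 2)
        rounds += 1
    return rounds
```

This logarithmic round count is what makes tree-based vaccine distribution attractive for large P2P populations compared with sequential, one-at-a-time patching.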
Funding: This paper is supported by the Opening Project of the State Key Laboratory for Novel Software Technology of Nanjing University, China (Grant No. KFKT2012B25) and the National Science Foundation of China (Grant No. 61303263).
Abstract: As a new computing mode, cloud computing can provide users with virtualized and scalable web services, but it faces serious security challenges. Access control is one of the most important measures to ensure the security of cloud computing, yet applying a traditional access control model to the Cloud directly cannot solve the uncertainty and vulnerability caused by the open conditions of cloud computing. In a cloud computing environment, data security during interactions between users and the Cloud can be effectively guaranteed only when the security and reliability of both interacting parties are ensured. Therefore, building a mutual trust relationship between users and the cloud platform is the key to implementing new kinds of access control methods in the cloud computing environment. Combining this with Trust Management (TM), a mutual trust based access control (MTBAC) model is proposed in this paper. The MTBAC model takes both the user's behavior trust and the cloud service node's credibility into consideration. Trust relationships between users and cloud service nodes are established by a mutual trust mechanism, and the security problems of access control are solved by implementing the MTBAC model in the cloud computing environment. Simulation experiments show that the MTBAC model can guarantee secure interaction between users and cloud service nodes.
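A minimal sketch of a mutual-trust decision in the spirit of MTBAC (the decayed-average trust formula, the thresholds, and the neutral prior below are illustrative assumptions, not the paper's model): access is granted only when both the user's behavior trust and the node's credibility pass their thresholds.

```python
def behavior_trust(history, decay=0.8):
    """Exponentially decayed average of past interaction scores in
    [0, 1]; the most recent score weighs most (assumed formula)."""
    if not history:
        return 0.5  # neutral prior for an unknown party (assumption)
    weight, total, norm = 1.0, 0.0, 0.0
    for score in reversed(history):
        total += weight * score
        norm += weight
        weight *= decay
    return total / norm

def grant_access(user_history, node_history, user_th=0.6, node_th=0.6):
    """Mutual-trust decision: access is allowed only when BOTH the
    user's behavior trust and the cloud node's credibility pass
    their thresholds."""
    return behavior_trust(user_history) >= user_th and \
           behavior_trust(node_history) >= node_th
```

The symmetry is the point: a well-behaved user is still denied access to a low-credibility node, mirroring the mutual trust relationship the model advocates.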
Funding: Supported in part by the National Nature Science Foundation of China under Grant Nos. 61402413 and 61340058; the "Six Kinds Peak Talents Plan" project of Jiangsu Province under Grant No. ll-JY-009; the Nature Science Foundation of Zhejiang Province under Grant Nos. LY14F020019, Z14F020006 and Y1101183; the China Postdoctoral Science Foundation funded project under Grant No. 2012M511732; and the Jiangsu Province Postdoctoral Science Foundation funded project under Grant No. 1102014C.
Abstract: The Cloud is increasingly being used to store and process big data for its tenants, and classical security mechanisms using encryption are neither sufficiently efficient nor suited to the task of protecting big data in the Cloud. In this paper, we present an alternative approach that divides big data into sequenced parts and stores them among multiple Cloud storage service providers. Instead of protecting the big data itself, the proposed scheme protects the mapping of the various data elements to each provider using a trapdoor function. Analysis, comparison, and simulation prove that the proposed scheme is efficient and secure for the big data of Cloud tenants.
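The mapping-protection idea can be sketched as follows (a simplified stand-in: a keyed HMAC assignment plays the role of the trapdoor function, and the fixed-size chunking is an assumption, not the paper's scheme). Data is split into sequenced parts, and only the key holder can recompute which provider stores which part:

```python
import hmac
import hashlib

def chunk_data(data, chunk_size):
    """Split a byte string into sequenced fixed-size parts."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def provider_for(key, index, num_providers):
    """Keyed (trapdoor-like) mapping: without the secret key, an
    observer cannot tell which provider holds which sequenced part."""
    digest = hmac.new(key, str(index).encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % num_providers

def disperse(data, key, num_providers, chunk_size=4):
    """Distribute (index, chunk) pairs across providers by keyed mapping."""
    stores = {p: [] for p in range(num_providers)}
    for i, chunk in enumerate(chunk_data(data, chunk_size)):
        stores[provider_for(key, i, num_providers)].append((i, chunk))
    return stores
```

Reassembly simply sorts the retrieved parts by index, so the key holder can reconstruct the data while each individual provider sees only an unordered fraction of it.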
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 61173093 and 61202182); the Postdoctoral Science Foundation of China (Grant No. 2012M521776); the Fundamental Research Funds for the Central Universities of China; the Postdoctoral Science Foundation of Shaanxi Province, China; and the Natural Science Basic Research Plan of Shaanxi Province, China (Grant Nos. 2013JM8019 and 2014JQ8359).
Abstract: Community detection is an important methodology for understanding the intrinsic structure and function of a real-world network. In this paper, we propose an effective and efficient algorithm, called the Dominant Label Propagation Algorithm (abbreviated as DLPA), to detect communities in complex networks. The algorithm simulates a special voting process to detect overlapping and non-overlapping community structures in complex networks simultaneously. Our algorithm is very efficient, since its computational complexity is almost linear in the number of edges in the network. Experimental results on both real-world and synthetic networks show that our algorithm also achieves high accuracy in detecting community structure.
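For illustration, here is a plain label propagation sketch (the baseline technique that DLPA builds on, not DLPA's dominant-label voting itself): each node repeatedly adopts the most frequent label among its neighbours until labels stabilise, which is why the cost per sweep is linear in the number of edges.

```python
from collections import Counter

def label_propagation(adj, max_iters=100):
    """Plain (non-overlapping) label propagation.
    adj: dict mapping each node to a list of its neighbours.
    Returns a dict node -> community label."""
    labels = {node: node for node in adj}  # each node starts alone
    for _ in range(max_iters):
        changed = False
        for node, neighbours in adj.items():
            if not neighbours:
                continue
            # adopt the most frequent label among the neighbours
            counts = Counter(labels[n] for n in neighbours)
            best, _ = counts.most_common(1)[0]
            if labels[node] != best:
                labels[node] = best
                changed = True
        if not changed:
            break  # labels are stable: communities found
    return labels
```

On a graph of two disconnected triangles, the three nodes of each triangle converge to a shared label while the two triangles keep distinct labels.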
Funding: Supported by the National Key R&D Program of China (No. 2017YFA0700500); the National Natural Science Foundation of China (No. 62172202); the Experiment Project of the China Manned Space Program (No. HYZHXM01019); and the Fundamental Research Funds for the Central Universities from Southeast University (No. 3207032101C3).
Abstract: In modern terminology, "organoids" refer to cells that grow in a specific three-dimensional (3D) environment in vitro, sharing similar structures with their source organs or tissues. Observing the morphology or growth characteristics of organoids through a microscope is a commonly used method of organoid analysis. However, screening and analyzing organoids manually is difficult, time-consuming, and inaccurate, a problem which cannot be easily solved with traditional technology. Artificial intelligence (AI) technology has proven to be effective in many biological and medical research fields, especially in the analysis of single-cell or hematoxylin/eosin stained tissue slices. When used to analyze organoids, AI should also provide more efficient, quantitative, accurate, and fast solutions. In this review, we first briefly outline the application areas of organoids and then discuss the shortcomings of traditional organoid measurement and analysis methods. Secondly, we summarize the development from machine learning to deep learning and the advantages of the latter, and then describe how a convolutional neural network can be utilized to solve the challenges in organoid observation and analysis. Finally, we discuss the limitations of current AI used in organoid research, as well as opportunities and future research directions.
Funding: Supported by the National Natural Science Foundation of China (61139002, 61171132); the Funding of the Jiangsu Innovation Program for Graduate Education (CXZZ11 0219); the Natural Science Foundation of the Jiangsu Education Department (12KJB520013); the Applying Study Foundation of Nantong (BK2011062); the Open Project Program of the State Key Laboratory for Novel Software Technology, Nanjing University (KFKT2012B28); and the Natural Science Pre-Research Foundation of Nantong University (12ZY016).
Abstract: Attribute reduction in rough set theory is an important feature selection method, but finding a minimum attribute reduction has been proven to be a non-deterministic polynomial (NP)-hard problem. Therefore, it is necessary to investigate fast and effective approximate algorithms. A novel and enhanced quantum-inspired shuffled frog leaping based minimum attribute reduction algorithm (QSFLAR) is proposed. Evolutionary frogs are represented by multi-state quantum bits, and both quantum rotation gate and quantum mutation operators are used to exploit the mechanisms of frog population diversity and convergence to the global optimum. The decomposed attribute subsets are co-evolved by the elitist frogs with a quantum-inspired shuffled frog leaping algorithm. The experimental results validate the feasibility and effectiveness of QSFLAR compared with some representative algorithms. Therefore, QSFLAR can be considered a more competitive algorithm in terms of efficiency and accuracy for minimum attribute reduction.
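The quantum-bit representation and rotation gate can be sketched as follows (a generic quantum-inspired encoding for binary attribute selection; the angle encoding, fixed rotation step, and clamping are illustrative assumptions, not QSFLAR's exact operators). Each attribute is a qubit whose angle encodes the probability of being selected; the rotation gate nudges angles towards the best subset found so far, so repeated measurements increasingly reproduce it:

```python
import math
import random

def measure(thetas, rng):
    """Collapse each qubit, represented by its angle theta, into a
    bit: P(bit = 1) = sin^2(theta)."""
    return [1 if rng.random() < math.sin(t) ** 2 else 0 for t in thetas]

def rotate_towards(thetas, best_bits, delta=0.05 * math.pi):
    """Quantum rotation gate: nudge each qubit angle towards the bit
    value of the best attribute subset found so far, clamped to
    [0, pi/2] so probabilities stay in [0, 1]."""
    return [min(math.pi / 2, t + delta) if b == 1 else max(0.0, t - delta)
            for t, b in zip(thetas, best_bits)]
```

After enough rotations towards a fixed best subset, the angles saturate and measurement becomes deterministic, which is the convergence mechanism such quantum-inspired encodings rely on.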
Funding: Supported by the Natural Science Foundation of Shandong Province (Y2007G37) and the Science and Technology Development Program of Shandong Province (2007GG10001012).
Abstract: In this paper, we propose a new attribute-based proxy re-encryption scheme, in which a semi-trusted proxy, with some additional information, can transform a ciphertext under one set of attributes into a new ciphertext under another set of attributes on the same message, but not vice versa. Furthermore, its security is proved in the standard model based on the decisional bilinear Diffie-Hellman assumption. This scheme can be used to realize fine-grained selective sharing of encrypted data, which the general proxy re-encryption scheme cannot do, so the proposed scheme can be thought of as an improvement of the general traditional proxy re-encryption scheme.
Funding: Supported by the National Natural Science Foundation of China (60703083, 60575033).
Abstract: In a measurement system, new representation methods are necessary to maintain uncertainty and to supply a more powerful ability for reasoning and transformation between a numerical system and a symbolic system. A grey measurement system is discussed from the point of view of intelligent sensors and incomplete information processing, compared with a numerical and symbolized measurement system. The methods of grey representation and information processing are proposed for data collection and reasoning. As a case study, multi-ultrasonic sensor systems are demonstrated to verify the effectiveness of the proposed methods.
Funding: This work was supported in part by the Open Research Project of the State Key Laboratory of Novel Software Technology under Grant KFKT2018B23; the Priority Academic Program Development of Jiangsu Higher Education Institutions; and the Open Project Program of the State Key Lab of CAD&CG (Grant No. A1916), Zhejiang University.
Abstract: Image recognition technology is an important field of artificial intelligence. Combined with the development of machine learning technology in recent years, it has great research value and commercial value. As a matter of fact, a single recognition function can no longer meet people's needs, and accurate image prediction is the trend people pursue. Based on Long Short-Term Memory (LSTM) and Deep Convolution Generative Adversarial Networks (DCGAN), this paper studies and implements a prediction model using radar image data. We adopt a stack cascading strategy in designing the network connections, which gives better control of parameter convergence. This new method enables effective learning of image features and gives predictive models greater generalization capabilities. Experiments demonstrate that our network model is more robust and efficient in terms of timing prediction than 3DCNN and traditional ConvLSTM. The sequential image prediction model architecture proposed in this paper is theoretically applicable to all sequential images.
Funding: Project (No. 2008AA01Z132) supported by the National High-Tech Research and Development Program of China.
Abstract: Image feature optimization is an important means of dealing with high-dimensional image data in image semantic understanding and its applications. We formulate image feature optimization as the establishment of a mapping between high- and low-dimensional spaces via a five-tuple model. Nonlinear dimensionality reduction based on manifold learning provides a feasible way to solve such a problem. We propose a novel globular neighborhood based locally linear embedding (GNLLE) algorithm using neighborhood update and an incremental neighbor search scheme, which not only can handle sparse datasets but also has strong anti-noise capability and good topological stability. Given that the distance measure adopted in nonlinear dimensionality reduction is usually based on pairwise similarity calculation, we also present a globular neighborhood and path clustering based locally linear embedding (GNPCLLE) algorithm based on path-based clustering. Owing to its full consideration of correlations between image data, GNPCLLE can eliminate the distortion of the overall topological structure of the dataset on the manifold. Experimental results on two image sets show the effectiveness and efficiency of the proposed algorithms.
Funding: Supported by the National Natural Science Foundation of China (61202004, 61272084); the National Key Basic Research Program of China (973 Program) (2011CB302903); the Specialized Research Fund for the Doctoral Program of Higher Education (20093223120001, 20113223110003); the China Postdoctoral Science Foundation Funded Project (2011M500095, 2012T50514); the Natural Science Foundation of Jiangsu Province (BK2011754, BK2009426); the Jiangsu Postdoctoral Science Foundation Funded Project (1102103C); the Natural Science Fund of Higher Education of Jiangsu Province (12KJB520007); and the Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (yx002001).
Abstract: How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data centers. A winner tree is introduced, with the data nodes as its leaf nodes, and the final winner is selected with the purpose of reducing energy consumption. The complexity of large-scale cloud data centers is fully considered, and a task comparison coefficient is defined to make the task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve node utilization and reduce the overall power consumption of the cloud data center.
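The winner-tree selection step can be sketched as a tournament tree over node loads (a generic construction; the padding to a power of two and the scalar load metric are simplifying assumptions, and the paper's task comparison coefficient is not modelled). Each internal node holds the index of its lighter-loaded child, so the root is the overall winner:

```python
def build_winner_tree(loads):
    """Tournament (winner) tree over node loads: internal nodes hold
    the index of the lighter-loaded child, so tree[1] is the overall
    winner. Leaf count is padded to a power of two for simplicity."""
    n = len(loads)
    size = 1
    while size < n:
        size *= 2
    INF = float("inf")
    padded = loads + [INF] * (size - n)       # padding never wins
    tree = [0] * size + list(range(size))     # leaves store own index
    for i in range(size - 1, 0, -1):
        l, r = tree[2 * i], tree[2 * i + 1]
        tree[i] = l if padded[l] <= padded[r] else r
    return tree

def pick_node(loads):
    """Index of the least-loaded node, read off the tree root."""
    return build_winner_tree(loads)[1]
```

The appeal of the structure is incremental use: after a task is assigned, only the winner's path to the root (O(log n) comparisons) needs replaying, rather than rescanning all nodes.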
Funding: Funded by the State Grid Jiangsu Electric Power Company (Grant No. JS2020112) and the National Natural Science Foundation of China (Grant No. 62272236).
Abstract: Federated learning has been used extensively in business innovation scenarios in various industries. This research adopts the federated learning approach for the first time to address the issue of bank-enterprise information asymmetry in the credit assessment scenario. First, this research designs a credit risk assessment model based on federated learning and feature selection for micro and small enterprises (MSEs), using multi-dimensional enterprise data and multi-perspective enterprise information. The proposed model includes four main processes: encrypted entity alignment, hybrid feature selection, secure multi-party computation, and global model updating. Secondly, a two-step feature selection algorithm based on wrapper and filter methods is designed to construct the optimal feature set from multi-source heterogeneous data, providing excellent accuracy and interpretability. In addition, a local update screening strategy is proposed to select trustworthy model parameters for aggregation each time, ensuring the quality of the global model. The results of the study show that the model error rate is reduced by 6.22% and the recall rate is improved by 11.03% compared with the algorithms commonly used in credit risk research, significantly improving the ability to identify defaulters. Finally, the business operations of commercial banks are used to confirm the potential of the proposed model for real-world implementation.
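The two-step filter-then-wrapper idea can be sketched as below (a generic, non-federated illustration: the variance filter, the greedy forward search, and the toy scoring function are assumptions, not the paper's algorithm). The cheap filter prunes the candidate pool first; the wrapper then searches that pool with a model-based score:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def filter_step(samples, keep):
    """Filter step: rank features by variance, keep the top `keep`."""
    n_feat = len(samples[0])
    cols = [[row[f] for row in samples] for f in range(n_feat)]
    ranked = sorted(range(n_feat), key=lambda f: variance(cols[f]),
                    reverse=True)
    return ranked[:keep]

def wrapper_step(samples, labels, candidates, score):
    """Wrapper step: greedy forward selection, adding whichever
    candidate feature most improves the supplied model-based score,
    stopping when no addition improves it."""
    selected, best = [], float("-inf")
    improved = True
    while improved:
        improved = False
        for f in (c for c in candidates if c not in selected):
            s = score(samples, labels, selected + [f])
            if s > best:
                best, pick, improved = s, f, True
        if improved:
            selected.append(pick)
    return selected
```

In the federated setting described above, the wrapper's `score` would be evaluated under secure multi-party computation rather than locally, but the control flow is the same.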
Funding: Projects (61202004, 61272084) supported by the National Natural Science Foundation of China; Projects (2011M500095, 2012T50514) supported by the China Postdoctoral Science Foundation; Projects (BK2011754, BK2009426) supported by the Natural Science Foundation of Jiangsu Province, China; Project (12KJB520007) supported by the Natural Science Fund of Higher Education of Jiangsu Province, China; and Project (yx002001) supported by the Priority Academic Program Development of Jiangsu Higher Education Institutions, China.
Abstract: Private clouds and public clouds are merging into an open, integrated cloud computing environment, which can fully aggregate and utilize the computing, storage, information, and other hardware and software resources of WANs and LANs, but which also brings a series of security, reliability, and credibility problems. To solve these problems, a novel secure-agent-based trustworthy virtual private cloud model named SATVPC is proposed for the integrated and open cloud computing environment. Through the introduction of secure-agent technology, SATVPC provides an independent, safe, and trustworthy virtual private computing platform for multi-tenant systems. In order to meet the credibility needs of SATVPC and mandate the trust relationship between each task execution agent and task executor node according to their security policies, a new dynamic composite credibility evaluation mechanism is presented, including a credit index computing algorithm and a credibility differentiation strategy. The experimental system shows that SATVPC and the credibility evaluation mechanism can feasibly ensure the security of open computing environments. Experimental results and performance analysis also show that the credit index computing algorithm can evaluate the credibilities of task execution agents and task executor nodes quantitatively, correctly, and operationally.
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62171465, 62072303, 62272223, and U22A2031.
Abstract: By pushing computation, caching, and network control to the edge, mobile edge computing (MEC) is expected to play a leading role in fifth generation (5G) and future sixth generation (6G) networks. Nevertheless, facing ubiquitous, fast-growing computational demands, it is impossible for a single MEC paradigm to effectively support high-quality intelligent services at end user equipments (UEs). To address this issue, we propose an air-ground collaborative MEC (AGC-MEC) architecture in this article. The proposed AGC-MEC integrates all potentially available MEC servers in the air and on the ground in the envisioned 6G, via a variety of collaborative ways, to provide the best possible computation services for UEs. Firstly, we introduce the AGC-MEC architecture and elaborate three typical use cases. Then, we discuss four main challenges in AGC-MEC as well as their potential solutions. Next, we conduct a case study of collaborative service placement for AGC-MEC to validate the effectiveness of the proposed collaborative service placement strategy. Finally, we highlight several potential research directions for AGC-MEC.
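A collaborative service placement step can be illustrated with a simple greedy heuristic (purely a sketch; the capacity/latency model, the server names, and the demand-first ordering are assumptions, not the article's strategy): each service is placed on the feasible server, air or ground, with the lowest latency.

```python
def place_services(services, servers):
    """Greedy collaborative placement sketch.
    services: list of (name, demand) pairs;
    servers: dict name -> {"cap": capacity, "lat": latency}.
    Returns dict service name -> chosen server (or None)."""
    free = {s: info["cap"] for s, info in servers.items()}
    placement = {}
    # place the most demanding services first so they still fit
    for name, demand in sorted(services, key=lambda sv: -sv[1]):
        feasible = [s for s in servers if free[s] >= demand]
        if not feasible:
            placement[name] = None  # no server can host this service
            continue
        best = min(feasible, key=lambda s: servers[s]["lat"])
        free[best] -= demand
        placement[name] = best
    return placement
```

Even this toy version shows the collaborative benefit: a small low-latency aerial server absorbs light services while heavy ones fall back to larger ground servers.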
Funding: The work reported in this paper was supported by the Joint Research Project of Jiangsu Province under Grant No. BY2016026-04; the Opening Project of the State Key Laboratory for Novel Software Technology of Nanjing University under Grant No. KFKT2018B27; the National Natural Science Foundation for Young Scientists of China under Grant No. 61303263; and the Jiangsu Provincial Research Foundation for Basic Research (Natural Science Foundation) under Grant No. BK20150201.
Abstract: Anti-detection is becoming an emerging challenge for anti-phishing. This paper addresses the threat of anti-detection through the threshold setting condition: enough webpages are considered to complicate the threshold setting condition when the threshold is set. Since visual attention is easily drawn to the salient region of a webpage, image retrieval methods based on the texton correlation descriptor (TCD) are improved to obtain enough webpages whose images are similar in the salient region. There are two steps in improving TCD, which has the advantage of recognizing the salient region of images: (1) we propose a weighted Euclidean distance based on neighborhood location (NLW-Euclidean distance) and double cross windows, and combine them to solve the problems in TCD; (2) a space structure is introduced to map the image set into Euclidean space so that the similarity relations among images can be used to complicate the threshold setting conditions. Experimental results show that the proposed method can improve the effectiveness of anti-phishing, make the system more stable, and significantly reduce the possibility of the system being hacked and used for blockchain mining.
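The weighted-distance idea behind the NLW-Euclidean distance can be illustrated in its plainest form (ordinary weighted Euclidean distance between feature vectors; the paper's neighbourhood-location weighting is not modelled here): larger weights make the corresponding dimensions, such as salient-region features, count more towards dissimilarity.

```python
import math

def weighted_euclidean(u, v, weights):
    """Weighted Euclidean distance between two feature vectors:
    sqrt(sum_i w_i * (u_i - v_i)^2). A weight of 0 makes a
    dimension irrelevant; larger weights amplify it."""
    return math.sqrt(sum(w * (a - b) ** 2
                         for a, b, w in zip(u, v, weights)))
```

With uniform weights this reduces to the standard Euclidean distance; skewing the weights towards salient-region features is what lets retrieval favour webpages that look alike where users actually look.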