Abstract: The inter-city linkage heat data provided by Baidu Migration are used to characterize inter-city linkages, enabling a study of the network linkage characteristics and hierarchical structure of the Greater Bay Area urban agglomeration with social network analysis methods. This is the first application of location-based-service (LBS) big data to the study of urban agglomeration network structure, which offers a novel research perspective on the topic. The study reveals that the network linkage density of the Greater Bay Area urban agglomeration has reached 100%, indicating a mature networked spatial structure. This structure has given rise to three distinct communities: Shenzhen-Dongguan-Huizhou, Guangzhou-Foshan-Zhaoqing, and Zhuhai-Zhongshan-Jiangmen. Moreover, cities within the agglomeration play different roles, suggesting that differentiated development strategies may be needed to achieve staggered development. The study demonstrates that large datasets such as LBS data can offer novel insights and methods for examining urban agglomeration network structures, provided the data are mined and processed appropriately.
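The 100% density figure means every pair of the 11 Greater Bay Area cities has a measurable linkage, i.e. the binary network is complete. A minimal sketch of the density computation, with placeholder link weights rather than real Baidu Migration values:

```python
from itertools import combinations

# The 11 Greater Bay Area cities; link weights below are placeholders,
# not actual Baidu Migration heat values.
cities = ["Guangzhou", "Shenzhen", "Zhuhai", "Foshan", "Huizhou",
          "Dongguan", "Zhongshan", "Jiangmen", "Zhaoqing",
          "Hong Kong", "Macao"]

# Every city pair has a positive linkage, so the binary network is complete.
links = {pair: 1.0 for pair in combinations(cities, 2)}

def network_density(n_nodes, edges):
    """Share of realized ties among all possible undirected ties."""
    possible = n_nodes * (n_nodes - 1) / 2
    return len(edges) / possible

print(network_density(len(cities), links))  # -> 1.0, i.e. 100% density
```

With 11 nodes there are 55 possible ties, all of which are realized, hence the density of 100% reported in the study.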
Abstract: In this study, cylindrical sandstone samples were imaged by CT scanning, and pore structure images of the samples were analyzed and generated with the StyleGAN2-ADA generative adversarial network (GAN) model. First, nine small cylindrical cores with a diameter of 4 mm were drilled from sandstone samples with a diameter of 2.5 cm, and their CT scans were preprocessed. Because adjacent slices differ very little, using all slices directly may cause mode collapse during model training. To address this, one slice out of every 30 was selected as training data, and slice diversity was verified by computing the LPIPS values of the selected slices. The results showed that this one-in-30 selection strategy effectively improves the diversity of images generated by the model and avoids mode collapse. Through this process, a total of 295 discontinuous two-dimensional slices were obtained for the generation and segmentation analysis of sandstone pore structures. This study provides effective data support for accurate segmentation of porous-medium structures while improving the stability and diversity of generative adversarial networks under small-sample conditions.
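The slice-selection step reduces to a simple stride over the slice index. A minimal sketch (the slice count below is illustrative, not the study's per-core count; the diversity check would then score the kept slices pairwise with the LPIPS metric):

```python
def select_training_slices(n_slices, stride=30):
    """Keep one slice out of every `stride`; neighbouring CT slices are
    nearly identical, which can push a GAN toward mode collapse."""
    return list(range(0, n_slices, stride))

# Illustrative stack of 1000 slices from one core.
kept = select_training_slices(1000, stride=30)
print(len(kept))  # 34 slices kept out of 1000
```

The retained indices (0, 30, 60, ...) are then pooled across cores to form the decorrelated training set.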
Funding: The Hong Kong Polytechnic University, internal Grant No. G-YF51.
Abstract: In a sensor network with a large number of densely deployed sensor nodes, a single target of interest may be detected by multiple sensor nodes simultaneously. Data collected from the sensor nodes are usually highly correlated, so energy saving through in-network data fusion becomes possible. A traditional data fusion scheme starts by dividing the network into clusters and electing a sensor node as cluster head in each cluster. A cluster head is responsible for collecting data from all its cluster members, performing data fusion on these data, and transmitting the fused data to the base station. Assuming that a sensor node can handle only a single node-to-node transmission at a time and each transmission takes T time-slots, a cluster head with n cluster members needs at least nT time-slots to collect data from all its members. In this paper, a tree-based network structure and its formation algorithms are proposed. Simulation results show that the proposed network structure can greatly reduce the delay in data collection.
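The nT lower bound for a flat cluster, and the intuition for why a tree helps, can be sketched numerically. The tree model below is a hypothetical toy (a balanced tree in which transmissions at different subtrees overlap), not the paper's actual formation algorithm:

```python
def cluster_delay(n_members, T):
    """A cluster head receiving one transmission at a time needs at
    least n*T time-slots to hear from all n members."""
    return n_members * T

def balanced_tree_delay(n_members, T, fanout=2):
    # Hypothetical comparison: if data is forwarded up a balanced tree
    # and disjoint subtrees transmit concurrently, each node only ever
    # serves `fanout` children per level, so delay grows with
    # fanout * depth rather than with n. This toy model is an assumed
    # illustration, not the paper's exact scheme.
    depth, capacity = 0, 0
    while capacity < n_members:
        depth += 1
        capacity += fanout ** depth  # nodes added at this depth
    return fanout * depth * T

print(cluster_delay(63, 1))        # 63 time-slots for a flat cluster
print(balanced_tree_delay(63, 1))  # 12 time-slots in this toy model
```

Even this crude model shows the qualitative gap: collection delay that is linear in n for a flat cluster versus logarithmic in n for a tree.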
Funding: The Guangdong Province General Universities Young Innovative Talent Project (Grant No. 2023WQNCX122) and the Zhuhai Philosophy and Social Science Planning Project (Grant No. 2023YBB049).
Abstract: With the deepening of the Guangdong-Hong Kong-Macao Greater Bay Area strategy and the accelerating integration of the east and west banks of the Pearl River Estuary, Zhuhai's position as a hub is becoming increasingly prominent. Zhuhai has a dense water network and is divided into eastern and western urban areas by the Modaomen Waterway. Based on space syntax theory, this paper analyzes the urban spatial structure of Zhuhai, identifies the distribution characteristics of urban POIs, and provides theoretical support for the city's development.
Abstract: Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models depend heavily on the quality of incoming data streams. A primary challenge for Bayesian networks is their vulnerability to adversarial data poisoning attacks, in which malicious data is injected into the training dataset to degrade the learned models and impair their performance. In this paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. The framework uses latent variables to quantify the amount of belief between every two nodes in each causal model over time. Against four different forms of data poisoning attack, we aim to strengthen the security and dependability of Bayesian network structure learning techniques such as the PC algorithm, and we offer workable methods for identifying and mitigating these insidious threats. Our research also investigates a particular use case, the "Visit to Asia" network, to explore the practical consequences of using uncertainty to spot instances of data poisoning. The results demonstrate the efficacy of latent variables in detecting and mitigating data poisoning attacks, and the proposed latent-based framework proves sensitive in detecting malicious poisoning in streaming data.
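The idea of tracking belief between node pairs over a data stream can be illustrated with a deliberately crude stand-in: record, batch by batch, how often each pair is judged dependent, and flag pairs whose belief collapses. This sketch is a hypothetical drift test, not the paper's latent-variable framework or the PC algorithm:

```python
from collections import defaultdict

def edge_beliefs(batches):
    """Track, per batch, the running frequency with which each node
    pair is judged dependent ('an edge') -- a stand-in for belief
    between every two nodes over time."""
    counts = defaultdict(int)
    history = []
    for i, edges in enumerate(batches, 1):
        for e in edges:
            counts[e] += 1
        history.append({e: c / i for e, c in counts.items()})
    return history

def flag_poisoning(history, drop=0.5):
    """Flag pairs whose belief falls by more than `drop` between the
    first and last batch -- a crude drift heuristic for illustration."""
    first, last = history[0], history[-1]
    return [e for e in first if first[e] - last.get(e, 0.0) > drop]

# Toy stream: edge ('A','B') stays stable, while edge ('A','C')
# vanishes after batch 1, as injected contradictory data might cause.
batches = [[('A', 'B'), ('A', 'C')], [('A', 'B')], [('A', 'B')], [('A', 'B')]]
hist = edge_beliefs(batches)
print(flag_poisoning(hist))  # [('A', 'C')]
```

The paper's contribution is a principled version of this intuition: latent variables provide calibrated uncertainty over each pairwise dependence, so poisoning shows up as anomalous belief shifts rather than ad-hoc frequency drops.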
Abstract: Vibration monitoring by virtual sensing methods is well developed for linear time-invariant structures with limited sensors. However, few methods exist for Time-Varying (TV) structures, which are unavoidable in aerospace engineering. The core of vibration monitoring for TV structures is describing the TV structural dynamic characteristics accurately and efficiently. This paper proposes a new method using Long Short-Term Memory (LSTM) networks for Continuously Variable Configuration Structures (CVCSs), an important subclass of TV structures. The configuration parameters are used to represent the time-varying dynamic characteristics via the "freezing" method. The relationship between TV dynamic characteristics and vibration responses is established by the LSTM and, thanks to the time translation invariance of the LSTM, generalizes to estimating responses under unknown TV processes. A numerical example and a liquid-filled pipe experiment are used to test the proposed method. The results demonstrate that it accurately estimates the unmeasured responses of CVCSs, revealing the actual characteristics in both the time domain and the modal domain. Moreover, the average one-step estimation time is less than the sampling interval, so the method is promising for online estimation of the important responses of TV structures.
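The "freezing" method treats the structure at each instant as a frozen linear system whose modal parameters depend only on the current configuration parameter. A minimal sketch of that lookup, with a hypothetical table (e.g. pipe fill level versus first natural frequency; the values are assumptions, not data from the paper's experiment):

```python
from bisect import bisect_left

def frozen_frequency(config_param, table):
    """'Freezing' sketch: read the modal parameter of the momentarily
    frozen structure from the current configuration parameter, with
    linear interpolation between tabulated configurations."""
    params = [p for p, _ in table]
    i = bisect_left(params, config_param)
    if i == 0:
        return table[0][1]          # clamp below the table
    if i == len(table):
        return table[-1][1]         # clamp above the table
    (p0, f0), (p1, f1) = table[i - 1], table[i]
    w = (config_param - p0) / (p1 - p0)
    return f0 + w * (f1 - f0)

# Hypothetical table: configuration parameter (fill level, 0..1)
# versus first natural frequency in Hz.
table = [(0.0, 50.0), (0.5, 40.0), (1.0, 35.0)]
print(frozen_frequency(0.25, table))  # 45.0 Hz
```

In the paper's method the LSTM plays the role of this table, learning the map from configuration parameters (plus measured responses) to the frozen dynamics, which is what lets it generalize to unseen TV processes.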
Funding: Supported by the National Natural Science Foundation of China (Research on Dynamic Location of Receiving Points and Wave Field Separation Technology Based on Deep Learning in OBN Seismic Exploration, No. 42074140) and the Sinopec Geophysical Corporation, Project of OBC/OBN Seismic Data Wave Field Characteristics Analysis and Ghost Wave Suppression (No. SGC-202206).
Abstract: Blended acquisition technology in marine seismic exploration offers high acquisition efficiency and low exploration costs. During acquisition, however, the primary source may be disturbed by adjacent sources, producing blended noise that can adversely affect data processing and interpretation. De-blending methods are therefore needed to suppress blended noise and improve the quality of subsequent processing. Conventional de-blending methods, such as denoising and inversion methods, face challenges in parameter selection and entail high computational costs. In contrast, deep learning-based de-blending methods rely less on manual intervention and compute rapidly once trained. In this study, we propose a Uformer network with a non-overlapping window multi-head attention mechanism for de-blending blended data in the common shot domain. We add depthwise convolution to the feed-forward network to improve the Uformer's ability to capture local context, and the loss function combines SSIM and L1 losses. Our test results indicate that the Uformer outperforms convolutional neural networks and traditional denoising methods across various evaluation metrics, highlighting its effectiveness and advantages in de-blending blended data.
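A composite SSIM + L1 loss of the kind described can be sketched as follows. This is a simplified stand-in: `global_ssim` computes SSIM over the whole signal rather than the standard windowed image version, and the weighting `alpha` is a hypothetical choice, since the paper's exact weights are not given here:

```python
from statistics import mean

def l1_loss(x, y):
    """Mean absolute error between two equal-length signals."""
    return mean(abs(a - b) for a, b in zip(x, y))

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM over whole signals (simplified stand-in for
    the standard windowed image SSIM)."""
    mx, my = mean(x), mean(y)
    vx = mean((a - mx) ** 2 for a in x)
    vy = mean((b - my) ** 2 for b in y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def deblend_loss(pred, target, alpha=0.5):
    # alpha is a hypothetical weighting between the structural term
    # (1 - SSIM) and the pointwise L1 term.
    return alpha * (1.0 - global_ssim(pred, target)) + \
           (1 - alpha) * l1_loss(pred, target)

print(deblend_loss([0.1, 0.5, 0.9], [0.1, 0.5, 0.9]))  # 0.0 for identical signals
```

The design intent is complementary: L1 penalizes per-sample amplitude errors, while the SSIM term rewards preserving local structure, which matters for coherent seismic events.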
Funding: The National Natural Science Foundation of China (No. 61976091).
Abstract: Automatic segmentation of ischemic stroke lesions from computed tomography (CT) images is of great significance for identifying and treating this life-threatening condition. Beyond low image contrast, segmentation is challenged by complex changes in the appearance of the stroke area and by the difficulty of obtaining image data. Because stroke data and labels are hard to obtain, a data enhancement algorithm for one-shot medical image segmentation, based on data augmentation with learned transformations, was proposed to enlarge the dataset for more accurate segmentation. A deep convolutional neural network for stroke lesion segmentation, called structural similarity with light U-structure (USSL) Net, was also proposed. We embedded a convolution module combining switchable normalization, multi-scale convolution, and dilated convolution in the network for better segmentation performance. In addition, exploiting the strong structural similarity between multi-modal stroke CT images, USSL Net adopts the correlation-maximized structural similarity loss (SSL) function to learn the varying shapes of the lesions. The experimental results show three things. First, data produced by our enhancement algorithm yields better results than data segmented directly from the multi-modal images. Second, our network model outperforms other models on stroke segmentation tasks. Third, using SSL as the loss function improves segmentation accuracy more than the cross-entropy loss function.
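Dilated convolution, one ingredient of the embedded module, enlarges the receptive field by spacing kernel taps `dilation` samples apart without adding weights. A minimal 1-D sketch of the operation (illustrative only; the network itself uses 2-D dilated convolutions):

```python
def dilated_conv1d(signal, kernel, dilation=2):
    """1-D dilated convolution with valid padding: kernel taps are
    spaced `dilation` samples apart, so a 3-tap kernel with dilation 2
    covers a span of 5 input samples."""
    span = (len(kernel) - 1) * dilation
    return [
        sum(kernel[k] * signal[i + k * dilation] for k in range(len(kernel)))
        for i in range(len(signal) - span)
    ]

x = [1, 2, 3, 4, 5, 6]
print(dilated_conv1d(x, [1, 0, -1], dilation=2))  # [-4, -4]
```

Stacking such layers with increasing dilation lets the network aggregate context over large lesion-scale neighbourhoods while keeping the parameter count small.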
Funding: This work was supported by the Hainan Provincial Natural Science Foundation of China (620RC560, 2019RC096, 620RC562), the Scientific Research Setup Fund of Hainan University (KYQD(ZR)1877), the National Natural Science Foundation of China (62162021, 82160345, 61802092), the Key Research and Development Program of Hainan Province (ZDYF2020199, ZDYF2021GXJS017), and the Key Science and Technology Plan Project of Haikou (2011-016).
Abstract: As a critical infrastructure of cloud computing, data center networks (DCNs) directly determine the service performance of data centers, which provide computing services for applications such as big data processing and artificial intelligence. However, current DCN architectures suffer from long routing paths and low fault tolerance between source and destination servers, making it hard to satisfy the requirements of high-performance data center networks. Based on dual-port servers and the Clos network structure, this paper proposes a novel architecture, RClos, for constructing high-performance data center networks. Logically, the architecture is built by inserting a dual-port server into each pair of adjacent switches in the switch fabric, where the switches are connected in a ring Clos structure. We describe the structural properties of RClos in terms of network scale, bisection bandwidth, and network diameter. RClos inherits the characteristics of its embedded Clos network and can accommodate a large number of servers with a small average path length. The architecture also embraces high fault tolerance, which suits the construction of various data center networks. For example, the average path length between servers is 3.44 and the standardized bisection bandwidth is 0.8 in RClos(32,5). Numerical experiments show that RClos enjoys a small average path length and high network fault tolerance, both essential for building high-performance data center networks.
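Metrics like the 3.44 average path length come from all-pairs shortest paths over the topology graph. A minimal sketch of that computation via BFS, run here on a plain ring of switches as a stand-in topology (building the actual RClos fabric, with servers interleaved between adjacent switches, is beyond this sketch):

```python
from collections import deque

def build_ring(n):
    """Toy ring of n switches; RClos itself interleaves dual-port
    servers into a ring Clos fabric, which this sketch does not model."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def average_path_length(adj):
    """Mean shortest-path length over all ordered node pairs, via BFS
    from every node (edges are unweighted)."""
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for v, d in dist.items():
            if v != src:
                total += d
                pairs += 1
    return total / pairs

print(average_path_length(build_ring(8)))  # ~2.29 for an 8-switch ring
```

Substituting the RClos(32,5) adjacency for `build_ring` would reproduce the paper's figures; the BFS machinery is identical for any server-centric topology.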