Funding: Supported by the National Key R&D Program of China [Grant No. 2023YFF0713600]; the National Natural Science Foundation of China [Grant No. 62275062]; the Project of Shandong Innovation and Startup Community of High-end Medical Apparatus and Instruments [Grant Nos. 2023-SGTTXM-002 and 2024-SGTTXM-005]; the Shandong Province Technology Innovation Guidance Plan (Central Leading Local Science and Technology Development Fund) [Grant No. YDZX2023115]; the Taishan Scholar Special Funding Project of Shandong Province; and the Shandong Laboratory of Advanced Biomaterials and Medical Devices in Weihai [Grant No. ZL202402].
Abstract: Photoacoustic computed tomography is a novel imaging technique that combines high absorption contrast with deep tissue penetration, enabling comprehensive three-dimensional imaging of biological targets. However, the growing demand for higher resolution and real-time imaging produces very large data volumes, limiting the storage, transmission, and processing efficiency of the system. There is therefore an urgent need for an effective method to compress the raw data without compromising image quality. This paper presents a photoacoustic computed tomography 3D data compression method and system based on a Wavelet-Transformer. The method builds on a cooperative compression framework that integrates wavelet hard coding with deep-learning-based soft decoding, combining the multiscale analysis capability of wavelet transforms with the global feature modeling strength of Transformers to achieve high-quality data compression and reconstruction. Experimental results using k-Wave simulation suggest that the proposed compression system performs well under extreme compression conditions, achieving a raw data compression ratio of up to 1:40. Furthermore, a three-dimensional data compression experiment on in vivo mouse data showed that the maximum peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) of the reconstructed images reached 38.60 and 0.9583, respectively, effectively overcoming the detail loss and artifacts introduced by raw data compression. These results suggest that the proposed system can significantly reduce storage requirements and hardware cost while enhancing computational efficiency and image quality, supporting the development of photoacoustic computed tomography toward higher efficiency, real-time performance, and intelligent functionality.
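The wavelet hard-coding stage described above can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a one-level Haar transform and hard thresholding on a 1D signal, the Transformer-based soft decoder is omitted, and the function names (`haar_1d`, `compress`, `psnr`) are illustrative.

```python
import numpy as np

def haar_1d(x):
    # One-level 1D Haar transform: approximation (a) and detail (d) bands
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def inverse_haar_1d(a, d):
    # Exact inverse of haar_1d
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def compress(x, keep=0.1):
    # "Hard coding": keep only the largest `keep` fraction of coefficients
    a, d = haar_1d(x)
    coeffs = np.concatenate([a, d])
    k = max(1, int(keep * coeffs.size))
    thresh = np.sort(np.abs(coeffs))[-k]
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return coeffs

def psnr(ref, rec):
    # Peak signal-to-noise ratio in dB
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

t = np.linspace(0.0, 1.0, 256)
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 5 * t) + 0.05 * rng.normal(size=256)
c = compress(sig, keep=0.25)
n = sig.size // 2
rec = inverse_haar_1d(c[:n], c[n:])
print(round(float(psnr(sig, rec)), 2))
```

With `keep=1.0` the round trip is lossless; shrinking `keep` trades reconstruction quality for compression ratio, which is the knob the soft decoder would then compensate for.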
Abstract: This paper focuses on the integration and data transformation between GPS and the total station. It emphasizes how to transform WGS84 Cartesian coordinates into local two-dimensional plane coordinates and the orthometric height. A GPS receiver, total station, radio, notebook computer, and the corresponding software work together to form a new surveying system, the super-totalstation positioning system (SPS), and a new model for terrestrial surveying. With the help of this system, the positions of detail points can be measured.
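The coordinate transformation described above can be sketched as follows. This is the generic ECEF-to-local-ENU rotation about a known reference station, not the paper's full SPS pipeline; the geoid model needed for orthometric height is omitted, and the function name is hypothetical.

```python
import numpy as np

def ecef_to_enu(xyz, ref_xyz, lat0_deg, lon0_deg):
    """Rotate a WGS84 ECEF Cartesian vector into a local east-north-up (ENU)
    frame centred on a reference station at geodetic latitude/longitude
    lat0, lon0 (standard ECEF->ENU rotation matrix)."""
    lat0 = np.radians(lat0_deg)
    lon0 = np.radians(lon0_deg)
    R = np.array([
        [-np.sin(lon0),                np.cos(lon0),               0.0],
        [-np.sin(lat0) * np.cos(lon0), -np.sin(lat0) * np.sin(lon0), np.cos(lat0)],
        [ np.cos(lat0) * np.cos(lon0),  np.cos(lat0) * np.sin(lon0), np.sin(lat0)],
    ])
    return R @ (np.asarray(xyz, dtype=float) - np.asarray(ref_xyz, dtype=float))

# A point 10 m east of a reference on the equator/prime meridian
# maps to E ~ 10, N ~ 0, U ~ 0
enu = ecef_to_enu([6378137.0, 10.0, 0.0], [6378137.0, 0.0, 0.0], 0.0, 0.0)
print(np.round(enu, 6))
```

The E and N components are the local two-dimensional plane coordinates; converting the U component to an orthometric height additionally requires a geoid undulation model, which is outside this sketch.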
Funding: Supported by the National Natural Science Foundation of China under Grants 62471493 and 62402257; partially supported by the Natural Science Foundation of Shandong Province under Grants ZR2023LZH017, ZR2024MF066 and 2023QF025; the Open Research Subject of the State Key Laboratory of Intelligent Game (No. ZBKF-24-12); the Foundation of the Key Laboratory of Education Informatization for Nationalities (Yunnan Normal University), Ministry of Education (No. EIN2024C006); and the Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance of MOE (No. 202306).
Abstract: As Internet of Things (IoT) technologies continue to evolve at an unprecedented pace, intelligent big data control and information systems have become critical enablers of organizational digital transformation, facilitating data-driven decision making, fostering innovation ecosystems, and maintaining operational stability. In this study, we propose an advanced deployment algorithm for Service Function Chaining (SFC) that leverages an enhanced Practical Byzantine Fault Tolerance (PBFT) mechanism. The main goal is to tackle the issues of security and resource efficiency in SFC deployment across diverse network settings. By integrating blockchain technology and Deep Reinforcement Learning (DRL), our algorithm not only optimizes resource utilization and quality of service but also ensures robust security during SFC deployment. Specifically, the enhanced PBFT consensus mechanism (VRPBFT) significantly reduces consensus latency and improves Byzantine node detection through the introduction of a Verifiable Random Function (VRF) and a node reputation grading model. Experimental results demonstrate that, compared with traditional PBFT, the proposed VRPBFT algorithm reduces consensus latency by approximately 30% and decreases the proportion of Byzantine nodes by 40% after 100 rounds of consensus. Furthermore, the DRL-based SFC deployment algorithm (SDRL) converges rapidly during training, with improvements in long-term average revenue, request acceptance rate, and revenue/cost ratio of 17%, 14.49%, and 20.35%, respectively, over existing algorithms. Additionally, the CPU resource utilization of the SDRL algorithm reaches up to 42%, which is 27.96% higher than that of other algorithms. These findings indicate that the proposed algorithm substantially enhances resource utilization efficiency, service quality, and security in SFC deployment.
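The VRF-plus-reputation primary selection idea can be sketched in a deliberately simplified form. This is an assumption-laden illustration, not the VRPBFT algorithm itself: a SHA-256 hash stands in for a real VRF (which would also emit a publicly verifiable proof), and the node names and reputation grades are made up.

```python
import hashlib

def pseudo_vrf(secret: bytes, seed: bytes) -> float:
    """Hash-based stand-in for a Verifiable Random Function: maps a node's
    secret and a per-round seed to a deterministic pseudo-random value in
    [0, 1). A real VRF additionally produces a proof others can check."""
    h = hashlib.sha256(secret + seed).digest()
    return int.from_bytes(h[:8], "big") / 2**64

def select_primary(nodes, reputation, seed: bytes):
    """Choose the consensus primary for a round: each node's VRF draw is
    scaled by its reputation grade, so poorly graded (possibly Byzantine)
    nodes are rarely selected. Highest score wins."""
    scores = {n: reputation[n] * pseudo_vrf(n.encode(), seed) for n in nodes}
    return max(scores, key=scores.get)

nodes = ["n1", "n2", "n3", "n4"]
reputation = {"n1": 0.9, "n2": 0.5, "n3": 0.1, "n4": 0.8}  # graded reputations
print(select_primary(nodes, reputation, b"round-42"))
```

Because the draw is deterministic in (secret, seed), every honest node computes the same primary for a given round without an extra message exchange, which is the latency-saving property the abstract attributes to the VRF.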
Funding: The National Natural Science Foundation of China under contract Nos. 42176011 and 61931025; the Fundamental Research Funds for the Central Universities of China under contract No. 24CX03001A.
Abstract: Efficient and accurate prediction of ocean surface latent heat fluxes is essential for understanding and modeling climate dynamics. Conventional estimation methods have low resolution and limited accuracy. The transformer model, with its self-attention mechanism, effectively captures long-range dependencies. However, due to the non-linearity and uncertainty of the underlying physical processes, the transformer model suffers from error accumulation, leading to a degradation of accuracy over time. To solve this problem, we combine the Data Assimilation (DA) technique with the transformer model, continuously correcting the model state to bring it closer to the actual observations. In this paper, we propose a deep learning model called TransNetDA, which integrates a transformer, a convolutional neural network, and DA methods. By combining data-driven and DA methods for spatiotemporal prediction, TransNetDA effectively extracts multi-scale spatial features and significantly improves prediction accuracy. The experimental results indicate that TransNetDA surpasses traditional techniques in terms of root mean square error and R2 metrics, showcasing its superior performance in predicting latent heat fluxes at the ocean surface.
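The DA correction step described above can be illustrated with a minimal, generic sketch. This is a scalar optimal-interpolation (Kalman-style) analysis step, not the actual TransNetDA update; the variable names and error variances are illustrative assumptions.

```python
import numpy as np

def da_update(forecast, obs, sigma_f, sigma_o):
    """Blend a model forecast with observations. The gain K weights the
    correction by the relative error variances of forecast (sigma_f) and
    observation (sigma_o): a scalar optimal-interpolation analysis step."""
    K = sigma_f**2 / (sigma_f**2 + sigma_o**2)
    return forecast + K * (obs - forecast)

# Toy example: forecast latent heat fluxes (W/m^2) corrected by observations
forecast = np.array([120.0, 135.0, 150.0])
obs      = np.array([118.0, 140.0, 146.0])
analysis = da_update(forecast, obs, sigma_f=4.0, sigma_o=2.0)
print(analysis)  # → [118.4 139.  146.8]
```

With `sigma_f=4` and `sigma_o=2`, K = 0.8, so the analysis sits 80% of the way from forecast to observation; trusting the observations more (smaller `sigma_o`) pulls the model state closer to them, which is how repeated analyses keep prediction errors from accumulating.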
Abstract: With the rapid growth of biomedical data, particularly multi-omics data including genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing multi-omics data owing to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across omics data. Deep learning has been found to be effective in illness classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions, including combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for cross-disciplinary collaboration to advance deep learning-based multi-omics research for precision medicine and the understanding of complicated disorders.
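The autoencoder idea mentioned above can be sketched minimally. This is a toy linear autoencoder trained by gradient descent on synthetic low-rank "omics-like" data, purely to illustrate representation compression; it is not any specific model from the surveyed literature, and all sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "omics" matrix: 100 samples x 20 features with a 3-dim latent structure
Z_true = rng.normal(size=(100, 3))
X = Z_true @ rng.normal(size=(3, 20))

# Linear autoencoder: encoder W_e (20 -> 3), decoder W_d (3 -> 20)
W_e = rng.normal(scale=0.1, size=(20, 3))
W_d = rng.normal(scale=0.1, size=(3, 20))

lr = 1e-3
losses = []
for _ in range(500):
    Z = X @ W_e               # encode to a 3-dim representation
    X_hat = Z @ W_d           # decode back to feature space
    err = X_hat - X
    losses.append(float(np.mean(err**2)))
    # Gradient descent on the squared reconstruction error
    g_d = Z.T @ err / X.shape[0]
    g_e = X.T @ (err @ W_d.T) / X.shape[0]
    W_d -= lr * g_d
    W_e -= lr * g_e

print(losses[0], "->", losses[-1])  # reconstruction error decreases
```

The learned 3-dim code `Z` is the compressed representation that downstream tasks (e.g. illness classification or biomarker screening) would consume; nonlinear and variational variants replace the matrix products with deep networks but keep the same encode/decode structure.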
Abstract: On January 23, 2026, the second plenary session of the China Expo Forum for International Cooperation (CEFCO) convened in Wuhan, Hubei Province. Amid significant changes in the global exhibition landscape, participants shared recent developments, key data, and future trends in major markets, offering guidance for industry practitioners to drive progress.
Funding: Sponsored by the National Natural Science Foundation of China (Nos. 41304097 and 41664006); the Natural Science Foundation of Jiangxi Province (No. 20151BAB203044); the China Scholarship Council (No. 201508360061); and the Distinguished Young Talent Foundation of Jiangxi Province (2017).
Abstract: Seismic data contain random noise and are affected by irregular subsampling. At present, most data reconstruction methods are carried out separately from noise suppression, and most are not ideal for noisy data. In this paper, we choose the multiscale and multidirectional 2D curvelet transform to perform simultaneous data reconstruction and noise suppression of 3D seismic data. We introduce the POCS algorithm, an exponentially decreasing square-root threshold, and a soft-threshold operator to interpolate the data at each time slice, together with a weighting strategy to reduce noise in the reconstructed data. On this basis, we propose a 3D simultaneous data reconstruction and noise suppression method based on the curvelet transform. Compared with data reconstruction followed by denoising, and with the Fourier transform, the proposed method is more robust and effective. The proposed method has important implications for data acquisition in complex areas and for reconstructing missing traces.
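The POCS loop with a decreasing threshold can be sketched as follows. For brevity this hedged illustration substitutes a 1D FFT for the paper's 2D curvelet transform; the exponential threshold schedule, the soft-threshold operator, and the re-insertion of observed samples follow the general POCS recipe, with purely illustrative parameter choices.

```python
import numpy as np

def pocs_interpolate(data, mask, n_iter=50):
    """POCS reconstruction of irregularly subsampled data.
    `mask` is 1.0 at observed samples and 0.0 at gaps."""
    rec = data * mask
    t_max = np.max(np.abs(np.fft.fft(rec)))
    for k in range(n_iter):
        c = np.fft.fft(rec)
        # Exponentially decreasing threshold schedule
        thresh = t_max * np.exp(-3.0 * k / (n_iter - 1))
        # Soft-threshold operator (complex shrinkage toward zero)
        mag = np.maximum(np.abs(c), 1e-12)
        c *= np.maximum(0.0, 1.0 - thresh / mag)
        est = np.real(np.fft.ifft(c))
        # Project back onto the data constraint: keep observed samples exact
        rec = data * mask + est * (1.0 - mask)
    return rec

t = np.linspace(0.0, 1.0, 200, endpoint=False)
clean = np.sin(2 * np.pi * 7 * t)
rng = np.random.default_rng(1)
mask = (rng.random(200) > 0.3).astype(float)  # roughly 30% of samples missing
rec = pocs_interpolate(clean, mask)
```

Each iteration alternates a sparsity projection (threshold in the transform domain) with a data-consistency projection (restore observed samples), so the gaps are filled by whatever sparse model survives the shrinking threshold.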
Abstract: Receiver ghost reflections adversely affect variable-depth streamer (VDS) data acquisition. In addition, the frequency notches caused by interference between receiver ghosts and primary waves strongly affect seismic data processing and imaging. We developed a high-resolution Radon transform algorithm and used it to predict receiver ghosts from VDS data; the predicted ghost reflections are then subtracted from the raw data. We propose a forward Radon transform operator for VDS data in the frequency domain and, based on the ray paths of the receiver ghosts, an inverse Radon transform operator. We apply the proposed methodology to model and field data with good results. Matching and subtraction modules of commercially available seismic data processing software are used to remove the receiver ghosts. The frequency notches are compensated, and the effective frequency bandwidth of the seismic data is broadened.
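A frequency-domain forward linear Radon operator (slant stack) can be sketched as follows. This is the textbook operator over a flat-geometry toy gather, not the authors' high-resolution VDS operator; the event, grids, and names are illustrative assumptions.

```python
import numpy as np

def forward_radon_freq(d, t, x, p):
    """Frequency-domain forward linear Radon (slant stack): for each
    frequency f, m(f, p) = sum over offsets x of d(f, x) * exp(2i*pi*f*p*x),
    so energy along t = tau + p0*x focuses at slowness p = p0."""
    nt = len(t)
    freqs = np.fft.rfftfreq(nt, d=t[1] - t[0])
    D = np.fft.rfft(d, axis=0)                                  # (nf, nx)
    phase = np.exp(2j * np.pi
                   * freqs[:, None, None] * x[None, :, None] * p[None, None, :])
    M = np.einsum("fx,fxp->fp", D, phase)                       # (nf, np)
    return np.fft.irfft(M, n=nt, axis=0)                        # back to (tau, p)

# Toy gather: one linear event, intercept 0.1 s, slowness p0 = 0.2 s/km
dt, nx, nt = 0.004, 16, 128
t = np.arange(nt) * dt
x = np.linspace(0.0, 1.5, nx)          # offsets in km
p0 = 0.2
d = np.zeros((nt, nx))
for ix, off in enumerate(x):
    d[int(round((0.1 + p0 * off) / dt)), ix] = 1.0
p = np.linspace(0.0, 0.4, 41)          # trial slownesses
m = forward_radon_freq(d, t, x, p)
best = p[np.argmax(np.sum(m**2, axis=0))]
print(best)  # energy focuses at the trial slowness closest to p0
```

Ghost prediction then amounts to applying phase shifts along the ghost ray paths in this (f, p) domain before transforming back, which is where the VDS-specific inverse operator of the paper would come in.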