During drilling operations, the low resolution of seismic data often limits the accurate characterization of small-scale geological bodies near the borehole and ahead of the drill bit. This study investigates high-resolution seismic data processing technologies and methods tailored to drilling scenarios. The high-resolution processing of seismic data is divided into three stages: pre-drilling processing, post-drilling correction, and while-drilling updating. By integrating seismic data from different stages, spatial ranges, and frequencies, together with information from drilled wells and while-drilling data, and applying artificial intelligence modeling techniques, a progressive high-resolution seismic data processing technology based on multi-source information fusion is developed that performs simple and efficient updates of seismic information during drilling. Case studies show that, as multi-source information is gradually integrated, the resolution and accuracy of the seismic data improve significantly and thin-bed weak reflections are imaged more clearly. The seismic information updated while drilling is highly valuable for predicting geological bodies ahead of the drill bit. Validation against logging, mud logging, and drilling engineering data confirms the fidelity of the high-resolution processing results. The method provides clearer and more accurate stratigraphic information for drilling operations, enhancing both drilling safety and efficiency.
Seismic data in seismic exploration contain little low- and high-frequency information, resulting in a narrow bandwidth and low seismic resolution, which considerably restricts the prediction accuracy of thin reservoirs and thin interbeds. This study proposes a novel method that constrains the improvement of seismic resolution in both the time and frequency domains. In the frequency domain, an expected wavelet spectrum is used to broaden the seismic spectrum and increase the number of octaves. In the time domain, Frobenius-vector regularization of the Hessian matrix is used to constrain the horizontal continuity of the seismic data, which effectively protects the signal-to-noise ratio while the vertical seismic resolution is improved. The method is applied separately to real post-stack seismic data and pre-stack gathers. Without altering the phase characteristics of the original seismic data, the time resolution is significantly improved and structural features become clearer. Compared with traditional spectral simulation and deconvolution methods, the frequency distribution is more reasonable and the seismic data have higher resolution.
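As a rough illustration of the frequency-domain step described above, the sketch below reshapes a trace's amplitude spectrum toward an assumed "expected wavelet" spectrum while preserving the original phase. The band limits, smoothing window, and gain cap are illustrative assumptions, not the paper's values.

```python
import numpy as np

def shape_spectrum(trace, dt, f1=5.0, f2=90.0, max_gain=10.0, eps=1e-8):
    """Reshape one trace's amplitude spectrum toward a flat 'expected wavelet'
    band [f1, f2] Hz while keeping the original phase (placeholder target)."""
    n = trace.size
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    amp = np.abs(spec)
    # Assumed expected-wavelet amplitude spectrum: flat in-band, small floor outside.
    target = np.where((freqs >= f1) & (freqs <= f2), 1.0, 0.05)
    # Smooth the measured spectrum so the gain corrects the envelope, not every notch.
    kernel = np.ones(11) / 11.0
    amp_smooth = np.convolve(amp, kernel, mode="same")
    # Real-valued gain, clipped to avoid boosting out-of-band noise.
    gain = np.clip(target * amp_smooth.max() / (amp_smooth + eps), 0.0, max_gain)
    return np.fft.irfft(gain * spec, n=n)
```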
A novel universal preprocessing method is proposed for estimating angles of arrival, applicable to one- or two-dimensional high-resolution processing based on arbitrary centro-symmetric arrays (such as uniform linear arrays, equally spaced rectangular planar arrays, and symmetric circular arrays). By mapping the complex signal space into a real one, the new method effectively reduces the computation required by signal-subspace direction-finding techniques without any performance degradation. In addition, the new preprocessing scheme itself can decorrelate coherent signals received at the array. For regular array geometries such as uniform linear arrays and equally spaced rectangular planar arrays, the popular spatial smoothing preprocessing technique can be combined with the new approach to improve the decorrelating ability. Simulation results confirm these conclusions.
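The mapping from the complex signal space to a real one is commonly realized with a sparse unitary transformation for centro-symmetric arrays. The sketch below shows that standard construction together with forward-backward averaging, which also decorrelates a pair of coherent signals; it is a generic illustration, not necessarily the paper's exact mapping.

```python
import numpy as np

def unitary_matrix(n):
    """Sparse unitary matrix Q_n used to map centro-symmetric array data to real values."""
    m = n // 2
    I, J = np.eye(m), np.fliplr(np.eye(m))
    if n % 2 == 0:
        top = np.hstack([I, 1j * I])
        bot = np.hstack([J, -1j * J])
        return np.vstack([top, bot]) / np.sqrt(2)
    mid = np.zeros((1, 2 * m + 1), dtype=complex)
    mid[0, m] = np.sqrt(2)
    top = np.hstack([I, np.zeros((m, 1)), 1j * I])
    bot = np.hstack([J, np.zeros((m, 1)), -1j * J])
    return np.vstack([top, mid, bot]) / np.sqrt(2)

def real_covariance(X):
    """Real-valued covariance of snapshots X (sensors x snapshots); the implicit
    forward-backward averaging also decorrelates a pair of coherent signals."""
    n = X.shape[0]
    Q = unitary_matrix(n)
    R = X @ X.conj().T / X.shape[1]
    J = np.fliplr(np.eye(n))
    R_fb = 0.5 * (R + J @ R.conj() @ J)     # forward-backward average
    return np.real(Q.conj().T @ R_fb @ Q)   # real symmetric matrix for subspace methods
```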
A profile of shallow crustal velocity structure (1–2 km) can greatly enhance interpretation of the sedimentary environment and shallow tectonic deformation. Recent advances in surface wave tomography, using ambient noise data recorded with high-density seismic arrays, have improved the understanding of regional crustal structure. As interest in detailed imaging of shallow crustal structure has increased, dense seismic array methods have become increasingly efficient. This study used a high-density seismic array deployed in the Xinjiang basin in southeastern China to record seismic data, which were then processed with the ambient noise tomography method. The array contained 203 short-period seismometers spaced at short intervals (~400 m) and collected continuous records of ambient noise for 32 days. Data preprocessing, cross-correlation calculation, and Rayleigh surface wave phase-velocity dispersion curve extraction yielded more than 16,000 dispersion curves, which were then analyzed using the direct-inversion method. Checkerboard tests indicate that the shear wave velocity is recovered in the study area at depths of 0–1.4 km, with a lateral image resolution of ~400 m. Model tests show that the array effectively images a 50 m thick slab at a depth of 0–300 m, a 150 m thick anomalous body at a depth of 300–600 m, and a 400 m thick anomalous body at a depth of 0.6–1.4 km. The shear wave velocity profile reveals features very similar to those detected by a deep seismic reflection profile across the study area. This demonstrates that analysis of shallow crustal velocity structure provides high-resolution imaging of crustal features, and that ambient noise tomography with a high-density seismic array can play an important role in imaging shallow crustal structure.
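A minimal sketch of the cross-correlation step in ambient noise processing is given below, assuming two equal-length, pre-processed noise records from a station pair; spectral whitening is one common choice and may differ from the exact workflow used in the study.

```python
import numpy as np

def noise_cross_correlation(u1, u2, dt, max_lag_s=5.0):
    """Frequency-domain cross-correlation of two pre-processed noise records;
    stacking many such correlations approximates the inter-station Green's function."""
    n = len(u1)
    nfft = 2 * n
    S = np.fft.rfft(u1, nfft) * np.conj(np.fft.rfft(u2, nfft))
    # Spectral whitening so no single frequency band dominates the correlation.
    S /= (np.abs(S) + 1e-12)
    cc = np.fft.irfft(S, nfft)
    cc = np.concatenate([cc[-n + 1:], cc[:n]])   # reorder to lags -(n-1) .. (n-1)
    lags = np.arange(-n + 1, n) * dt
    keep = np.abs(lags) <= max_lag_s
    return lags[keep], cc[keep]
```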
In recent years, more and more multi-array logging tools, such as the array induction and array laterolog tools, have been applied in place of conventional logging tools, providing increased resolution, better radial and vertical sounding capability, and other benefits. Multi-array logging tools acquire several times more individual measurements than conventional tools. In addition to the new information contained in these data, there is a certain redundancy among the measurements; together, the measurements compose a large matrix. Provided the measurements are error-free, the elements of this matrix show certain consistencies. Taking advantage of these consistencies, an innovative method is developed to detect and correct errors in the raw measurements of array resistivity logging tools and to evaluate data quality. The method can be described in several steps. First, data consistency patterns are identified based on the physics of the measurements. Second, the measurements are compared against the consistency patterns to detect errors and bad data. Third, the erroneous data are eliminated and the measurements are reconstructed according to the consistency patterns. Finally, data quality is evaluated by comparing the raw measurements with the reconstructed measurements. The method can be applied to all array-type logging tools, such as the array induction tool and the array resistivity tool. This paper describes the method and illustrates its application with the High Definition Lateral Log (HDLL, Baker Atlas) instrument. To demonstrate the efficiency of the method, several field examples are shown and discussed.
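The consistency patterns in the paper are physics-based and tool-specific and are not reproduced here; as a generic stand-in, the sketch below flags and repairs outliers in a measurement matrix with a low-rank reconstruction and returns a simple quality score.

```python
import numpy as np

def detect_and_repair(M, rank=3, n_sigma=4.0):
    """Flag and repair outliers in an array-log measurement matrix M
    (depth samples x sub-array curves) using a low-rank consistency model
    as a stand-in for physics-based consistency patterns."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    M_fit = U[:, :rank] * s[:rank] @ Vt[:rank]        # consistent reconstruction
    resid = M - M_fit
    bad = np.abs(resid) > n_sigma * resid.std()       # error / bad-data detection
    repaired = np.where(bad, M_fit, M)                # replace flagged values
    quality = 1.0 - bad.mean()                        # fraction of consistent samples
    return repaired, bad, quality
```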
To improve the resolution and accuracy of Direct Position Determination (DPD), this paper investigates the problem of directly positioning multiple emitters with a single moving Rotating Linear Array (RLA). First, the geometry of the RLA is formulated and analyzed, the noncoherent signals intercepted in multiple interception intervals are modeled accordingly, and a Multiple Signal Classification (MUSIC)-based noncoherent DPD approach is proposed. Second, synchronous coherent pulse signals are considered and formulated separately, and a coherent DPD approach for localizing this special type of signal is presented by stacking all array responses from the different interception intervals. In addition, the constrained Cramer-Rao Lower Bound (CRLB) is derived for both noncoherent and coherent DPD with an RLA under the constraint that the altitudes of the emitters are known. Finally, computer simulations examine the performance of the proposed approaches. The results demonstrate that the localization accuracy and resolution of DPD with a single moving linear array can be significantly improved by array rotation. Moreover, coherent DPD with an RLA further improves the resolution and increases the maximum number of emitters that can be localized compared with noncoherent DPD with an RLA.
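For reference, a single-interval MUSIC spatial spectrum for a uniform line array is sketched below; the DPD step of fusing spectra from multiple interception intervals over candidate emitter positions is not reproduced.

```python
import numpy as np

def music_spectrum(X, n_src, d_over_lambda=0.5, angles_deg=np.linspace(-90, 90, 721)):
    """MUSIC pseudospectrum for one interception interval of a line array
    (X: sensors x snapshots); peaks indicate directions of arrival."""
    n, k = X.shape
    R = X @ X.conj().T / k
    _, V = np.linalg.eigh(R)                 # eigenvectors, ascending eigenvalues
    En = V[:, :n - n_src]                    # noise subspace
    p = []
    for th in np.radians(angles_deg):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(n) * np.sin(th))
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles_deg, np.array(p)
```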
High-resolution approaches such as multiple signal classification and estimation of signal parameters via rotational invariance techniques (ESPRIT) are widely employed in multibeam echo-sounder (MBES) systems for seafloor bathymetry, where a uniform line array is normally required. However, because of coverage/resolution requirements and installation space constraints, an MBES system usually employs a receiving array with a special shape, which means that high-resolution algorithms cannot be applied directly. In addition, the short-term stationary echo signals make it difficult to estimate the covariance matrix required by high-resolution approaches, which further increases the complexity of applying them in MBES systems. ESPRIT with multiple-angle subarray beamforming is employed to reduce the requirements on signal-to-noise ratio, number of snapshots, and computational effort. Simulations show that the new processing method provides better fine-structure resolution. A high-resolution bottom detection (HRBD) algorithm is then developed by combining the new processing method with virtual array transformation, and the application of the HRBD algorithm to a U-shaped array is also discussed. Computer simulations and experimental data processing results verify the effectiveness of the proposed algorithm.
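A minimal least-squares ESPRIT estimate for a uniform line array is sketched below; the MBES-specific steps (multiple-angle subarray beamforming and the virtual array transformation for the special-shape array) are omitted.

```python
import numpy as np

def esprit_doa(X, d_over_lambda=0.5, n_src=2):
    """Basic LS-ESPRIT direction finding for a uniform line array
    (X: sensors x snapshots); returns angle estimates in degrees."""
    n, k = X.shape
    R = X @ X.conj().T / k
    w, V = np.linalg.eigh(R)                 # ascending eigenvalues
    Es = V[:, -n_src:]                       # signal subspace
    Phi = np.linalg.pinv(Es[:-1]) @ Es[1:]   # rotational invariance between sub-arrays
    eigs = np.linalg.eigvals(Phi)
    return np.degrees(np.arcsin(np.angle(eigs) / (2 * np.pi * d_over_lambda)))
```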
A recurring problem is to localize a number of acoustic sources, separate their individual signals, and estimate their strengths in a propagation medium. An acoustic receiving array combined with signal processing algorithms is used for this purpose. The most widely used algorithm is conventional beamforming, but it has very low resolution and high sidelobes that may cause signal leakage. Several new signal processors for sensor arrays are derived to evaluate the strengths of acoustic signals arriving at the array. In particular, we present the covariance vector estimator and the pseudoinverse of the array manifold matrix estimator. The covariance vector estimator uses only the correlations between sensors, and the pseudoinverse of the array manifold matrix estimator operates with the minimum eigenvalues of the covariance matrix. Numerical and experimental results are presented.
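The conventional (Bartlett) beamformer mentioned above can be written in a few lines; the sketch below computes its spatial power spectrum for a uniform line array and serves only as the low-resolution baseline, not as the proposed estimators.

```python
import numpy as np

def conventional_beamformer(X, d_over_lambda=0.5, angles_deg=np.linspace(-90, 90, 361)):
    """Bartlett beamforming power spectrum for a uniform line array
    (X: sensors x snapshots); broad peaks and high sidelobes are expected."""
    n, k = X.shape
    R = X @ X.conj().T / k
    powers = []
    for th in np.radians(angles_deg):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(n) * np.sin(th)) / np.sqrt(n)
        powers.append(np.real(a.conj() @ R @ a))
    return angles_deg, np.array(powers)
```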
Because synthetic aperture radar (SAR) satellites can detect only a limited set of scenes, the full-track utilization rate is not high, and the computing and storage limitations of a single satellite make it difficult to process the large volumes of spaceborne SAR data. A new networked satellite data processing method is proposed to improve the efficiency of data processing. A multi-satellite distributed SAR real-time processing method based on the Chirp Scaling (CS) imaging algorithm is studied in this paper, and a distributed data processing system is built with field programmable gate array (FPGA) chips as its kernel. Unlike traditional CS processing, the system divides data processing into three stages, and the computing tasks in each stage are allocated to different data processing units (i.e., satellites). The method effectively saves the computing and storage resources of the satellites, improves the utilization rate of a single satellite, and shortens the data processing time. Gaofen-3 (GF-3) SAR raw data are processed by the system, verifying the performance of the method.
Direct ink writing (DIW) holds enormous potential for fabricating multiscale and multifunctional architectures by virtue of its wide range of printable materials, simple operation, and ease of rapid prototyping. Although it is well known that ink rheology and processing parameters directly affect the resolution and shape of printed objects, the underlying mechanisms by which these key factors govern the printability and quality of DIW remain poorly understood. To tackle this issue, we systematically analyzed printability and quality through extrusion mechanism modeling and experimental validation. Hybrid non-Newtonian fluid inks were first prepared and their rheological properties measured. Finite element analysis of the whole DIW process was then conducted to reveal the flow dynamics of these inks. The optimal process parameters obtained (ink rheology, applied pressure, printing speed, etc.) were validated by experiments in which high-resolution (<100 μm) patterns were fabricated rapidly (>70 mm s^(-1)). Finally, as a process demonstration, we printed a series of microstructures and circuit systems with hybrid inks and silver inks, showing the suitability of the identified process parameters. This study provides a strong quantitative illustration of the use of DIW for the high-speed preparation of high-resolution, high-precision samples.
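The paper's ink rheology model is not reproduced here; as an illustrative assumption, the sketch below evaluates the apparent viscosity of a yield-stress, shear-thinning ink with a Herschel-Bulkley law, a model commonly used for DIW inks. The parameter values are placeholders, not the paper's fits.

```python
import numpy as np

def herschel_bulkley_viscosity(shear_rate, tau_y=20.0, K=8.0, n=0.45, eta_max=1e4):
    """Apparent viscosity (Pa*s) versus shear rate (1/s) for a yield-stress,
    shear-thinning ink: eta = tau_y/rate + K*rate**(n-1), capped at eta_max."""
    rate = np.maximum(np.asarray(shear_rate, dtype=float), 1e-9)  # guard against zero
    eta = tau_y / rate + K * rate ** (n - 1.0)
    return np.minimum(eta, eta_max)
```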
The commercial high-resolution imaging satellite IKONOS, with 1 m spatial resolution, is an important data source for urban planning and geographical information system (GIS) applications. In this paper, a morphological method is proposed that combines automatic thresholding and morphological operations to extract road centerlines in urban environments, addressing interference from vehicles, vegetation, buildings, and similar objects. Based on this morphological method, an object extractor is designed to extract road networks from high-resolution remote sensing images. Filters such as line reconstruction and region filling are applied to connect disconnected road segments and remove small redundant objects. Finally, a thinning algorithm is used to extract the road centerline. Experiments conducted on high-resolution IKONOS and QuickBird images show the efficiency of the proposed method.
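A minimal sketch of the described pipeline using scikit-image follows, assuming roads appear brighter than their surroundings (invert the image otherwise); the structuring-element sizes and small-object threshold are illustrative and would need tuning for IKONOS/QuickBird scenes.

```python
from skimage.filters import threshold_otsu
from skimage.morphology import (binary_opening, binary_closing, disk,
                                remove_small_objects, skeletonize)

def road_centerlines(gray_image, min_size=500):
    """Automatic thresholding, morphological cleanup, small-object removal,
    and thinning to centerlines."""
    binary = gray_image > threshold_otsu(gray_image)                     # automatic thresholding
    cleaned = binary_closing(binary_opening(binary, disk(2)), disk(5))   # suppress noise, bridge gaps
    cleaned = remove_small_objects(cleaned, min_size=min_size)           # drop cars, small patches
    return skeletonize(cleaned)                                          # thinning -> centerline
```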
The structure of a microlens array (MLA) can be formed on copper by an indentation process, a new manufacturing approach applied here in place of the traditional method of testing the material property, thereby saving working time. Single indentation and multi-indentation are both conducted to generate a single dimple and a dimple array, namely a single microlens and an MLA. Using finite element simulation, the factors affecting form accuracy, such as springback in the compressed area of a single dimple and compressive deformation in the adjacent areas of dimple arrays, are determined, and the results are verified by experiments under the same conditions. An indenter compensation method is proposed to improve the form accuracy of a single dimple, and the relationship between pitch and compressive deformation is investigated by modeling seven sets of multi-indentations at different pitches to identify the critical pitch for MLA indentation processing. Loads and cross-sectional profiles are measured and analyzed to reveal the compressive deformation mechanism. Finally, it is found that an MLA with a pitch greater than 1.47 times its lens diameter can be manufactured precisely by indentation using a compensated indenter.
The photonic neural processing unit (PNPU) demonstrates ultrahigh inference speed with low energy consumption and has become a promising hardware artificial intelligence (AI) accelerator. However, the nonidealities of the photonic device and the peripheral circuits make practical application much more complex. Rather than optimizing the photonic device, the architecture, and the algorithm individually, a joint device-architecture-algorithm co-design method is proposed to improve the accuracy, efficiency, and robustness of the PNPU. First, a full-flow simulator for the PNPU is developed, from the back-end simulator to the high-level training framework. Second, the full system architecture and the complete photonic chip design enable the simulator to closely model the real system. Third, the nonidealities of the photonic chip are evaluated for the PNPU design. The average test accuracy exceeds 98%, and the computing power exceeds 100 TOPS.
A fast separable approach based on a cross array is presented, which has coarse-grained parallelism. Its computational load is far less than that of the two-dimensional (2-D) direct processing method and other existing separable approaches. To compensate for the performance degradation due to separable processing, two postprocessing schemes are also proposed. Computer simulation results are provided for illustration.
This paper proposes an Application Specific Signal Processor (ASSP)-based implementation of a real-time signal processing system in both the spatial and time domains for a phased-array radar. It also proposes the system-on-silicon hardware design of several ASSPs, including the adaptive beamformer, an FFT application-specific integrated circuit, the clutter map former and updater, the moving target extractor, and the video integrator. The resulting processing system is compact, efficient, and robust.
Near-seabed multichannel seismic exploration systems have achieved remarkable successes in marine geological disaster assessment, marine gas hydrate investigation, and deep-sea mineral exploration owing to their high vertical and horizontal resolution. However, the quality of deep-towed seismic imaging hinges on accurate source-receiver positioning information. In view of existing technical problems, we propose a novel array geometry inversion method tailored to high-resolution deep-towed multichannel seismic exploration systems. The method is independent of the attitude and depth sensors along a deep-towed seismic streamer and accounts for variations in seawater velocity and seabed slope angle. Our approach decomposes the towed line array into multiple line segments and characterizes its geometric shape by the line-segment length and pitch angle. Introducing seawater velocity and seabed slope angle as optimization parameters, we establish a model-based objective function whose results align with objective reality. Employing the particle swarm optimization algorithm enables synchronous inversion of the array geometry and the seawater velocity. Validation with theoretical models and field data verifies that the approach effectively improves the accuracy of source and receiver positioning. The algorithm exhibits robust stability and reliability, coping with uncertainties in seismic traveltime picking and complex seabed topography.
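Particle swarm optimization itself is generic; the sketch below is a plain PSO minimizer into which a traveltime-misfit objective over segment pitch angles, seawater velocity, and seabed slope would be plugged. The paper's objective function is not reproduced here.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=40, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimizer; 'objective' maps a parameter vector
    (e.g. segment pitch angles, seawater velocity, seabed slope) to a misfit."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```

A call such as pso_minimize(misfit, bounds) with bounds listing, for example, pitch-angle limits per segment plus a seawater-velocity interval, would return the jointly optimized geometry parameters and velocity under this assumed setup.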
A systolic array architecture computer (FXCQ) has been designed for signal processing. It can handle floating-point data at very high speed. It is composed of 16 processing cells and a cache, connected linearly to form a ring structure. All processing cells are identical and programmable, and each has a peak performance of 20 million floating-point operations per second (20 MFLOPS), giving the machine a peak performance of 320 MFLOPS. It is integrated as an attached processor into a host system through a VME bus interface. Programs for the FXCQ are written in a high-level language, B language, which is supported by a parallel optimizing compiler. This paper describes the architecture of the FXCQ, the B language, and its compiler.
The Integrated-Optics Acousto-Optic RF Spectrum Analyzer (IOAOSA) consists of a laser diode, an acousto-optic (A-O) modulator, geodesic lenses, and a CCD detector array. The optical signal projected onto the CCD array is converted into an electrical signal and processed by the signal processing center, which consists of a TMS 32010 system and an IBM-PC. The very high-speed TMS 32010 is used in a microcomputer system, and a cyclic sampling method is adopted to collect the CCD video signal data, sampling one point per 40 points. After processing, the frequency bandwidth, resolution, and dynamic range of the system are measured to be 100 MHz, 8 MHz, and 20 dB, respectively.
Fractional-order algorithms have shown promising results in various signal processing applications due to their ability to improve performance without significantly increasing complexity. The goal of this work is to investigate the use of a fractional-order algorithm in adaptive beamforming, with a focus on improving performance while keeping complexity low; the effectiveness of the algorithm is studied and evaluated in this context. A fractional-order least mean square (FLMS) algorithm is proposed for adaptive beamforming in wireless applications for effective utilization of resources. The algorithm aims to improve upon existing beamforming algorithms that perform inefficiently, offering faster convergence and better accuracy at comparable computational complexity. The FLMS algorithm uses a fractional-order gradient in addition to the standard gradient in the weight adaptation. The derivation of the algorithm is provided and supported by mathematical convergence analysis. Performance is evaluated through simulations using mean square error (MSE) minimization as the metric and compared with the standard LMS algorithm for various parameters. The results, obtained through Matlab simulations, show that the FLMS algorithm outperforms standard LMS in convergence speed, beampattern accuracy, and scatter plots, with a 34% improvement in convergence speed. It is concluded that FLMS is a better candidate for adaptive beamforming and other signal processing applications.
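A common form of the FLMS weight update adds a fractional-gradient term of order nu to the standard LMS term; the sketch below shows one such complex-valued update and may differ in detail from the paper's derivation.

```python
import numpy as np
from math import gamma

def flms_update(w, x, d, mu=0.01, mu_f=0.01, nu=0.5):
    """One weight update of a fractional-order LMS beamformer.
    w: complex weight vector, x: array snapshot, d: desired/reference sample."""
    y = np.vdot(w, x)                          # array output w^H x
    e = d - y                                  # error sample
    frac = np.abs(w) ** (1.0 - nu) / gamma(2.0 - nu)   # fractional-gradient factor
    w_new = w + mu * np.conj(e) * x + mu_f * np.conj(e) * x * frac
    return w_new, e
```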
Unstructured and irregular graph data cause strongly random data accesses with poor locality in graph processing. This paper optimizes the depth-branch-resorting algorithm (DBR) and proposes a branch-alternation-resorting algorithm (BAR). To run the algorithm in parallel and improve its efficiency, BAR is mapped onto the reconfigurable array processor (APR-16) to perform vertex reordering, effectively improving the locality of graph data. The BAR algorithm is validated on the GraphBIG framework by running breadth-first search (BFS), single-source shortest path (SSSP), and betweenness centrality (BC) traversals on datasets reordered with BAR. The results show that, compared with the DBR and Corder algorithms, BAR reduces execution time by up to 33.00% and 51.00%, respectively. In terms of data movement, BAR achieves a maximum reduction of 39.00% compared with DBR and 29.66% compared with Corder. In terms of computational complexity, BAR achieves a maximum reduction of 32.56% compared with DBR and 53.05% compared with Corder.
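The BAR reordering itself is not reproduced here; as a simple stand-in that conveys the idea, the sketch below relabels vertices by descending degree so that frequently touched vertices receive contiguous IDs, then runs BFS on the reordered graph.

```python
from collections import deque

def reorder_by_degree(adj):
    """Relabel vertices so high-degree vertices get contiguous low IDs
    (a simple stand-in for BAR). adj: dict vertex -> list of neighbors."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    remap = {v: i for i, v in enumerate(order)}
    return {remap[v]: sorted(remap[u] for u in nbrs) for v, nbrs in adj.items()}

def bfs(adj, src=0):
    """Breadth-first search over the (reordered) graph, returning hop distances."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist
```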