Abstract: To detect bull's-eye anomalies in low-frequency seismic inversion models, this study proposed an advanced method using an optimized You Only Look Once version 7 (YOLOv7) model. The model is enhanced by integrating advanced modules, including the bidirectional feature pyramid network (BiFPN), weighted intersection-over-union (Wise-IoU), efficient channel attention (ECA), and atrous spatial pyramid pooling (ASPP). BiFPN facilitates robust feature extraction by enabling bidirectional information flow across network scales, which enhances the ability of the model to capture complex patterns in seismic inversion models. Wise-IoU improves the precision and fineness of reservoir feature localization through its weighted approach to IoU. Meanwhile, ECA optimizes interactions between channels, which promotes effective information exchange and enhances the overall response of the model to subtle inversion details. Lastly, the ASPP module addresses spatial dependencies at multiple scales, which further strengthens the ability of the model to identify complex reservoir structures. By integrating these modules, the proposed model not only demonstrates superior performance in detecting bull's-eye anomalies but also marks a pioneering step in applying deep learning to improve the accuracy and reliability of seismic reservoir prediction in oil and gas exploration. The results meet scientific literature standards and offer new methodological perspectives, contributing to ongoing efforts to build accurate and efficient prediction models for oil and gas exploration.
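To illustrate the channel-interaction idea behind the ECA module mentioned above, the following minimal NumPy sketch (not the paper's implementation; the fixed kernel weights and default kernel size are hypothetical) gates each channel via global average pooling, a 1D cross-channel convolution, and a sigmoid:

```python
import numpy as np

def eca_attention(feat, kernel_size=3):
    """Sketch of efficient channel attention (ECA) for a (C, H, W) feature map."""
    c = feat.shape[0]
    # Global average pooling over spatial dims -> per-channel descriptor (C,)
    desc = feat.mean(axis=(1, 2))
    # 1D convolution across neighboring channels (local cross-channel interaction)
    pad = kernel_size // 2
    padded = np.pad(desc, pad, mode="edge")
    weights = np.ones(kernel_size) / kernel_size  # illustrative fixed weights
    conv = np.array([np.dot(padded[i:i + kernel_size], weights) for i in range(c)])
    # Sigmoid gating, then rescale each channel of the input feature map
    gate = 1.0 / (1.0 + np.exp(-conv))
    return feat * gate[:, None, None]
```

In a trained network the 1D kernel weights would be learned and the kernel size chosen adaptively from the channel count; here they are fixed purely to show the data flow.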
Funding: Supported by the National Key Research and Development Program of China under Grants 2020YFB2104400 and 2020YFB2104401, the National Natural Science Foundation of China under Grant 82260362, and the Hainan Major Science and Technology Program of China under Grant ZDKJ202017.
Abstract: A remote sensing image (RSI) with concurrently high spatial, temporal, and spectral resolutions cannot be produced by a single sensor. Multisource RSI fusion is a convenient technique to obtain high-spatial-resolution multispectral (MS) images (spatial-spectral fusion, i.e., SSF) and high-temporal- and high-spatial-resolution MS images (spatiotemporal fusion, i.e., STF). Currently, deep learning-based fusion models can implement only SSF or STF; models that perform both SSF and STF are lacking. A multiresolution generative adversarial network with bidirectional adaptive-stage progressive guided fusion (BAPGF) for RSI, named BPF-MGAN, is proposed to implement both SSF and STF. A bidirectional adaptive-stage feature extraction architecture operating in fine-scale-to-coarse-scale and coarse-scale-to-fine-scale modes is introduced. The designed BAPGF adopts a cross-stage-level dual-residual attention fusion strategy guided by the previous fusion result to enhance critical information and suppress superfluous information. Adaptive-resolution U-shaped discriminators are implemented to feed multiresolution context into the generator. A generalized multitask loss function that is not limited by the absence of reference images is developed to strengthen the model via constraints on multiscale feature, structural, and content similarities. The BPF-MGAN model is validated on both SSF and STF datasets. Compared with state-of-the-art SSF and STF models, results demonstrate the superior performance of the proposed BPF-MGAN model in both subjective and objective evaluations.
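The generalized multitask loss described above combines feature-, structure-, and content-similarity constraints. The toy NumPy sketch below (the individual term definitions and weights are illustrative stand-ins, not the paper's actual loss) makes that weighted composition concrete for 2-D arrays with even dimensions:

```python
import numpy as np

def multitask_fusion_loss(pred, ref, lambdas=(1.0, 1.0, 1.0)):
    """Toy multitask loss: weighted sum of multiscale-feature,
    structural, and content terms (all definitions hypothetical)."""
    # Content term: mean absolute error on raw pixels
    l_content = np.mean(np.abs(pred - ref))
    # Structural term: mismatch of horizontal gradients (crude SSIM stand-in)
    l_struct = np.mean((np.diff(pred, axis=-1) - np.diff(ref, axis=-1)) ** 2)
    # Multiscale-feature term: compare 2x average-pooled versions
    def pool2(a):
        h, w = a.shape
        return a.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    l_feat = np.mean(np.abs(pool2(pred) - pool2(ref)))
    w_feat, w_struct, w_content = lambdas
    return w_feat * l_feat + w_struct * l_struct + w_content * l_content
```

In the actual model the feature term would be computed on learned multiscale network features and the structural term on a proper similarity index; this sketch only shows how the three constraints combine into one scalar objective.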