The Low Earth Orbit (LEO) remote sensing satellite mega-constellation comprises a large number of satellites of various types, which gives it a unique advantage in executing multiple concurrent tasks. However, the large number of tasks and satellites greatly increases the complexity of resource allocation. The primary problem in implementing concurrent multiple tasks via a LEO mega-constellation is therefore to pre-process the tasks and observation resources. To address this challenge, we propose a pre-processing algorithm for the mega-constellation based on highly Dynamic Spatio-Temporal Grids (DSTG). First, this paper describes the management model of the mega-constellation and the multiple tasks. Then, the coding method of DSTG is proposed, based on which the description of complex mega-constellation observation resources is realized. Third, the DSTG algorithm is used to process concurrent multiple tasks at multiple levels, such as task spatial attributes, temporal attributes, and grid task importance evaluation. Finally, simulation results for a constellation case are given to verify the effectiveness of concurrent multi-task pre-processing based on DSTG, including the autonomous decomposition and fusion of tasks with their mapping to grids, and the convenient time-window indexing process.
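The abstract does not specify the DSTG coding scheme, so the following is only a minimal sketch of the general idea: discretize (latitude, longitude, time) into grid cells, decompose each task's time window into the cells it covers, fuse overlapping tasks by accumulating importance, and index time windows per spatial cell. The resolutions, the `Task` fields, and all function names are assumptions for illustration, not the paper's design.

```python
from dataclasses import dataclass

# Assumed grid resolutions; the actual DSTG coding parameters are not
# given in the abstract.
LAT_RES_DEG = 1.0   # spatial cell size, degrees of latitude
LON_RES_DEG = 1.0   # spatial cell size, degrees of longitude
TIME_RES_S = 600.0  # temporal cell size, seconds

Cell = tuple[int, int, int]

def grid_key(lat_deg: float, lon_deg: float, t_s: float) -> Cell:
    """Encode a space-time point as a discrete (lat, lon, time) cell index."""
    i = int((lat_deg + 90.0) // LAT_RES_DEG)
    j = int((lon_deg + 180.0) // LON_RES_DEG)
    k = int(t_s // TIME_RES_S)
    return (i, j, k)

@dataclass
class Task:
    task_id: str
    lat: float       # target latitude, degrees
    lon: float       # target longitude, degrees
    start_s: float   # observation window start, seconds
    end_s: float     # observation window end, seconds
    priority: float  # task importance weight

def decompose_task(task: Task) -> dict[Cell, float]:
    """Decompose a task into the grid cells its time window covers,
    assigning each cell the task's importance."""
    i, j, k0 = grid_key(task.lat, task.lon, task.start_s)
    k1 = int(task.end_s // TIME_RES_S)
    return {(i, j, k): task.priority for k in range(k0, k1 + 1)}

def fuse_tasks(tasks: list[Task]) -> dict[Cell, float]:
    """Fuse concurrent tasks on the grid: tasks requesting the same cell
    are merged, and their importance accumulates per cell."""
    grid: dict[Cell, float] = {}
    for t in tasks:
        for cell, score in decompose_task(t).items():
            grid[cell] = grid.get(cell, 0.0) + score
    return grid

def time_windows(grid: dict[Cell, float], i: int, j: int) -> list[int]:
    """Index the requested time slots for one spatial cell."""
    return sorted(k for (gi, gj, k) in grid if (gi, gj) == (i, j))
```

Ranking the fused grid by accumulated importance would then give a per-cell evaluation of the kind the abstract calls grid task importance evaluation.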
Multimodal Sentiment Analysis (MSA) seeks to predict a speaker's sentiment orientation by comprehensively utilizing modalities such as text, vision, and audio. As deep learning and cross-modal fusion technologies evolve, key challenges include alleviating heterogeneity across modality feature spaces, avoiding the bias of fixed main-modal fusion strategies, and enhancing model adaptability to dynamic changes in modality contributions across samples. To address these issues, this paper proposes a multimodal sentiment analysis framework based on adaptive modality selection and contrastive learning alignment, named the Adaptive Modality Selection and Guided Fusion Network (AMSGFN). The framework first employs a cross-modal contrastive learning alignment mechanism to map text, vision, and audio features into a shared semantic space, mitigating semantic discrepancies among heterogeneous modalities. A lightweight modality scoring module then evaluates the discriminability and reliability of each modality for the current sample, adaptively identifying the dominant modality. Building on this, a dominant-modality-guided fusion mechanism selectively integrates supplementary information from the auxiliary modalities around the dominant one, highlighting key emotional semantics while suppressing noise and redundancy. Experimental results on multiple public datasets demonstrate that the proposed method outperforms existing approaches, confirming the effectiveness and robustness of the framework for multimodal sentiment analysis.
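A minimal PyTorch sketch of the pipeline the abstract describes, under stated assumptions: per-modality features are already extracted and projected to a common dimension; the InfoNCE-style alignment loss, the scorer design, and the cross-attention fusion are illustrative guesses, not the paper's exact AMSGFN architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def contrastive_align(a: torch.Tensor, b: torch.Tensor,
                      tau: float = 0.07) -> torch.Tensor:
    """One plausible form of the cross-modal alignment objective: an
    InfoNCE-style loss pulling paired samples of two modalities together
    in the shared space."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / tau
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

class ModalityScorer(nn.Module):
    """Lightweight per-sample scoring of each modality's reliability."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim // 2), nn.ReLU(),
                                 nn.Linear(dim // 2, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_modalities, dim) -> weights: (batch, n_modalities)
        return F.softmax(self.net(feats).squeeze(-1), dim=-1)

class GuidedFusion(nn.Module):
    """Dominant-modality-guided fusion: the highest-scoring modality acts
    as the attention query over all modalities, then feeds a sentiment
    head. `dim` must be divisible by the number of attention heads."""
    def __init__(self, dim: int, n_classes: int):
        super().__init__()
        self.scorer = ModalityScorer(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, 3, dim) holding aligned text/vision/audio features
        weights = self.scorer(feats)                   # (B, 3)
        dom = weights.argmax(dim=-1)                   # dominant modality index
        idx = dom.view(-1, 1, 1).expand(-1, 1, feats.size(-1))
        query = feats.gather(1, idx)                   # (B, 1, dim)
        fused, _ = self.attn(query, feats, feats)      # guide attends to all
        return self.head(fused.squeeze(1))             # (B, n_classes)
```

In this sketch the hard argmax selects the guide while the soft scores remain available for training signals; whether AMSGFN uses hard selection, soft weighting, or both is not stated in the abstract.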
Funding: supported by the National Natural Science Foundation of China (Nos. 62003115 and 11972130), the Shenzhen Science and Technology Program, China (JCYJ20220818102207015), and the Heilongjiang Touyan Team Program, China.