Funding: Supported by the Xiamen Medical and Health Guidance Project in 2021 (No. 3502Z20214ZD1070) and by a grant from the Guangxi Key Laboratory of Machine Vision and Intelligent Control, China (No. 2023B02).
Abstract: The self-attention mechanism of Transformers, which captures long-range contextual information, has demonstrated significant potential in image segmentation. However, their ability to learn local contextual relationships between pixels requires further improvement. Previous methods face challenges in efficiently managing multi-scale features of different granularities from the encoder backbone, leaving room for improvement in their global representation and feature-extraction capabilities. To address these challenges, we propose a novel Decoder with Multi-Head Feature Receptors (DMHFR), which receives multi-scale features from the encoder backbone and organizes them into three feature groups of different granularities: coarse, fine-grained, and the full set. These groups are subsequently processed by Multi-Head Feature Receptors (MHFRs) after feature-capture and modeling operations. The MHFRs comprise two Three-Head Feature Receptors (THFRs) and one Four-Head Feature Receptor (FHFR). Each feature group is passed through these MHFRs and then fed into axial transformers, which help the model capture long-range dependencies within the features. The three MHFRs produce three distinct feature outputs. The output of the FHFR serves as auxiliary features in the prediction head, and the prediction outputs and their losses are eventually aggregated. Experimental results show that the Transformer using DMHFR outperforms 15 state-of-the-art (SOTA) methods on five public datasets. Specifically, it achieves significant improvements in mean Dice scores over the classic Parallel Reverse Attention Network (PraNet), with gains of 4.1%, 2.2%, 1.4%, 8.9%, and 16.3% on the CVC-ClinicDB, Kvasir-SEG, CVC-T, CVC-ColonDB, and ETIS-LaribPolypDB datasets, respectively.
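The mean Dice score used for these comparisons is a standard overlap measure between a predicted segmentation mask and the ground truth. The following is a minimal sketch of the metric, not the authors' evaluation code; the function name and toy masks are illustrative.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |pred & target| / (|pred| + |target|), with eps for empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping 4x4 masks.
pred = np.zeros((4, 4), dtype=np.uint8)
target = np.zeros((4, 4), dtype=np.uint8)
pred[:2, :] = 1      # predicted region: top two rows (8 pixels)
target[1:3, :] = 1   # ground-truth region: middle two rows (8 pixels)
print(round(dice_score(pred, target), 3))  # 4 shared pixels -> 2*4/(8+8) = 0.5
```

A per-image score like this is averaged over a dataset to obtain the mean Dice values reported above.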
Abstract: The development of large language models (LLMs) has created transformative opportunities for the financial industry, especially in financial trading. However, integrating LLMs with trading systems remains a challenge. To address this problem, we propose an intelligent trade-order recognition pipeline that converts natural-language trade orders into a standard format for trade execution. The system improves the ability of human traders to interact with trading platforms while addressing the problem of misinformation acquisition in trade execution. In addition, we create a trade-order dataset of 500 samples to simulate real-world trading scenarios. Moreover, we design several metrics to provide a comprehensive assessment of dataset reliability and of the generative power of large models in finance, evaluating five state-of-the-art LLMs on our dataset. The results show that most models generate syntactically valid JavaScript Object Notation (JSON) at high rates (about 80%–99%) and initiate clarifying questions in nearly all incomplete cases (about 90%–100%). However, end-to-end accuracy remains low (about 6%–14%), and missing information is substantial (about 12%–66%). Models also tend to over-interrogate (roughly 70%–80% of follow-up questions are unnecessary), raising interaction costs and potential information-exposure risk. The research also demonstrates the feasibility of integrating our pipeline with real-world trading systems, paving the way for practical deployment of LLM-based trade-automation solutions.
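The JSON-validity and missing-information metrics described above can be checked mechanically. The sketch below shows one way such a check might look; the paper's actual schema is not specified, so the required field names are hypothetical.

```python
import json

# Hypothetical required fields for a standardized trade order; the paper's
# actual schema is not given, so these names are illustrative only.
REQUIRED_FIELDS = {"symbol", "side", "quantity", "order_type"}

def check_order(raw: str):
    """Classify an LLM response: (is_valid_json, missing_fields).
    Invalid JSON counts against the syntactic-validity metric; missing
    fields should trigger a clarifying question to the trader."""
    try:
        order = json.loads(raw)
    except json.JSONDecodeError:
        return False, set(REQUIRED_FIELDS)     # syntactically invalid JSON
    if not isinstance(order, dict):
        return False, set(REQUIRED_FIELDS)
    missing = REQUIRED_FIELDS - order.keys()   # fields the model failed to fill
    return True, missing

ok, missing = check_order('{"symbol": "AAPL", "side": "buy", "quantity": 100}')
print(ok, sorted(missing))  # True ['order_type'] -> ask a clarifying question
```

Aggregating these per-response checks over a dataset yields rates analogous to the JSON-validity and missing-information figures reported above.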
Funding: Engineering and Physical Sciences Research Council (project EP/T001011/1); Shenzhen-Hong Kong Cooperation Zone for Technology and Innovation (HZQB-KCZYB-2020050); Hong Kong Research Grants Council (R7035-21); Army Research Office (W911NF-23-1-0077); Multidisciplinary University Research Initiative (W911NF-21-1-0325); Air Force Office of Scientific Research (FA9550-19-1-0399, FA9550-21-1-0209); National Science Foundation (OMA-1936118, ERC-1941583, OMA-2137642); NTT Research; David and Lucile Packard Foundation (2020-71479); Marshall and Arlene Bennett Family Research Program.
Abstract: Continuous-variable quantum key distribution (CV QKD) using optical coherent detectors is practically favorable owing to its low implementation cost, flexibility for wavelength-division multiplexing, and compatibility with standard coherent communication technologies. However, the security analysis and parameter estimation of CV QKD are complicated by the infinite-dimensional latent Hilbert space. In addition, the transmission of strong reference pulses undermines security and complicates experiments. In this work, we tackle both problems by presenting a time-bin-encoding CV protocol with a simple phase-error-based security analysis that is valid under general coherent attacks. With the key encoded in the relative intensity between two optical modes, the need for a global phase reference is removed. Furthermore, phase randomization can be introduced to decouple the security analysis of different photon-number components. We can hence tag the photon number of each round, effectively estimate the associated privacy using a carefully designed coherent-detection method, and independently extract encryption keys from each component. Simulations show that the protocol using multi-photon components increases the key rate by two orders of magnitude compared with the one using only the single-photon component. Meanwhile, the protocol with a four-intensity decoy analysis is sufficient to yield tight parameter estimation, with short-distance key-rate performance comparable to that of the best Bennett-Brassard 1984 (BB84) implementations.
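The photon-number tagging enabled by phase randomization rests on a standard fact: a phase-randomized coherent state of mean photon number mu is a Poissonian mixture of photon-number states, so each intensity setting probes the components with known weights. The sketch below illustrates only this statistical backbone; the intensity values are assumptions for illustration, not the paper's optimized decoy parameters.

```python
from math import exp, factorial

def photon_number_dist(mu: float, n_max: int = 10) -> list[float]:
    """P(n) for a phase-randomized coherent state of mean photon number mu:
    a Poisson distribution, which is what lets each photon-number component
    be tagged and analyzed independently in the decoy method."""
    return [exp(-mu) * mu**n / factorial(n) for n in range(n_max + 1)]

# Illustrative four-intensity decoy setting (vacuum + three nonzero intensities);
# the values are assumptions, not taken from the paper.
for mu in (0.0, 0.1, 0.2, 0.5):
    p = photon_number_dist(mu)
    print(f"mu={mu}: P(0)={p[0]:.3f}, P(1)={p[1]:.3f}, P(n>=2)={1 - p[0] - p[1]:.3f}")
```

In a decoy analysis, the observed detection statistics at these known intensities are combined via the Poisson weights to bound the yields and error rates of the individual photon-number components.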