Funding: Funded by United Arab Emirates University (UAEU) under the UAEU-AUA grant number G00004577 (12N145), with the corresponding grant at Universiti Malaya (UM) under grant number IF019-2024.
Abstract: Concrete-filled steel tubes (CFST) are widely utilized in civil engineering due to their superior load-bearing capacity, ductility, and seismic resistance. However, existing design codes, such as AISC and Eurocode 4, tend to be excessively conservative as they fail to account for the composite action between the steel tube and the concrete core. To address this limitation, this study proposes a hybrid model that integrates XGBoost with the Pied Kingfisher Optimizer (PKO), a nature-inspired algorithm, to enhance the accuracy of shear strength prediction for CFST columns. Additionally, quantile regression is employed to construct prediction intervals for the ultimate shear force, while the Asymmetric Squared Error Loss (ASEL) function is incorporated to mitigate overestimation errors. The computational results demonstrate that the PKO-XGBoost model delivers superior predictive accuracy, achieving a Mean Absolute Percentage Error (MAPE) of 4.431% and an R² of 0.9925 on the test set. Furthermore, the ASEL-PKO-XGBoost model substantially reduces overestimation errors to 28.26%, with negligible impact on predictive performance. Additionally, a strength equation model is developed based on the Genetic Algorithm (GA) and existing equation models, achieving markedly higher accuracy than existing models (R² = 0.934). Lastly, web-based Graphical User Interfaces (GUIs) were developed to enable real-time prediction.
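The asymmetric-loss idea above can be sketched as a custom XGBoost-style objective. This is an illustrative assumption, not the paper's exact formulation: residuals where the model over-predicts are weighted by a hypothetical factor `alpha`, and the closure returns the per-sample gradient and Hessian that XGBoost's custom-objective interface expects.

```python
import numpy as np

def asel_objective(alpha: float = 2.0):
    """Asymmetric squared error: residuals where the model over-predicts
    (pred > true) are weighted by `alpha` > 1, discouraging overestimation.
    `alpha` is an assumed hyperparameter, not a value from the paper."""
    def objective(y_pred: np.ndarray, y_true: np.ndarray):
        residual = y_pred - y_true
        weight = np.where(residual > 0, alpha, 1.0)  # penalize overestimates
        grad = 2.0 * weight * residual               # d/dpred of w * r^2
        hess = 2.0 * weight                          # second derivative
        return grad, hess
    return objective
```

A closure like this could be passed as the `obj` argument of `xgboost.train`; with `alpha = 1` it reduces to the ordinary squared error.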
Abstract: The original intention of the algorithmic recommender system is to grapple with the negative impacts caused by information overload, but the system can also be used as a "hypernudge", a new form of online manipulation, to intentionally exploit people's cognitive and decision-making gaps and influence their decisions in practice, which is particularly detrimental to the sustainable development of the digital market. Limiting harmful algorithmic online manipulation in digital markets has become a challenging task. Globally, both the EU and China have responded to this issue, and the differences between them are so evident that their governance measures can serve as typical cases. The EU focuses on improving citizens' digital literacy and their ability to integrate into digital social life so that they can independently address this issue, and expects to curb harmful manipulation behavior through binding and applicable hard law as part of its digital strategy. By comparison, although certain legal norms in China already make relevant stipulations on manipulation issues, China continues to issue specific departmental regulations to regulate algorithmic recommender services, and pays more attention to addressing the collective harm caused by algorithmic online manipulation through a multi-party co-governance approach in which supervision is led by the government or industry associations.
Abstract: The study conducts a bibliometric review of artificial intelligence applications in two areas: the entrepreneurial finance literature, and the corporate finance literature with implications for entrepreneurship. A rigorous search and screening of the Web of Science Core Collection identified 1,890 journal articles for analysis. The bibliometrics provide a detailed view of the knowledge field, indicating underdeveloped research directions. An important contribution comes from insights into artificial intelligence methods in entrepreneurship. The results demonstrate a high representation of artificial neural networks, deep neural networks, and support vector machines across almost all identified topic niches. In contrast, applications of topic modeling, fuzzy neural networks, and growing hierarchical self-organizing maps are rare. Additionally, we take a broader view by addressing the problem of applying artificial intelligence in economic science. Specifically, we present the foundational paradigm and a bespoke demonstration of the Monte Carlo randomized algorithm.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 12404531) and the Natural Science Foundation of the Higher Education Institutions of Jiangsu Province, China (Grant No. 23KJB140011).
Abstract: Efficient elastic wave focusing is crucial in materials and physical engineering. Elastic coding metasurfaces, which are innovative planar artificial structures, show great potential for use in the field of wave focusing. However, elastic coding lenses (ECLs) still suffer from low focusing performance, thickness comparable to the wavelength, and frequency sensitivity. Here, we consider both the structural and material properties of the coding unit, thus further compressing the thickness of the ECL. We chose the simplest ECL, which consists of only two coding units: coding unit 0 is a straight structure made of a carbon fiber reinforced composite material, and coding unit 1 is a zigzag structure made of aluminum; the thickness of the ECL constructed from them is only 1/8 of the wavelength. Building on the theoretical design, the arrangement of coding units is further optimized using genetic algorithms, which significantly improves the focusing performance of the lens at different focal positions and frequencies. This study provides a more effective way to control vibration and noise in advanced structures.
Funding: Supported by the Social Science Foundation of Liaoning Province (L23BJY022).
Abstract: We explored the effects of algorithmic opacity on employees' playing dumb and evasive hiding rather than rationalized hiding. We examined the mediating role of job insecurity and the moderating role of employee-AI collaboration. Participants were 421 full-time employees (female = 46.32%, junior employees = 31.83%) from a variety of organizations and industries that interact with AI. Employees provided data on algorithmic opacity, job insecurity, knowledge hiding, employee-AI collaboration, and control variables. The results of the structural equation modeling indicated that algorithmic opacity exacerbated employees' job insecurity, and job insecurity mediated between algorithmic opacity and playing dumb and evasive hiding rather than rationalized hiding. The relationship between algorithmic opacity and playing dumb and evasive hiding was more positive when the level of employee-AI collaboration was higher. These findings suggest that employee-AI collaboration reinforces the indirect relationship between algorithmic opacity and playing dumb and evasive hiding. Our study contributes to research on human-AI collaboration by exploring the dark side of employee-AI collaboration.
Funding: Support of the University of Warsaw under the 'New Ideas 3B' competition in POB III, implemented under the 'Excellence Initiative - Research University' Programme.
Abstract: This paper investigates the optimization of data sampling and target labeling techniques to enhance algorithmic trading strategies in cryptocurrency markets, focusing on Bitcoin (BTC) and Ethereum (ETH). Traditional data sampling methods, such as time bars, often fail to capture the nuances of the continuously active and highly volatile cryptocurrency market and force traders to wait for arbitrary points in time. To address this, we propose an alternative approach using information-driven sampling methods, including the CUSUM filter, range bars, volume bars, and dollar bars, and evaluate their performance using tick-level data from January 2018 to June 2023. Additionally, we introduce the Triple Barrier method for target labeling, which offers a solution tailored for algorithmic trading as opposed to the widely used next-bar prediction. We empirically assess the effectiveness of these data sampling and labeling methods in crafting profitable trading strategies. The results demonstrate that the combination of CUSUM-filtered data with Triple Barrier labeling outperforms traditional time bars and next-bar prediction, achieving consistently positive trading performance even after accounting for transaction costs. Moreover, our system enables trading decisions at any point in time on the basis of market conditions, providing an advantage over traditional methods that rely on fixed time intervals. Furthermore, the paper contributes to the ongoing debate on the applicability of Transformer models to time series classification in the context of algorithmic trading by evaluating various Transformer architectures (including the vanilla Transformer encoder, FEDformer, and Autoformer) alongside other deep learning architectures and classical machine learning models, revealing insights into their relative performance.
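The symmetric CUSUM filter mentioned above (in López de Prado's standard formulation, which the paper may adapt) can be sketched in a few lines: an event fires whenever cumulative upward or downward price moves exceed a threshold, after which the running sums reset. Return-based deltas and the threshold value are implementation choices not specified in the abstract.

```python
import numpy as np

def cusum_filter(prices: np.ndarray, threshold: float) -> list:
    """Symmetric CUSUM filter: emit an event index whenever cumulative
    up- or down-moves in the price series exceed `threshold`, then reset
    the corresponding running sum. Events, not fixed clock ticks, define
    when a new bar (and later a Triple Barrier label) is created."""
    events, s_pos, s_neg = [], 0.0, 0.0
    for i, d in enumerate(np.diff(prices), start=1):
        s_pos = max(0.0, s_pos + d)   # cumulative upward drift
        s_neg = min(0.0, s_neg + d)   # cumulative downward drift
        if s_pos > threshold:
            events.append(i)
            s_pos = 0.0
        elif s_neg < -threshold:
            events.append(i)
            s_neg = 0.0
    return events
```

Each emitted index would then seed a Triple Barrier label: an upper profit-taking barrier, a lower stop-loss barrier, and a vertical time barrier, with the label determined by whichever is touched first.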
Abstract: With the rapid advancement of medical artificial intelligence (AI) technology, particularly the widespread adoption of AI diagnostic systems, ethical challenges in medical decision-making have garnered increasing attention. This paper analyzes the limitations of algorithmic ethics in medical decision-making and explores accountability mechanisms, aiming to provide theoretical support for ethically informed medical practices. The study highlights how the opacity of AI algorithms complicates the attribution of decision-making responsibility, undermines doctor-patient trust, and affects informed consent. By thoroughly investigating issues such as the algorithmic "black box" problem and data privacy protection, we develop accountability assessment models to address ethical concerns related to medical resource allocation. Furthermore, this research examines the effective implementation of AI diagnostic systems through case studies of both successful and unsuccessful applications, extracting lessons on accountability mechanisms and response strategies. Finally, we emphasize that establishing a transparent accountability framework is crucial for enhancing the ethical standards of medical AI systems and protecting patients' rights and interests.
Abstract: This paper presents a novel method for reconstructing a highly accurate 3D model of the human nose from 2D images and pre-marked landmarks using algorithmic methods. The study focuses on the reconstruction of a 3D nose model tailored for applications in healthcare and cosmetic surgery. The approach leverages advanced image processing techniques, 3D Morphable Models (3DMM), and deformation techniques to overcome the limitations of deep learning models, particularly addressing the interpretability issues commonly encountered in medical applications. The proposed method estimates the 3D coordinates of landmark points using a 3D structure estimation algorithm. Sub-landmarks are extracted through image processing techniques and interpolation. The initial surface is generated using a 3DMM, though its accuracy remains limited. To enhance precision, deformation techniques are applied, utilizing the coordinates of 76 identified landmarks and sub-landmarks. The resulting 3D nose model is thus constructed from algorithmic methods and pre-marked landmarks. The 3D model is evaluated by comparing landmark distances and shape similarity against expert-determined ground truth on 30 Vietnamese volunteers aged 18 to 47, all of whom were either preparing for or required nasal surgery. Experimental results demonstrate a strong agreement between the reconstructed 3D model and the ground truth. The method achieved a mean landmark distance error of 0.631 mm and a shape error of 1.738 mm, demonstrating its potential for medical applications.
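The mean landmark distance error reported above is, presumably, the average Euclidean distance between corresponding reconstructed and ground-truth landmarks; a minimal sketch under that assumption (the function name and averaging scheme are illustrative, not from the paper):

```python
import math

def mean_landmark_error(pred, truth):
    """Average 3D Euclidean distance between corresponding landmark pairs.
    `pred` and `truth` are equal-length sequences of (x, y, z) tuples,
    assumed to be expressed in the same units (e.g. millimetres)."""
    if len(pred) != len(truth):
        raise ValueError("landmark sets must have the same length")
    return sum(math.dist(p, t) for p, t in zip(pred, truth)) / len(pred)
```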
Funding: The Deanship of Scientific Research at King Khalid University funded this work through a large group grant under grant number GRP.2/663/46.
Abstract: Domain Generation Algorithms (DGAs) continue to pose a significant threat in modern malware infrastructures by enabling resilient and evasive communication with Command and Control (C&C) servers. Traditional detection methods, rooted in statistical heuristics, feature engineering, and shallow machine learning, struggle to adapt to the increasing sophistication, linguistic mimicry, and adversarial variability of DGA variants. The emergence of Large Language Models (LLMs) marks a transformative shift in this landscape. Leveraging deep contextual understanding, semantic generalization, and few-shot learning capabilities, LLMs such as BERT, GPT, and T5 have shown promising results in detecting both character-based and dictionary-based DGAs, including previously unseen (zero-day) variants. This paper provides a comprehensive and critical review of LLM-driven DGA detection, introducing a structured taxonomy of LLM architectures, evaluating the linguistic and behavioral properties of benchmark datasets, and comparing recent detection frameworks across accuracy, latency, robustness, and multilingual performance. We also highlight key limitations, including challenges in adversarial resilience, model interpretability, deployment scalability, and privacy risks. To address these gaps, we present a forward-looking research roadmap encompassing adversarial training, model compression, cross-lingual benchmarking, and real-time integration with SIEM/SOAR platforms. This survey aims to serve as a foundational resource for advancing the development of scalable, explainable, and operationally viable LLM-based DGA detection systems.
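As a concrete instance of the shallow statistical heuristics the survey contrasts with LLM-based detectors, character-level Shannon entropy of the domain label is a classic DGA feature: random character-based DGA domains tend toward high entropy, while dictionary-based DGAs deliberately defeat it, which motivates the semantic models surveyed here.

```python
import math
from collections import Counter

def domain_entropy(domain: str) -> float:
    """Shannon entropy (bits per character) of the leftmost domain label.
    A shallow lexical feature: algorithmically generated labels such as
    'xj4kq9z2vbnp' score higher than natural-language labels."""
    label = domain.split('.')[0].lower()
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A detector built on such features alone is exactly what fails against dictionary-based DGAs, since concatenated real words have entropy close to benign domains.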
Abstract: This study investigates how artificial intelligence (AI) algorithms enable mainstream media to achieve precise emotional matching and improve communication efficiency through reconstructed communication logic. As digital intelligence technology rapidly evolves, mainstream media organizations are increasingly leveraging AI-driven empathy algorithms to enhance audience engagement and optimize content delivery. This research employs a mixed-methods approach, combining quantitative analysis of algorithmic performance metrics with qualitative examination of media communication patterns. Through a systematic review of 150 academic papers and analysis of data from 12 major media platforms, this study reveals that algorithmic empathy systems can improve emotional resonance by 34.7% and increase audience engagement by 28.3% compared to traditional communication methods. The findings demonstrate that AI algorithms reconstruct media communication logic through three primary pathways: emotional pattern recognition, personalized content curation, and real-time sentiment adaptation. However, the study also identifies significant challenges, including algorithmic bias, emotional authenticity concerns, and the ethical implications of automated empathy. The research contributes to understanding how mainstream media can leverage AI technology to build high-quality empathetic communication while maintaining journalistic integrity and social responsibility.
Abstract: [Objective] With the deep application of information technology in smart city construction, GNSS trajectory data have grown explosively, but the trajectory generation process is susceptible to signal interference and sensor faults, producing noise. This paper aims to design novel noise identification and repair algorithms to improve the processing accuracy and quality of raw GNSS trajectory data. [Methods] For trajectory noise identification, this paper proposes an adaptive DBSCAN algorithm based on a density matrix; it is hyperparameter-independent, sensitively captures low-amplitude noise points, and avoids misjudging consecutive turning points. For noise repair, a function-construction repair algorithm based on trajectory segmentation is proposed: first, the Douglas-Peucker (DP) algorithm compresses the trajectory data to achieve segmentation; next, noisy trajectory segments are located and fitting functions are constructed from the valid points within each segment; finally, noisy data are repaired according to the spatiotemporal attributes of neighboring points. Compared with mainstream interpolation algorithms (such as Lagrange, Newton, Hermite, linear, cubic spline, and nearest-neighbor interpolation), this method avoids dependence on global features and thus significantly preserves the local information contained in the noise points. [Results] Based on raw GNSS trajectory data from 1,500 volunteers in Changchun collected from August 19 to September 1, 2024, two sets of comparative experiments were designed. The first set compared the new identification algorithm with the original DBSCAN and its mainstream derivatives (KANN-DBSCAN, BDT-ADBSCAN). Experiments show that the new algorithm achieved the best values on all three metrics, the Silhouette Coefficient (SC), Calinski-Harabasz Index (CHI), and Davies-Bouldin Index (DBI), with improvements of 40.17%-381.80%, 20.03%-235.18%, and 23.42%-79.53%, respectively. The second set compared the new repair algorithm with the six classical interpolation methods (Lagrange, Newton, Hermite, linear, cubic spline, nearest-neighbor); the results show that the new algorithm comprehensively outperforms the comparison methods on the trajectory similarity metric Dynamic Time Warping (DTW), with overall improvements of 43.18%-80.43%. [Conclusions] The proposed noise identification and repair algorithms significantly improve the quality and accuracy of raw GNSS trajectories, can efficiently support large-scale trajectory data preprocessing tasks, and provide a high-quality data foundation for spatiotemporal trajectory mining research.
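The Douglas-Peucker compression step used above for trajectory segmentation can be sketched as follows. This is a textbook 2D implementation; the paper's spatiotemporal variant and its tolerance parameter are not specified in the abstract.

```python
import math

def douglas_peucker(points, epsilon):
    """Douglas-Peucker polyline simplification: find the interior point
    farthest from the chord between the endpoints; if its perpendicular
    distance exceeds `epsilon`, keep it and recurse on both halves,
    otherwise replace the whole run by its two endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0  # guard against identical endpoints
    # perpendicular distance of each interior point to the chord
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm
             for x, y in points[1:-1]]
    split = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[split - 1] > epsilon:
        left = douglas_peucker(points[:split + 1], epsilon)
        right = douglas_peucker(points[split:], epsilon)
        return left[:-1] + right  # drop the duplicated split point
    return [points[0], points[-1]]
```

The surviving vertices delimit the trajectory segments within which the repair algorithm then fits its functions.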