Abstract: Algorithmic recommender systems were originally intended to counter the negative effects of information overload, but in practice they can also be used as a "hypernudge", a new form of online manipulation that intentionally exploits gaps in people's cognition and decision-making to influence their choices, which is particularly detrimental to the sustainable development of digital markets. Limiting harmful algorithmic online manipulation in digital markets has become a challenging task. Both the EU and China have responded to this issue, and the differences between their approaches are evident enough that their governance measures can serve as typical cases. The EU focuses on improving citizens' digital literacy and their ability to participate in digital social life so that they can address the issue independently, and seeks to curb harmful manipulation through binding, directly applicable hard law as part of its digital strategy. By comparison, although certain existing Chinese legal norms already make stipulations on manipulation, China continues to issue specific departmental regulations governing algorithmic recommender services, and places more emphasis on addressing the collective harm caused by algorithmic online manipulation through a multi-party co-governance approach in which supervision is led by the government or industry associations.
Abstract: The study conducts a bibliometric review of artificial intelligence applications in two areas: the entrepreneurial finance literature, and the corporate finance literature with implications for entrepreneurship. A rigorous search and screening of the Web of Science Core Collection identified 1,890 journal articles for analysis. The bibliometrics provide a detailed view of the knowledge field and point to underdeveloped research directions. An important contribution comes from insights into artificial intelligence methods in entrepreneurship. The results demonstrate a high representation of artificial neural networks, deep neural networks, and support vector machines across almost all identified topic niches. In contrast, applications of topic modeling, fuzzy neural networks, and growing hierarchical self-organizing maps are rare. Additionally, we take a broader view by addressing the problem of applying artificial intelligence in economic science. Specifically, we present the foundational paradigm and a bespoke demonstration of the Monte Carlo randomized algorithm.
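For readers unfamiliar with the class of method named above, the following is a minimal sketch of a Monte Carlo randomized algorithm, using the textbook example of estimating pi by uniform sampling. It is an illustrative stand-in under assumed parameters, not the paper's bespoke demonstration.

```python
import random

def monte_carlo_pi(n_samples: int, seed: int = 42) -> float:
    """Estimate pi by sampling uniform points in the unit square and
    counting the fraction that fall inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

if __name__ == "__main__":
    # The estimate converges at rate O(1/sqrt(n)), the defining
    # trade-off of Monte Carlo randomized algorithms.
    for n in (1_000, 100_000, 10_000_000):
        print(f"n={n:>10,}: pi ~ {monte_carlo_pi(n):.5f}")
```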
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 12404531) and the Natural Science Foundation of the Higher Education Institutions of Jiangsu Province, China (Grant No. 23KJB140011).
Abstract: Efficient elastic wave focusing is crucial in materials and physical engineering. Elastic coding metasurfaces, which are innovative planar artificial structures, show great potential for wave focusing. However, elastic coding lenses (ECLs) still suffer from low focusing performance, thicknesses comparable to the wavelength, and frequency sensitivity. Here, we consider both the structural and material properties of the coding unit, further compressing the thickness of the ECL. We chose the simplest ECL, which consists of only two coding units: coding unit 0 is a straight structure made of a carbon-fiber-reinforced composite, and coding unit 1 is a zigzag structure made of aluminum; the ECL constructed from them is only 1/8 of the wavelength thick. Building on the theoretical design, the arrangement of coding units is further optimized using a genetic algorithm, which significantly improves the focusing performance of the lens at different focal positions and frequencies. This study provides a more effective way to control vibration and noise in advanced structures.
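The abstract does not specify the genetic algorithm's encoding or fitness function, so the sketch below shows a generic genetic algorithm over a binary coding sequence, the natural encoding for a two-unit ECL. The fitness here (agreement with a hypothetical target profile, `TARGET`) is a toy stand-in for a focusing-performance simulation, and all hyperparameters are illustrative assumptions.

```python
import random

def evolve(fitness, n_bits=16, pop_size=60, generations=200,
           crossover_rate=0.9, mutation_rate=0.02, seed=0):
    """Generic genetic algorithm over binary coding sequences.
    `fitness` maps a tuple of 0/1 codes to a score to maximize."""
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(n_bits))
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]  # elitism: carry over the two best arrangements
        while len(next_pop) < pop_size:
            # tournament selection of two parents
            p1 = max(rng.sample(scored, 3), key=fitness)
            p2 = max(rng.sample(scored, 3), key=fitness)
            if rng.random() < crossover_rate:  # single-point crossover
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1
            # bit-flip mutation with per-bit probability
            child = tuple(b ^ (rng.random() < mutation_rate) for b in child)
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy fitness standing in for a focusing-performance simulation:
# reward agreement with a hypothetical target coding profile.
TARGET = (0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0)
best = evolve(lambda seq: sum(a == b for a, b in zip(seq, TARGET)))
print("best arrangement:", best)
```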
Funding: Supported by the Social Science Foundation of Liaoning Province (L23BJY022).
Abstract: We explored the effects of algorithmic opacity on employees' playing dumb and evasive hiding, as opposed to rationalized hiding. We examined the mediating role of job insecurity and the moderating role of employee-AI collaboration. Participants were 421 full-time employees (46.32% female, 31.83% junior employees) from a variety of organizations and industries that interact with AI. Employees completed measures of algorithmic opacity, job insecurity, knowledge hiding, employee-AI collaboration, and control variables. Structural equation modeling indicated that algorithmic opacity exacerbated employees' job insecurity, and that job insecurity mediated the relationship between algorithmic opacity and playing dumb and evasive hiding, but not rationalized hiding. The relationship between algorithmic opacity and playing dumb and evasive hiding was more positive when employee-AI collaboration was higher. These findings suggest that employee-AI collaboration reinforces the indirect relationship between algorithmic opacity and playing dumb and evasive hiding. Our study contributes to research on human-AI collaboration by exploring its dark side.
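To make the mediation logic above concrete, here is a minimal percentile-bootstrap test of an indirect effect. The variables and effect sizes are simulated stand-ins for the study's measures, not its actual data, and the two-regression approach is a simplification of the full structural equation model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 421  # sample size matching the study

# Simulated stand-ins (invented effect sizes, not the study's data):
opacity = rng.normal(size=n)                      # X: algorithmic opacity
insecurity = 0.4 * opacity + rng.normal(size=n)   # M: job insecurity
hiding = 0.5 * insecurity + rng.normal(size=n)    # Y: playing dumb / evasive hiding

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS fits: M ~ X, then Y ~ X + M."""
    a = np.polyfit(x, m, 1)[0]                    # path a: X -> M
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # path b: M -> Y | X
    return a * b

# Percentile bootstrap confidence interval for the indirect effect
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(opacity[idx], insecurity[idx], hiding[idx]))
print("indirect effect:", indirect_effect(opacity, insecurity, hiding))
print("95% CI:", np.percentile(boot, [2.5, 97.5]))
```

A confidence interval excluding zero is the usual evidence for mediation; the moderating role of employee-AI collaboration would additionally enter as an interaction term on the first path.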
Funding: Supported by the University of Warsaw under the 'New Ideas 3B' competition in POB III, implemented under the 'Excellence Initiative - Research University' Programme.
Abstract: This paper investigates the optimization of data sampling and target labeling techniques to enhance algorithmic trading strategies in cryptocurrency markets, focusing on Bitcoin (BTC) and Ethereum (ETH). Traditional data sampling methods, such as time bars, often fail to capture the nuances of the continuously active and highly volatile cryptocurrency market and force traders to wait for arbitrary points in time. To address this, we propose an alternative approach using information-driven sampling methods, including the CUSUM filter, range bars, volume bars, and dollar bars, and evaluate their performance using tick-level data from January 2018 to June 2023. Additionally, we introduce the Triple Barrier method for target labeling, which offers a solution tailored to algorithmic trading, as opposed to the widely used next-bar prediction. We empirically assess the effectiveness of these data sampling and labeling methods for crafting profitable trading strategies. The results demonstrate that combining CUSUM-filtered data with Triple Barrier labeling outperforms traditional time bars and next-bar prediction, achieving consistently positive trading performance even after accounting for transaction costs. Moreover, our system enables trading decisions at any point in time based on market conditions, an advantage over traditional methods that rely on fixed time intervals. Furthermore, the paper contributes to the ongoing debate on the applicability of Transformer models to time series classification in algorithmic trading by evaluating various Transformer architectures (the vanilla Transformer encoder, FEDformer, and Autoformer) alongside other deep learning architectures and classical machine learning models, revealing insights into their relative performance.
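The CUSUM filter and Triple Barrier method are standard information-driven techniques popularized in the financial machine learning literature. The sketch below gives a minimal pandas implementation of both on synthetic prices; the parameters (`threshold`, `pt`, `sl`, `horizon`) are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import pandas as pd

def cusum_filter(prices: pd.Series, threshold: float) -> pd.DatetimeIndex:
    """Symmetric CUSUM filter: emit an event whenever cumulative
    log-return drift since the last event exceeds +/- threshold."""
    events, s_pos, s_neg = [], 0.0, 0.0
    log_ret = np.log(prices).diff().dropna()
    for t, r in log_ret.items():
        s_pos, s_neg = max(0.0, s_pos + r), min(0.0, s_neg + r)
        if s_pos > threshold:
            s_pos = 0.0
            events.append(t)
        elif s_neg < -threshold:
            s_neg = 0.0
            events.append(t)
    return pd.DatetimeIndex(events)

def triple_barrier_label(prices, t0, pt=0.02, sl=0.02, horizon=50):
    """Label one event: +1 if the profit-taking barrier is hit first,
    -1 for the stop-loss, 0 if the vertical (time) barrier expires."""
    path = prices.loc[t0:].iloc[: horizon + 1] / prices.loc[t0] - 1.0
    for ret in path.iloc[1:]:
        if ret >= pt:
            return 1
        if ret <= -sl:
            return -1
    return 0

# Usage on a synthetic random-walk price series
idx = pd.date_range("2018-01-01", periods=1000, freq="min")
steps = np.random.default_rng(0).normal(0, 1e-3, 1000).cumsum()
prices = pd.Series(100 * np.exp(steps), index=idx)
events = cusum_filter(prices, threshold=0.005)
labels = [triple_barrier_label(prices, t) for t in events]
print(len(events), "events; label counts:", pd.Series(labels).value_counts().to_dict())
```

Because events fire on accumulated information rather than on the clock, the resulting samples concentrate where the market actually moves, which is the motivation the abstract gives for abandoning fixed time bars.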
Abstract: With the rapid advancement of medical artificial intelligence (AI) technology, particularly the widespread adoption of AI diagnostic systems, ethical challenges in medical decision-making have garnered increasing attention. This paper analyzes the limitations of algorithmic ethics in medical decision-making and explores accountability mechanisms, aiming to provide theoretical support for ethically informed medical practice. The study highlights how the opacity of AI algorithms complicates the attribution of decision-making responsibility, undermines doctor-patient trust, and affects informed consent. By thoroughly investigating issues such as the algorithmic "black box" problem and data privacy protection, we develop accountability assessment models to address ethical concerns related to medical resource allocation. Furthermore, this research examines the effective implementation of AI diagnostic systems through case studies of both successful and unsuccessful applications, extracting lessons on accountability mechanisms and response strategies. Finally, we emphasize that establishing a transparent accountability framework is crucial for raising the ethical standards of medical AI systems and protecting patients' rights and interests.
Abstract: This paper presents a novel method for reconstructing a highly accurate 3D model of the human nose from 2D images and pre-marked landmarks using algorithmic methods. The study focuses on reconstructing a 3D nose model tailored for applications in healthcare and cosmetic surgery. The approach leverages advanced image processing techniques, 3D Morphable Models (3DMM), and deformation techniques to overcome the limitations of deep learning models, particularly the interpretability issues commonly encountered in medical applications. The proposed method estimates the 3D coordinates of landmark points using a 3D structure estimation algorithm. Sub-landmarks are extracted through image processing and interpolation. The initial surface is generated using a 3DMM, though its accuracy remains limited; to enhance precision, deformation techniques are then applied, utilizing the coordinates of 76 identified landmarks and sub-landmarks. The resulting 3D nose model is evaluated by comparing landmark distances and shape similarity against expert-determined ground truth on 30 Vietnamese volunteers aged 18 to 47, all of whom were either preparing for or required nasal surgery. Experimental results demonstrate strong agreement between the reconstructed 3D model and the ground truth: the method achieved a mean landmark distance error of 0.631 mm and a shape error of 1.738 mm, demonstrating its potential for medical applications.
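The abstract does not detail the specific deformation technique; a common landmark-driven approach is a thin-plate-spline warp that interpolates the displacement of the 76 landmarks smoothly across all mesh vertices. The sketch below illustrates that approach with random stand-in data, so the shapes and scales are assumptions rather than the paper's pipeline.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def deform_to_landmarks(vertices, src_lm, dst_lm):
    """Warp a template mesh so its landmark points land on the target
    landmarks, interpolating the displacement field over all vertices
    with a thin-plate-spline radial basis function."""
    displacement = RBFInterpolator(src_lm, dst_lm - src_lm,
                                   kernel="thin_plate_spline")
    return vertices + displacement(vertices)

# Toy demonstration with random stand-ins for the 3DMM surface and landmarks
rng = np.random.default_rng(0)
template = rng.normal(size=(2000, 3))              # 3DMM initial nose surface
src = rng.normal(size=(76, 3))                     # landmarks on the template
dst = src + rng.normal(scale=0.05, size=(76, 3))   # estimated 3D landmarks
warped = deform_to_landmarks(template, src, dst)
print(warped.shape)  # (2000, 3): every vertex displaced consistently
```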
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through a large group grant (GRP.2/663/46).
Abstract: Domain Generation Algorithms (DGAs) continue to pose a significant threat in modern malware infrastructures by enabling resilient and evasive communication with Command and Control (C&C) servers. Traditional detection methods, rooted in statistical heuristics, feature engineering, and shallow machine learning, struggle to adapt to the increasing sophistication, linguistic mimicry, and adversarial variability of DGA variants. The emergence of Large Language Models (LLMs) marks a transformative shift in this landscape. Leveraging deep contextual understanding, semantic generalization, and few-shot learning capabilities, LLMs such as BERT, GPT, and T5 have shown promising results in detecting both character-based and dictionary-based DGAs, including previously unseen (zero-day) variants. This paper provides a comprehensive and critical review of LLM-driven DGA detection, introducing a structured taxonomy of LLM architectures, evaluating the linguistic and behavioral properties of benchmark datasets, and comparing recent detection frameworks across accuracy, latency, robustness, and multilingual performance. We also highlight key limitations, including challenges in adversarial resilience, model interpretability, deployment scalability, and privacy risks. To address these gaps, we present a forward-looking research roadmap encompassing adversarial training, model compression, cross-lingual benchmarking, and real-time integration with SIEM/SOAR platforms. This survey aims to serve as a foundational resource for advancing the development of scalable, explainable, and operationally viable LLM-based DGA detection systems.
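As a concrete illustration of the encoder-based detection the survey reviews, the sketch below scores domain names with a BERT-style sequence classifier via Hugging Face Transformers. The checkpoint name is a generic base model and its classification head is untrained here, so the probabilities are meaningful only after fine-tuning on labeled benign/DGA domains; it is a scaffold, not a working detector.

```python
# pip install transformers torch
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Generic base checkpoint; in practice one would fine-tune this (or load
# an already fine-tuned checkpoint) for binary benign-vs-DGA classification.
MODEL = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

domains = ["google.com", "xjwqkzletwmfkp.biz"]  # benign vs. DGA-looking
inputs = tokenizer(domains, padding=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
for d, p in zip(domains, probs):
    # Untrained head: these scores are illustrative of the interface only.
    print(f"{d}: P(DGA) = {p[1].item():.3f}")
```

The appeal of this setup over shallow feature-engineered detectors is that the tokenizer and encoder also capture word-like structure, which is what dictionary-based DGAs exploit to mimic legitimate domains.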
Abstract: This study investigates how artificial intelligence (AI) algorithms enable mainstream media to achieve precise emotional matching and improve communication efficiency through reconstructed communication logic. As digital intelligence technology rapidly evolves, mainstream media organizations are increasingly leveraging AI-driven empathy algorithms to enhance audience engagement and optimize content delivery. This research employs a mixed-methods approach, combining quantitative analysis of algorithmic performance metrics with qualitative examination of media communication patterns. Through a systematic review of 150 academic papers and analysis of data from 12 major media platforms, the study finds that algorithmic empathy systems can improve emotional resonance by 34.7% and increase audience engagement by 28.3% compared with traditional communication methods. The findings demonstrate that AI algorithms reconstruct media communication logic through three primary pathways: emotional pattern recognition, personalized content curation, and real-time sentiment adaptation. However, the study also identifies significant challenges, including algorithmic bias, concerns about emotional authenticity, and the ethical implications of automated empathy. The research contributes to understanding how mainstream media can leverage AI technology to build high-quality empathetic communication while maintaining journalistic integrity and social responsibility.
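Of the three pathways above, real-time sentiment adaptation is the most directly mechanizable. The toy sketch below scores audience text with a small hand-made lexicon (a stand-in for a production sentiment model) and matches it to the closest-toned content; the lexicon, headlines, and matching rule are all invented purely to make the idea concrete.

```python
# Toy illustration of sentiment-matched content selection.
POS = {"great", "hope", "win", "love", "progress", "recovery"}
NEG = {"crisis", "fear", "loss", "anger", "decline"}

def sentiment(text: str) -> float:
    """Crude lexicon score in [-1, 1]: positive minus negative word
    counts, normalized by text length."""
    words = text.lower().split()
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return score / max(len(words), 1)

headlines = [
    "Community rallies with hope after flood recovery win",
    "Economic decline deepens fear and anger in markets",
]

audience_comment = "so much fear and anger about the loss"
tone = sentiment(audience_comment)
# Select the headline whose tone most closely matches the audience's.
best = min(headlines, key=lambda h: abs(sentiment(h) - tone))
print(f"audience tone={tone:+.2f} -> matched headline: {best!r}")
```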