Abstract: Over the last few decades, email has been the major carrier of spam and malicious content over the network, and it remains a primary source of numerous criminal activities on the Internet. Computer forensics is a systematic process of retaining and analyzing saved emails for legal proceedings and other civil matters. Email analysis is challenging not only because various header fields can be forged by hackers or malicious users, but also because of the flexibility of composing, editing, and deleting emails through offline (e.g., MS Outlook) or online (e.g., webmail) email applications. To support such analysis, a number of open-source forensic tools are widely used by practitioners. However, these tools have been developed in isolation rather than through a collaborative approach, so users need to understand to what extent a given tool suits their circumstances and conduct forensic analysis accordingly. In this paper, we examine a set of common features to compare and contrast five popular open-source email forensic tools. The study finds that these tools are not alike; each offers a different set of facilities. By combining analysis tools, it may be possible to obtain more detailed information in email forensics.
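To make the header-forgery point concrete, here is a minimal, hypothetical Python sketch (not drawn from any of the five surveyed tools) that pulls the commonly forged fields and the relay-written Received chain out of a saved .eml message; the file name suspect.eml is a placeholder.

```python
# Illustrative only: extract commonly forged header fields from a saved
# RFC 5322 message (.eml) for manual forensic inspection. Real forensic
# tools add evidence hashing, indexing, and chain-of-custody handling.
from email import policy
from email.parser import BytesParser

def extract_headers(path):
    with open(path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)
    # From, Reply-To, and Message-ID are trivially forgeable by the sender;
    # the Received chain is appended by each relay and is harder to fake.
    return {
        "From": msg["From"],
        "Reply-To": msg["Reply-To"],
        "Message-ID": msg["Message-ID"],
        "Received": msg.get_all("Received") or [],
    }

if __name__ == "__main__":
    for field, value in extract_headers("suspect.eml").items():  # placeholder path
        print(field, ":", value)
```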
Abstract: The sixth-generation (6G) communication system promises unprecedented data density and transformative applications across industries. However, managing heterogeneous data with different distributions in 6G-enabled multi-access edge cloud networks complicates efficient Machine Learning (ML) training and aggregation, often leading to increased energy consumption and reduced model generalization. To address this problem, this research proposes a Weighted Proximal Policy-based Federated Learning approach integrated with ResNet50 and the Scaled Exponential Linear Unit activation function (WPPFL-RS). The proposed method optimizes the allocation of resources such as CPU and memory by enhancing Cyber-twin technology to estimate the computing capacities of edge clouds. The WPPFL-RS approach significantly reduces latency and energy consumption, addressing complex challenges in 6G-enabled edge computing and ensuring efficient resource utilization and enhanced performance in heterogeneous edge networks. WPPFL-RS achieves a minimum latency of 8.20 s on 100 tasks, a significant improvement over the baseline Deep Reinforcement Learning (DRL) approach, which recorded 11.39 s. These results highlight its potential to improve resource utilization and performance in 6G edge networks.
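The abstract does not give the aggregation rule, so the following Python sketch is only a plausible reading of capacity-weighted federated averaging: each edge cloud's model update is weighted by its estimated computing capacity (the quantity the Cyber-twin component is said to provide). All names and shapes here are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of capacity-weighted federated averaging, loosely in the
# spirit of WPPFL-RS. Each edge cloud i contributes a parameter vector
# theta_i; its weight w_i is assumed to come from an estimated capacity.
import numpy as np

def weighted_aggregate(client_params, capacities):
    """client_params: list of parameter vectors; capacities: one estimate per client."""
    w = np.asarray(capacities, dtype=float)
    w = w / w.sum()                    # normalize weights to sum to 1
    stacked = np.stack(client_params)  # shape: (num_clients, num_params)
    return (w[:, None] * stacked).sum(axis=0)

# Toy usage: three edge clouds with unequal estimated capacities.
params = [np.array([1.0, 2.0]), np.array([2.0, 0.0]), np.array([0.0, 4.0])]
capacities = [4.0, 1.0, 1.0]
print(weighted_aggregate(params, capacities))  # capacity-weighted global model
```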
Funding: The SJTU team was partially supported by the "New Generation of AI 2030" Major Project (2018AAA0100900), the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), and the National Natural Science Foundation of China (Grant No. 62076161). Muning Wen is supported by the Wu Wen Jun Honorary Scholarship, AI Institute, Shanghai Jiao Tong University.
Abstract: Transformer architectures have facilitated the development of large-scale and general-purpose sequence models for prediction tasks in natural language processing and computer vision, e.g., GPT-3 and the Swin Transformer. Although originally designed for prediction problems, it is natural to ask about their suitability for sequential decision-making and reinforcement learning (RL) problems, which are typically beset by long-standing issues involving sample efficiency, credit assignment, and partial observability. In recent years, sequence models, especially the Transformer, have attracted increasing interest in the RL community, spawning numerous approaches with notable effectiveness and generalizability. This survey presents a comprehensive overview of recent work aimed at solving sequential decision-making tasks with sequence models such as the Transformer, discussing the connection between sequential decision-making and sequence modeling and categorizing existing approaches by how they utilize the Transformer. Moreover, this paper puts forth various potential avenues for future research intended to improve the effectiveness of large sequence models for sequential decision-making, encompassing theoretical foundations, network architectures, algorithms, and efficient training systems.
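One representative way such works cast RL as sequence modeling (as in the Decision Transformer line of work) is to flatten a trajectory into interleaved return-to-go, state, and action tokens and train a causal Transformer to predict actions. The PyTorch sketch below is a generic illustration of that idea, with made-up dimensions, not the exact model of any surveyed paper.

```python
# Illustrative sketch: treat an RL trajectory as the token sequence
# (R_1, s_1, a_1, R_2, s_2, a_2, ...) and predict each action with a
# causally masked Transformer encoder.
import torch
import torch.nn as nn

class TrajectoryModel(nn.Module):
    def __init__(self, state_dim, act_dim, d_model=64):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)       # return-to-go token
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_act = nn.Linear(act_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.predict_act = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        B, T, _ = states.shape
        # Interleave tokens per timestep: (R_t, s_t, a_t).
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_act(actions)],
            dim=2,
        ).reshape(B, 3 * T, -1)
        # Causal mask so each token attends only to the past.
        mask = nn.Transformer.generate_square_subsequent_mask(3 * T)
        h = self.encoder(tokens, mask=mask)
        # Predict a_t from the state-token positions (indices 1, 4, 7, ...).
        return self.predict_act(h[:, 1::3])

model = TrajectoryModel(state_dim=4, act_dim=2)
out = model(torch.randn(8, 10, 1), torch.randn(8, 10, 4), torch.randn(8, 10, 2))
print(out.shape)  # torch.Size([8, 10, 2])
```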
Funding: Supported by the National Natural Science Foundation of China (No. 62102008), the Peking University People's Hospital Scientific Research Development Funds (RDJP2022-39), the Clinical Medicine Plus X-Young Scholars Project of Peking University, and the Fundamental Research Funds for the Central Universities (PKU2024LCXQ030).
Abstract: Importance: Heart sound auscultation is a routine physical examination used in clinical practice to identify potential cardiac abnormalities. However, accurate interpretation of heart sounds requires specialized training and experience, which limits its generalizability. Deep learning, a subset of machine learning, involves training artificial neural networks to learn from large datasets and perform complex tasks involving intricate patterns. Over the past decade, deep learning has been successfully applied to heart sound analysis, achieving remarkable results and accumulating substantial heart sound data for model training. Although several reviews have summarized deep learning algorithms for heart sound analysis, comprehensive summaries of the available heart sound data and the clinical applications are lacking. Highlights: This review compiles the commonly used heart sound datasets, introduces the fundamentals and state-of-the-art techniques in heart sound analysis and deep learning, and summarizes the current applications of deep learning for heart sound analysis, along with their limitations and areas for future improvement. Conclusions: The integration of deep learning into heart sound analysis represents a significant advancement in clinical practice. The growing availability of heart sound datasets and the continuous development of deep learning techniques contribute to the improvement and broader clinical adoption of these models. However, ongoing research is needed to address existing challenges and refine these technologies for broader clinical use.
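As a concrete illustration of the kind of model such reviews cover, the PyTorch sketch below classifies a precomputed heart-sound spectrogram with a small CNN; the architecture and the binary normal/abnormal setup are illustrative assumptions, not a specific published model.

```python
# Illustrative sketch: classify a time-frequency representation of a
# phonocardiogram (e.g., a mel-spectrogram) with a small CNN.
import torch
import torch.nn as nn

class HeartSoundCNN(nn.Module):
    def __init__(self, n_classes=2):  # normal vs. abnormal, for illustration
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size vector
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, spec):  # spec: (batch, 1, freq_bins, time_frames)
        return self.classifier(self.features(spec).flatten(1))

# Toy usage: a batch of 4 spectrograms with 64 frequency bins and 128 frames.
model = HeartSoundCNN()
logits = model(torch.randn(4, 1, 64, 128))
print(logits.shape)  # torch.Size([4, 2])
```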