With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception processes. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted to fully encrypted devices. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed from various perspectives using two ensemble models and three Deep Learning (DL) models. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score for encrypted traffic was approximately 0.98, which is 4.3% higher than that of unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, the recall in the UNSW-NB15 (Encrypted) dataset improved by up to 23.0%, and in the CICIoT-2023 (Encrypted) dataset by 20.26%, showing a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments. However, the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
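The abstract does not name the eight sampling techniques it compares; random oversampling is one of the simplest members of that family. A minimal plain-Python sketch (the function and the tiny flow dataset are illustrative, not from the paper):

```python
import random

def random_oversample(features, labels, seed=0):
    """Duplicate minority-class samples until every class matches the majority size."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(features, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        # keep originals, then draw the extra samples with replacement
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        for x in xs + extra:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y

# Tiny imbalanced example: 4 "benign" flows vs 1 "attack" flow
X = [[0.1], [0.2], [0.3], [0.4], [9.9]]
y = ["benign", "benign", "benign", "benign", "attack"]
Xb, yb = random_oversample(X, y)
print(yb.count("benign"), yb.count("attack"))  # → 4 4
```

More elaborate schemes (e.g. synthetic minority oversampling) interpolate new points instead of duplicating, but the balancing goal is the same.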
The exponential growth of the Internet of Things (IoT) has introduced significant security challenges, with zero-day attacks emerging as one of the most critical and challenging threats. Traditional Machine Learning (ML) and Deep Learning (DL) techniques have demonstrated promising early detection capabilities. However, their effectiveness is limited when handling the vast volumes of IoT-generated data due to scalability constraints, high computational costs, and the costly, time-intensive process of data labeling. To address these challenges, this study proposes a Federated Learning (FL) framework that leverages collaborative and hybrid supervised learning to enhance cyber threat detection in IoT networks. By employing Deep Neural Networks (DNNs) and decentralized model training, the approach reduces computational complexity while improving detection accuracy. The proposed model demonstrates robust performance, achieving accuracies of 94.34%, 99.95%, and 87.94% on the publicly available Kitsune, Bot-IoT, and UNSW-NB15 datasets, respectively. Furthermore, its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets, TON-IoT and IoT-23, using a Deep Federated Learning (DFL) framework, underscoring the generalization and effectiveness of the model in heterogeneous and decentralized IoT environments. Experimental results demonstrate superior performance over existing methods, establishing the proposed framework as an efficient and scalable solution for IoT security.
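The abstract does not specify its aggregation rule; FedAvg-style weighted parameter averaging is the canonical choice in such DFL frameworks. A minimal sketch, assuming each client reports its layer parameters and local sample count (names and numbers are illustrative):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client model parameters (FedAvg-style aggregation)."""
    total = sum(client_sizes)
    agg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            agg[i] += (n / total) * w  # clients with more data count more
    return agg

# Two clients, each holding the same one-layer parameter shape
c1 = [np.array([1.0, 1.0])]
c2 = [np.array([3.0, 3.0])]
global_w = fed_avg([c1, c2], client_sizes=[10, 30])
print(global_w[0])  # → [2.5 2.5], weighted toward the larger client
```

The raw data never leaves the clients; only parameters are exchanged, which is what keeps the scheme decentralized.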
Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls, which are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretations. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance in other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
Processes supported by process-aware information systems are subject to continuous and often subtle changes due to evolving operational, organizational, or regulatory factors. These changes, referred to as incremental concept drift, gradually alter the behavior or structure of processes, making their detection and localization a challenging task. Traditional process mining techniques frequently assume process stationarity and are limited in their ability to detect such drift, particularly from a control-flow perspective. The objective of this research is to develop an interpretable and robust framework capable of detecting and localizing incremental concept drift in event logs, with a specific emphasis on the structural evolution of control-flow semantics in processes. We propose DriftXMiner, a control-flow-aware hybrid framework that combines statistical, machine learning, and process model analysis techniques. The approach comprises three key components: (1) a Cumulative Drift Scanner that tracks directional statistical deviations to detect early drift signals; (2) a Temporal Clustering and Drift-Aware Forest Ensemble (DAFE) to capture distributional and classification-level changes in process behavior; and (3) Petri net-based process model reconstruction, which enables the precise localization of structural drift using transition deviation metrics and replay fitness scores. Experimental validation on the BPI Challenge 2017 event log demonstrates that DriftXMiner effectively identifies and localizes gradual and incremental process drift over time. The framework achieves a detection accuracy of 92.5%, a localization precision of 90.3%, and an F1-score of 0.91, outperforming competitive baselines such as CUSUM + Histograms and ADWIN + Alpha Miner. Visual analyses further confirm that identified drift points align with transitions in control-flow models and behavioral cluster structures. DriftXMiner offers a novel and interpretable solution for incremental concept drift detection and localization in dynamic, process-aware systems. By integrating statistical signal accumulation, temporal behavior profiling, and structural process mining, the framework enables fine-grained drift explanation and supports adaptive process intelligence in evolving environments. Its modular architecture supports extension to streaming data and real-time monitoring contexts.
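The Cumulative Drift Scanner is described as tracking directional statistical deviations, and the abstract benchmarks against a CUSUM baseline. A generic one-sided CUSUM sketch, which flags the point where small deviations from a reference value have accumulated past a threshold (the parameters and signal below are illustrative, not from the paper):

```python
def cusum_drift(values, target, drift_allowance=0.5, threshold=4.0):
    """One-sided CUSUM: return the first index where the cumulative
    positive deviation from `target` exceeds `threshold`, else None."""
    s = 0.0
    for i, v in enumerate(values):
        # accumulate deviation beyond the allowance; never go below zero
        s = max(0.0, s + (v - target - drift_allowance))
        if s > threshold:
            return i
    return None

# Stable metric around 10, then a gradual upward drift
log_metric = [10, 10.1, 9.9, 10, 11, 12, 13, 14, 15]
print(cusum_drift(log_metric, target=10.0))  # → 6
```

The allowance makes the detector ignore noise; the threshold trades detection delay against false alarms, which is why such detectors are tuned per log.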
In the past decade, online Peer-to-Peer (P2P) lending platforms have transformed the lending industry, which has been historically dominated by commercial banks. Information technology breakthroughs such as big data-based financial technologies (Fintech) have been identified as important disruptive driving forces for this paradigm shift. In this paper, we take an information economics perspective to investigate how big data affects the transformation of the lending industry. By identifying how signaling and search costs are reduced by big data analytics for credit risk management of P2P lending, we discuss how information asymmetry is reduced in the big data era. Rooted in the lending business, we propose a theory on the economics of big data and outline a number of research opportunities and challenging issues.
In this paper, the problem of increasing information transfer authenticity is formulated, and control methods and algorithms based on the use of statistical and structural information redundancy are presented to address it. It is assumed that the controlled information is submitted as text element images and contains redundancy caused by statistical relations and the non-uniform probability distribution of the transmitted data. The use of statistical redundancy allows the development of adaptive authenticity control rules that take into account the non-stationarity of image data during information transfer. The structural redundancy peculiar to the image container in a data transfer package is used to develop new rules for controlling information authenticity on the basis of pattern recognition mechanisms. The techniques offered in this work are used to estimate the authenticity of the structure of data transfer packages. The results of a comparative analysis of the developed methods and algorithms show improved efficiency in terms of the probability of undetected errors, labor input, and implementation cost.
An information representation framework is designed in this paper to overcome the problem of semantic heterogeneity in distributed environments. Emphasis is placed on establishing an XML-oriented semantic data model and the mapping between XML data based on a global ontology semantic view. The framework is implemented as a Web Service, which enhances information processing efficiency, accuracy, and semantic interoperability.
Considering the secure authentication problem for equipment support information networks, a clustering method based on business information flow is proposed. Based on the proposed method, a cluster-based distributed authentication mechanism and an optimal design method for a distributed certificate authority (CA) are designed. Compared with conventional network clustering methods, the proposed clustering method considers the business information flow of the network and the tasks of the network nodes, which can decrease the communication overhead between clusters and effectively improve network efficiency. Identity authentication protocols between nodes in the same cluster and in different clusters are designed. From the perspectives of network security and the availability of the distributed authentication service, the secure service success rate of the distributed CA is defined and taken as the objective of the optimal design, while the efficiency of successfully providing the distributed certificate service is taken as the constraint condition. The determination method for the optimal value of the threshold is investigated. The proposed method can provide references for the optimal design of a distributed CA.
Background: We examine the signaling effect of borrowers’ social media behavior, especially self-disclosure behavior, on the default probability of money borrowers on a peer-to-peer (P2P) lending site. Method: We use a unique dataset that combines loan data from a large P2P lending site with the borrowers’ social media presence data from a popular social media site. Results: Through a natural experiment enabled by an instrumental variable, we identify two forms of social media information that act as signals of borrowers’ creditworthiness: (1) borrowers’ choice to self-disclose their social media account to the P2P lending site, and (2) borrowers’ social media behavior, such as their social network scope and social media engagement. Conclusion: This study offers new insights for screening borrowers in P2P lending and a novel usage of social media information.
The need for information systems in organizations and economic units is growing, as the large volumes of data generated by business processes must be processed into information useful to many users. New and distinctive management accounting systems can readily meet the financial, accounting, and management needs of institutions and individuals while taking into account the accuracy, speed, and confidentiality of the information for which the system is designed. This paper describes a computerized system that predicts the budget for the new year from past budgets using time series analysis, keeping forecast errors to a minimum, and that controls the budget during the year: it compares planned against actual figures, calculates deviations, measures performance ratios, and computes a number of budget-related indicators, such as the capital intensity rate, the growth rate, and the profitability ratio, giving a clear indication of whether these ratios are good or not. The system has a positive impact on information systems through its ability to perform complex calculations and process paperwork faster than before, and it offers high flexibility, accommodating any adjustments required to help the relevant parties control financial matters and take appropriate decisions.
With the rapid growth of information and communication technology (ICT), violations of information privacy have increased in recent years. Privacy concerns have re-emerged precisely because people perceive a threat from new ICT equipped with enhanced capabilities for surveillance, storage, retrieval, and diffusion of personal information. Given the prevalence and ease of use of ICT, it is necessary to pay close attention to how ICT can threaten the privacy of individuals on the Internet. As Email and P2P (Peer-to-Peer) tools are among the most popular ICT, this paper aims at understanding their respective dissemination patterns in the spreading of personal private information. To this purpose, this paper uses dynamic modeling techniques to simulate how sensitive or private personal information propagates. In this study, an Email propagation model and a Susceptible-Infected-Removed (SIR) model are proposed to simulate the propagation patterns of Email and P2P networks, respectively. Knowing their dissemination patterns would be helpful for system designers, ICT managers, corporate IT personnel, educators, policy makers, and legislators to incorporate consciousness of social and ethical information issues into the protection of information privacy.
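A discrete-time version of the SIR dynamics used here for P2P propagation takes only a few lines; the contact rate beta, recovery rate gamma, and initial state below are illustrative values, not the paper's:

```python
def sir_step(s, i, r, beta, gamma):
    """One discrete step of the Susceptible-Infected-Removed dynamics."""
    new_infections = beta * s * i   # contacts between susceptible and infected
    recoveries = gamma * i          # infected nodes become removed
    return s - new_infections, i + new_infections - recoveries, r + recoveries

s, i, r = 0.99, 0.01, 0.0  # fractions of the population
for _ in range(100):
    s, i, r = sir_step(s, i, r, beta=0.4, gamma=0.1)
print(round(s + i + r, 6))  # → 1.0, the population is conserved
```

In the privacy reading, "infected" nodes are those currently spreading a piece of private information and "removed" nodes have stopped forwarding it.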
Due to the overwhelming characteristics of the Internet of Things (IoT) and its adoption in approximately every aspect of our lives, the concept of individual devices' privacy has gained prominent attention from both customers, i.e., people, and industries, as wearable devices collect sensitive information about patients (both admitted and outdoor) in smart healthcare infrastructures. In addition to privacy, outliers or noise are among the crucial issues directly correlated with IoT infrastructures, as most member devices are resource-limited and could generate or transmit false data that must be refined before processing, i.e., transmitting. Therefore, the development of privacy-preserving information fusion techniques is highly encouraged, especially those designed for smart IoT-enabled domains. In this paper, we present an effective hybrid approach that can refine raw data values captured by the respective member device before transmission while preserving its privacy through the utilization of the differential privacy technique in IoT infrastructures. A sliding window, i.e., δi-based dynamic programming methodology, is implemented at the device level to ensure precise and accurate detection of outliers or noisy data and to refine it prior to activation of the respective transmission activity. Additionally, an appropriate privacy budget has been selected, which is enough to ensure the privacy of every individual module, i.e., a wearable device such as a smartwatch attached to the patient's body. In contrast, the end module, i.e., the server in this case, can extract important information with approximately the maximum level of accuracy. Moreover, refined data has been processed by adding appropriate noise through the Laplace mechanism to make it useless or meaningless for adversary modules in the IoT. The proposed hybrid approach is trusted from the perspectives of both the device's privacy and the integrity of the transmitted information. Simulation and analytical results have proved that the proposed privacy-preserving information fusion technique for wearable devices is an ideal solution for resource-constrained infrastructures such as IoT and the Internet of Medical Things, where both device privacy and information integrity are important. Finally, the proposed hybrid approach is proven against well-known intruder attacks, especially those related to the privacy of the respective device in IoT infrastructures.
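The Laplace mechanism the authors apply can be sketched generically: noise with scale sensitivity/epsilon is added before release. The sensitivity, epsilon, and heart-rate readings below are illustrative, not the paper's privacy budget:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via inverse-transform from a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Release value + Laplace(sensitivity / epsilon) noise (epsilon-DP)."""
    return value + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(7)
readings = [72, 74, 71, 73]  # hypothetical heart-rate samples from a smartwatch
noisy = [laplace_mechanism(v, sensitivity=1.0, epsilon=0.5, rng=rng) for v in readings]
print([round(n, 1) for n in noisy])
```

Smaller epsilon means larger noise and stronger privacy; the server still recovers aggregate statistics accurately because the noise has zero mean.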
The goal of this manuscript is to present a research finding, based on a study conducted to identify, examine, and validate Social Media (SM) socio-technical information security factors, in line with usable-security principles. The study followed literature search techniques, as well as theoretical and empirical methods of factor validation. The literature search strategy included Boolean keyword search and citation guides, using mainly Web of Science databases. As guided by the study objectives, 9 SM socio-technical factors were identified, verified, and validated. Both theoretical and empirical validation processes were followed. A theoretical validity test was conducted on 45 Likert scale items, involving 10 subject experts. From the experts' score ratings, the Content Validity Index (CVI) was calculated to determine the degree to which the identified factors exhibit appropriate items for the construct being measured, and 7 factors attained an adequate level of validity index. For the reliability test, 32 respondents and 45 Likert scale items were used, and Cronbach's alpha coefficients (α-values) were generated using SPSS. Subsequently, 8 factors attained an adequate level of reliability. Overall, the validated factors include: 1) usability: visibility, learnability, and satisfaction; 2) education and training: help and documentation; 3) SM technology development: error handling and revocability; 4) information security: security, privacy, and expressiveness. In this case, the confirmed factors would add knowledge by providing a theoretical basis for rationalizing information security requirements for SM usage.
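The reliability step rests on Cronbach's alpha, which is straightforward to compute directly from per-item variances and the variance of respondents' totals. The item scores below are made up for illustration, not the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of per-item score lists (one per Likert item)."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents
    def var(xs):                         # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[j] for item in items) for j in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Three items answered by four respondents (rows = items, columns = respondents)
scores = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 2, 4],
]
print(round(cronbach_alpha(scores), 2))  # → 0.82
```

A common rule of thumb treats alpha of at least 0.7 as adequate reliability, which is presumably the kind of cutoff behind "8 factors attained an adequate level".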
Our study aims to take a closer look at China's current information literacy (IL) program standards at secondary schools and to analyze their successes and/or failures in comparison with those of the United States, in terms of fulfilling their respective mission-oriented mandates. Our research findings show that China's current IL standards for high schools place a disproportionate emphasis on information technology (IT). Moreover, the stipulations of these IL standards are narrowly construed and not solidly grounded in a broad and comprehensive educational perspective. We suggest that there are two underlying causes for this set of unsound IL standards in China. Firstly, there is a lack of collaboration between two major competing forces engaged in IL curricular development and research in China: professionals in the educational IT discipline vis-à-vis those in Library and Information Science. Secondly, library professionals have very limited influence on major socio-cultural policies, even at their own institutions. As a result, this paper recommends the following three possible measures, which may help remedy this situation strategically: 1) establishing a set of new IL curriculum standards based on an IL-centered educational perspective; 2) establishing a teacher-librarian training program to promote school librarians' role in IL education; and 3) strengthening the research and development of an online IL education program and an accompanying evaluation mechanism.
Evaluating government openness is important in monitoring government performance and promoting government transparency. Therefore, it is necessary to develop an evaluation system for the information openness of local governments. In order to select evaluation indicators, we conducted a content analysis of current evaluation systems constructed by researchers and local governments and of the materials of a case study on a local government. The resulting evaluation system is composed of 5 first-tier indicators, 30 second-tier indicators, and 69 third-tier indicators. The Delphi Method and the Analytic Hierarchy Process (AHP) Method were then adopted to determine the weight of each indicator. Finally, the practicability of the system was tested by an evaluation of the local government of Tianjin Binhai New Area, which has been undergoing administrative reform and attempting to reinvent itself over the past 5 years.
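The AHP weighting step turns pairwise comparisons between indicators into priority weights; a common shortcut approximates the principal eigenvector by row geometric means. The 3x3 comparison matrix below is hypothetical, not the study's:

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the geometric-mean method."""
    m = np.asarray(pairwise, dtype=float)
    gm = m.prod(axis=1) ** (1.0 / m.shape[1])  # geometric mean of each row
    return gm / gm.sum()                        # normalize to sum to 1

# Hypothetical comparisons among 3 indicators (reciprocal by construction:
# indicator 1 is 3x as important as 2, 5x as important as 3, etc.)
M = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w = ahp_weights(M)
print([round(x, 2) for x in w])  # weights sum to 1, ordered by dominance
```

A full AHP application would also check the consistency ratio of the matrix before accepting the weights.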
This paper presents a system that alerts of dangerous situations a child may be in, by applying context information collected from a home network to an ontology capable of inference. Radio Frequency Identification (RFID) and sensors were used in the configuration of the home network to obtain the raw data to convert into context information. To express the ontology, the Web Ontology Language (OWL) was used to provide inference over context information. Simple Object Access Protocol (SOAP) messages were then used to notify of the dangerous situations a child may be involved in via mobile devices. The proposed system consists of a Context Manager, a Service Manager, and a Notification Manager. The child safety management system can proactively detect the context data of a child on the basis of context awareness. In the experiment, the Jena 2.0 ontology reasoner and the OSGi (Open Service Gateway initiative) Gateway, developed using the open source software Knopflerfish 1.3.3, were used to implement the service framework.
Purpose: This research aims to identify product search tasks in online shopping and analyze the characteristics of consumer multi-tasking search sessions. Design/methodology/approach: The experimental dataset contains 8,949 queries of 582 users from 3,483 search sessions. A sequential comparison of the Jaccard similarity coefficient between two adjacent search queries and hierarchical clustering of queries is used to identify search tasks. Findings: (1) Users issued a similar number of queries (1.43 to 1.47) with similar lengths (7.3-7.6 characters) per task in mono-tasking and multi-tasking sessions, and (2) users spent more time on average in sessions with more tasks, but spent less time on each task as the number of tasks in a session increased. Research limitations: The task identification method, which relies only on query terms, does not completely reflect the complex nature of consumer shopping behavior. Practical implications: These results provide an exploratory understanding of the relationships among multiple shopping tasks, and can be useful for product recommendation and shopping task prediction. Originality/value: The originality of this research is its use of query clustering with online shopping task identification and analysis, and its analysis of product search session characteristics.
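The core of the task-identification step is the Jaccard coefficient between the term sets of adjacent queries. A minimal sketch over whitespace-tokenized queries (the 0.5 grouping threshold and the sample queries are illustrative, not the paper's):

```python
def jaccard(query_a, query_b):
    """Jaccard similarity between the term sets of two queries."""
    a, b = set(query_a.split()), set(query_b.split())
    if not a and not b:
        return 1.0  # two empty queries are treated as identical
    return len(a & b) / len(a | b)

def same_task(q1, q2, threshold=0.5):
    """Adjacent queries at or above the threshold are grouped into one task."""
    return jaccard(q1, q2) >= threshold

print(jaccard("red running shoes", "red shoes sale"))  # → 0.5 (2 shared / 4 total)
print(same_task("red running shoes", "winter coat"))   # → False
```

Walking a session's query sequence with such a comparison, followed by hierarchical clustering of the resulting groups, yields the task segmentation the abstract describes.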
Investment of information technology (IT) in the government sector in Indonesia continues to increase every year. However, this increase has not been followed by good governance due to the lack of attention to good IT management. Measurement of IT governance is therefore required as a basis for the continuous improvement of the IT services to government agencies. This study is aimed at producing an application to measure the maturity level of IT governance in government institutions, thus facilitating the process of improvement of IT services. The application developed is based on COBIT 4.1 framework and the design used is Unified Modeling Language. Through stages of information system development, this research results in an application for measuring the maturity level of IT governance that can be used by government agencies in assessing existing IT governance.
The spread of social media has increased contacts among members of communities on the Internet. Members of these communities often use account names instead of real names. When they meet in the real world, they will find it useful to have a tool that enables them to associate the faces in front of them with the account names they know. This paper proposes a method that enables a person to identify the account name of the person ("target") in front of him/her using a smartphone. The attendees of a meeting exchange their identifiers (i.e., account names) and GPS information using smartphones. When the user points his/her smartphone towards a target, the target's identifier is displayed near the target's head on the camera screen using AR (augmented reality). The position where the identifier is displayed is calculated from the differences in longitude and latitude between the user and the target and the azimuth direction of the target from the user. The target is identified based on this information, the face detection coordinates, and the distance between the two. The proposed method has been implemented on Android terminals, and its identification accuracy has been examined through experiments.
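The azimuth of the target from the user, derived here from two GPS fixes, is the standard initial great-circle bearing. A sketch (the coordinates are illustrative):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

# Target slightly east of the user at the equator: bearing is due east
print(round(bearing_deg(0.0, 0.0, 0.0, 0.001)))  # → 90
```

Comparing this bearing against the phone's compass heading tells the app where on the camera screen to draw the identifier.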
In the 20th and 21st centuries, intelligence work has come to mean methods of automatic extraction, analysis, interpretation, and use of information. Thus, intelligence services have created electronic databases in which their classified intelligence products are stored and from which users can select the information relevant to them. In the EU (European Union), such activities have been carried out since at least 1996; the terrorist attacks of 2001 only accelerated them. Proposals to increase surveillance and international cooperation in this field had been drawn up before September 11, 2001. On the Web, one can find a list of networks (Cryptome, 2011) that could be connected to, or are under the control of, the security service, the NSA (National Security Agency). In 1994, the United States of America enacted a law on telephone communication, the Digital Telephony Act, which would require manufacturers of telecommunications equipment to leave certain security holes open for surveillance. In addition, the Internet is monitored by large corporations. An example of this in the United States is the case brought by an electronic freedoms organization against a telecom company, alleging that the NSA illegally gained access to data on information technology users and Internet telephony.
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2023-00235509, Development of security monitoring technology based on network behavior against encrypted cyber threats in ICT convergence environment).
Abstract: With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted to fully encrypted devices. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted-traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed from various perspectives using two ensemble models and three Deep Learning (DL) models. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. On the UNSW-NB15 dataset, the F1-score for encrypted traffic was approximately 0.98, which is 4.3% higher than that for unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that dataset quality and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, recall on the UNSW-NB15 (Encrypted) dataset improved by up to 23.0% and on the CICIoT-2023 (Encrypted) dataset by 20.26%, a similar level of improvement. Notably, on CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments; however, the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
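The class-imbalance mitigation step can be illustrated with the simplest member of the sampling family, random oversampling; this sketch is a generic illustration rather than one of the paper's eight specific techniques, and the flow labels shown are invented:

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples until every class matches the majority count."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    Xb, yb = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == label]
        for _ in range(target - n):
            j = rng.choice(idx)       # resample an existing minority instance
            Xb.append(X[j])
            yb.append(label)
    return Xb, yb

# 6 benign flows vs. 2 attack flows → balanced to 6 and 6.
X = [[i] for i in range(8)]
y = ["benign"] * 6 + ["attack"] * 2
Xb, yb = random_oversample(X, y)
print(dict(Counter(yb)))  # → {'benign': 6, 'attack': 6}
```

Synthetic methods such as SMOTE interpolate new minority samples instead of duplicating existing ones, which is usually why several sampling strategies are compared side by side.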
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2025R97), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The exponential growth of the Internet of Things (IoT) has introduced significant security challenges, with zero-day attacks emerging as one of the most critical threats. Traditional Machine Learning (ML) and Deep Learning (DL) techniques have demonstrated promising early-detection capabilities. However, their effectiveness is limited when handling the vast volumes of IoT-generated data, due to scalability constraints, high computational costs, and the time-intensive process of data labeling. To address these challenges, this study proposes a Federated Learning (FL) framework that leverages collaborative and hybrid supervised learning to enhance cyber-threat detection in IoT networks. By employing Deep Neural Networks (DNNs) and decentralized model training, the approach reduces computational complexity while improving detection accuracy. The proposed model demonstrates robust performance, achieving accuracies of 94.34%, 99.95%, and 87.94% on the publicly available Kitsune, Bot-IoT, and UNSW-NB15 datasets, respectively. Furthermore, its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets, TON-IoT and IoT-23, using a Deep Federated Learning (DFL) framework, underscoring the generalization and effectiveness of the model in heterogeneous and decentralized IoT environments. Experimental results demonstrate superior performance over existing methods, establishing the proposed framework as an efficient and scalable solution for IoT security.
Abstract: Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls, which are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretations. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue-density variations. This study introduces Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance on other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature-engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
Abstract: Processes supported by process-aware information systems are subject to continuous and often subtle changes due to evolving operational, organizational, or regulatory factors. These changes, referred to as incremental concept drift, gradually alter the behavior or structure of processes, making their detection and localization a challenging task. Traditional process mining techniques frequently assume process stationarity and are limited in their ability to detect such drift, particularly from a control-flow perspective. The objective of this research is to develop an interpretable and robust framework capable of detecting and localizing incremental concept drift in event logs, with a specific emphasis on the structural evolution of control-flow semantics in processes. We propose DriftXMiner, a control-flow-aware hybrid framework that combines statistical, machine learning, and process-model analysis techniques. The approach comprises three key components: (1) a Cumulative Drift Scanner that tracks directional statistical deviations to detect early drift signals; (2) a Temporal Clustering and Drift-Aware Forest Ensemble (DAFE) that captures distributional and classification-level changes in process behavior; and (3) Petri net-based process model reconstruction, which enables the precise localization of structural drift using transition deviation metrics and replay fitness scores. Experimental validation on the BPI Challenge 2017 event log demonstrates that DriftXMiner effectively identifies and localizes gradual and incremental process drift over time. The framework achieves a detection accuracy of 92.5%, a localization precision of 90.3%, and an F1-score of 0.91, outperforming competitive baselines such as CUSUM+Histograms and ADWIN+Alpha Miner. Visual analyses further confirm that identified drift points align with transitions in control-flow models and behavioral cluster structures. DriftXMiner offers a novel and interpretable solution for incremental concept drift detection and localization in dynamic, process-aware systems. By integrating statistical signal accumulation, temporal behavior profiling, and structural process mining, the framework enables fine-grained drift explanation and supports adaptive process intelligence in evolving environments. Its modular architecture supports extension to streaming data and real-time monitoring contexts.
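The idea of accumulating directional statistical deviations until they cross a threshold, as in the Cumulative Drift Scanner, is in the spirit of the classic one-sided CUSUM test; the sketch below is that generic test applied to an invented event-frequency log, not DriftXMiner's actual implementation:

```python
def cusum(values, mean, drift=0.0, threshold=5.0):
    """One-sided CUSUM: accumulate positive deviations from a reference mean
    and return the first index where the cumulative sum crosses the threshold."""
    s = 0.0
    for i, v in enumerate(values):
        s = max(0.0, s + (v - mean - drift))  # drift term damps noise
        if s > threshold:
            return i
        # s resets toward zero while the process stays near its reference mean
    return None

# Activity frequency stable around 10, then shifts upward at index 10;
# the alarm fires a few samples after the change point.
log = [10.0] * 10 + [12.0] * 10
print(cusum(log, mean=10.0, drift=0.5, threshold=4.0))  # → 12
```

The small lag between the true change point (index 10) and the alarm (index 12) is the usual detection-delay/false-alarm trade-off controlled by `drift` and `threshold`.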
Abstract: In the past decade, online Peer-to-Peer (P2P) lending platforms have transformed the lending industry, which has historically been dominated by commercial banks. Information technology breakthroughs such as big data-based financial technologies (Fintech) have been identified as important disruptive driving forces for this paradigm shift. In this paper, we take an information economics perspective to investigate how big data affects the transformation of the lending industry. By identifying how signaling and search costs are reduced by big data analytics for credit risk management in P2P lending, we discuss how information asymmetry is reduced in the big data era. Rooted in the lending business, we propose a theory on the economics of big data and outline a number of research opportunities and challenging issues.
Abstract: In this paper, the problem of increasing the authenticity of information transfer is formulated, and control methods and algorithms based on statistical and structural information redundancy are presented to address it. It is assumed that the controlled information is submitted as text-element images and contains redundancy caused by statistical relations and the non-uniform probability distribution of the transmitted data. The use of statistical redundancy allows the development of adaptive authenticity-control rules that take into account the non-stationarity of image data during information transfer. The structural redundancy peculiar to the image container in a data transfer package is used to develop new rules for controlling information authenticity on the basis of pattern recognition mechanisms. The techniques offered in this work are used to estimate authenticity in the structure of data transfer packages. The results of a comparative analysis of the developed methods and algorithms show that their efficiency is improved by the criteria of undetected-error probability, labor input, and cost of realization.
Funding: Supported by the Natural Science Foundation of Hubei Province of China (2003ABA049) and the Natural Science Foundation of Hubei Education Agency of China (Z200511005, 2003A012).
Abstract: An information representation framework is designed in this paper to overcome the problem of semantic heterogeneity in distributed environments. Emphasis is placed on establishing an XML-oriented semantic data model and the mapping between XML data based on a global ontology semantic view. The framework is implemented as a Web Service, which enhances information processing efficiency, accuracy, and semantic interoperability.
Funding: National Natural Science Foundation of China (No. 61271152); Natural Science Foundation of Hebei Province, China (No. F2012506008); the Original Innovation Foundation of Ordnance Engineering College, China (No. YSCX0903).
Abstract: Considering the secure authentication problem for equipment support information networks, a clustering method based on business information flow is proposed. Based on this method, a cluster-based distributed authentication mechanism and an optimal design method for a distributed certificate authority (CA) are designed. Compared with conventional network clustering methods, the proposed clustering method considers the business information flow of the network and the tasks of the network nodes, which can decrease communication overhead between clusters and effectively improve network efficiency. Identity authentication protocols between nodes in the same cluster and in different clusters are designed. From the perspectives of network security and the availability of the distributed authentication service, the secure-service success rate of the distributed CA is defined and taken as the objective of the optimal design, while the efficiency with which the distributed CA successfully provides the certificate service is taken as the constraint condition. The determination of the optimal threshold value is investigated. The proposed method can provide references for the optimal design of a distributed CA.
Funding: Juan Feng would like to acknowledge GRF (General Research Fund) 9042133 and CityU SRG grant 7004566. Bin Gu would like to acknowledge the National Natural Science Foundation of China [Grant 71328102].
Abstract: Background: We examine the signaling effect of borrowers' social media behavior, especially self-disclosure behavior, on the default probability of money borrowers on a peer-to-peer (P2P) lending site. Method: We use a unique dataset that combines loan data from a large P2P lending site with the borrowers' social media presence data from a popular social media site. Results: Through a natural experiment enabled by an instrumental variable, we identify two forms of social media information that act as signals of borrowers' creditworthiness: (1) borrowers' choice to self-disclose their social media account to the P2P lending site, and (2) borrowers' social media behavior, such as their social network scope and social media engagement. Conclusion: This study offers new insights for screening borrowers in P2P lending and a novel usage of social media information.
Abstract: The need for information systems in organizations and economic units increases as a great deal of data arises from their many processes, and this data must be processed to provide information of interest to multiple users. New and distinctive management accounting systems easily meet all the financial, accounting, and management needs of institutions and individuals while taking into account the accuracy, speed, and confidentiality of the information for which the system is designed. This paper describes a computerized system that predicts the budget for the new year from past budgets using time series analysis, yielding results with minimal error, and that controls the budget during the year: it compares planned with actual figures, calculates deviations, measures performance ratios, and computes a number of budget-related indicators, such as the capital-intensity rate, the growth rate, and the profitability ratio, giving a clear indication of whether these ratios are good or not. The system has a positive impact on information systems through its ability to accomplish complex calculations and process paperwork faster than before, and it offers high flexibility, accommodating any required adjustments and helping the relevant parties control financial matters and take appropriate decisions thereon.
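The budget-prediction step, forecasting next year's budget from past budgets with time series analysis, can be sketched as a least-squares linear trend; both the method choice and the budget figures below are illustrative assumptions, since the abstract does not specify the model used:

```python
def linear_trend_forecast(series):
    """Fit an ordinary least-squares linear trend to a yearly series
    and return the one-step-ahead (next year) forecast."""
    n = len(series)
    xs = list(range(n))
    xm, ym = sum(xs) / n, sum(series) / n
    slope = sum((x - xm) * (y - ym) for x, y in zip(xs, series)) \
        / sum((x - xm) ** 2 for x in xs)
    intercept = ym - slope * xm
    return intercept + slope * n  # evaluate the trend at the next time index

budgets = [100.0, 110.0, 120.0, 130.0]  # hypothetical past yearly budgets
print(linear_trend_forecast(budgets))  # → 140.0
```

Comparing such a forecast against actual spending during the year is what yields the deviation and performance-ratio indicators the abstract mentions.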
Abstract: With the rapid growth of information and communication technology (ICT), violations of information privacy have increased in recent years. Privacy concerns now re-emerge because people perceive a threat from new ICT equipped with enhanced capabilities for surveillance, storage, retrieval, and diffusion of personal information. Given the prevalence and ease of use of ICT, it is necessary to pay close attention to how ICT can threaten the privacy of individuals on the Internet. As Email and P2P (Peer-to-Peer) tools are the most popular ICT, this paper aims at understanding their respective dissemination patterns in the spreading of personal private information. To this purpose, this paper uses dynamic modeling techniques to simulate how sensitive or personal private information propagates. In this study, an Email propagation model and a Susceptible-Infected-Removed (SIR) model are proposed to simulate the propagation patterns of Email and P2P networks, respectively. Knowing these dissemination patterns would be helpful for system designers, ICT managers, corporate IT personnel, educators, policy makers, and legislators in incorporating awareness of social and ethical information issues into the protection of information privacy.
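The SIR model used above for P2P propagation can be sketched with its discrete-time update equations; the infection and recovery rates below are illustrative values, not parameters taken from the paper:

```python
def sir_step(s, i, r, beta, gamma):
    """One discrete step of the SIR model with a normalized population (s + i + r == 1).
    beta: contact/infection rate; gamma: removal (recovery) rate."""
    new_inf = beta * s * i   # susceptible nodes receiving the private information
    new_rec = gamma * i      # infected nodes that stop spreading it
    return s - new_inf, i + new_inf - new_rec, r + new_rec

# Start with 1% of nodes holding the leaked information.
s, i, r = 0.99, 0.01, 0.0
for _ in range(100):
    s, i, r = sir_step(s, i, r, beta=0.4, gamma=0.1)
print(round(s + i + r, 6))  # population is conserved → 1.0
```

The final removed fraction `r` indicates how far the private information ultimately spread through the network before dissemination died out.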
Funding: Supported by the Ministry of Higher Education of Malaysia under Research Grant LRGS/1/2019/UKM-UKM/5/2, and by Princess Nourah bint Abdulrahman University through Supporting Project Number (PNURSP2024R235), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Due to the overwhelming characteristics of the Internet of Things (IoT) and its adoption in approximately every aspect of our lives, the privacy of individual devices has gained prominent attention from both customers, i.e., people, and industries, as wearable devices collect sensitive information about patients (both admitted and outdoor) in smart healthcare infrastructures. In addition to privacy, outliers or noise are among the crucial issues directly correlated with IoT infrastructures, as most member devices are resource-limited and could generate or transmit false data that must be refined before processing, i.e., transmitting. Therefore, the development of privacy-preserving information fusion techniques is highly encouraged, especially those designed for smart IoT-enabled domains. In this paper, we present an effective hybrid approach that can refine raw data values captured by the respective member device before transmission while preserving its privacy through the differential privacy technique in IoT infrastructures. A sliding-window, i.e., δi-based, dynamic programming methodology is implemented at the device level to ensure precise and accurate detection of outliers or noisy data and to refine them prior to the respective transmission activity. Additionally, an appropriate privacy budget has been selected, which is enough to ensure the privacy of every individual module, i.e., a wearable device such as a smartwatch attached to the patient's body, while the end module, i.e., the server in this case, can extract important information with approximately the maximum level of accuracy. Moreover, refined data is processed by adding appropriate noise through the Laplace mechanism to make it useless or meaningless for adversary modules in the IoT. The proposed hybrid approach is trusted from the perspectives of both device privacy and the integrity of the transmitted information. Simulation and analytical results have proved that the proposed privacy-preserving information fusion technique for wearable devices is an ideal solution for resource-constrained infrastructures such as IoT and the Internet of Medical Things, where both device privacy and information integrity are important. Finally, the proposed hybrid approach is proven against well-known intruder attacks, especially those related to the privacy of the respective device in IoT infrastructures.
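The Laplace mechanism this approach relies on can be sketched as follows; the sensitivity, privacy budget, and heart-rate values are illustrative assumptions, not the paper's actual parameters:

```python
import random, math

def laplace_noise(scale, rng):
    """Inverse-CDF sampling from a zero-mean Laplace distribution."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def privatize(value, sensitivity, epsilon, rng):
    """Laplace mechanism: the noise scale is sensitivity / epsilon,
    so a smaller epsilon (stricter privacy budget) means more noise."""
    return value + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(42)
# A heart-rate reading of 72 bpm, sensitivity 1, privacy budget epsilon = 0.5.
noisy = [privatize(72.0, 1.0, 0.5, rng) for _ in range(10000)]
print(sum(noisy) / len(noisy))  # the mean stays near the true value of 72
```

Each single noisy reading is nearly useless to an adversary, yet the server aggregating many readings still recovers the underlying signal, which is the trade-off the privacy budget controls.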
Abstract: The goal of this manuscript is to present research findings from a study conducted to identify, examine, and validate Social Media (SM) socio-technical information security factors, in line with usable-security principles. The study followed literature-search techniques as well as theoretical and empirical methods of factor validation. The literature-search strategy included Boolean keyword searches and citation guides, using mainly Web of Science databases. As guided by the study objectives, 9 SM socio-technical factors were identified, verified, and validated through both theoretical and empirical validation processes. A theoretical validity test was conducted on 45 Likert-scale items, involving 10 subject experts. From the experts' score ratings, the Content Validity Index (CVI) was calculated to determine the degree to which the identified factors exhibit appropriate items for the construct being measured, and 7 factors attained an adequate level of validity index. For the reliability test, 32 respondents and 45 Likert-scale items were used, and Cronbach's alpha coefficients (α-values) were generated using SPSS; subsequently, 8 factors attained an adequate level of reliability. Overall, the validated factors include: 1) usability (visibility, learnability, and satisfaction); 2) education and training (help and documentation); 3) SM technology development (error handling and revocability); and 4) information security (security, privacy, and expressiveness). The confirmed factors add knowledge by providing a theoretical basis for rationalizing information security requirements on SM usage.
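Cronbach's alpha, used above for the reliability test, can be computed directly from item-score variances; the Likert scores below are invented for illustration and are not the study's data:

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of respondent scores per Likert item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(variance(col) for col in items) / variance(totals))

# Three perfectly correlated items yield an alpha of 1.0.
items = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]
print(round(cronbach_alpha(items), 6))  # → 1.0
```

Values around 0.7 or higher are commonly read as adequate internal consistency, which matches the study's per-factor adequacy threshold framing.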
Abstract: Our study takes a closer look at China's current information literacy (IL) program standards at secondary schools and analyzes their successes and failures in comparison with those of the United States, in terms of fulfilling their respective mission-oriented mandates. Our research findings show that China's current IL standards for high schools place a disproportionate emphasis on information technology (IT). Moreover, the stipulations of these IL standards are narrowly construed, without being solidly grounded in a broad and comprehensive educational perspective. We suggest two underlying causes for this set of unsound IL standards in China. Firstly, there is a lack of collaboration between the two major competing forces engaged in IL curricular development and research in China: professionals in the educational IT discipline vis-à-vis those in Library and Information Science. Secondly, library professionals have very limited influence on major socio-cultural policies, even at their own institutions. As a result, this paper recommends three possible measures that may help remedy this situation strategically: 1) establishing a set of new IL curriculum standards based on an IL-centered educational perspective; 2) establishing a teacher-librarian training program to promote school librarians' role in IL education; and 3) strengthening the research and development of an online IL education program and an accompanying evaluation mechanism.
Funding: Jointly supported by the Foundation for Humanities and Social Sciences of the Chinese Ministry of Education (Grant No. 10YJA870021) and the Center for Asia Research of Nankai University (Grant No. AS0917).
Abstract: Evaluating government openness is important in monitoring government performance and promoting government transparency; therefore, it is necessary to develop an evaluation system for the information openness of local governments. To select evaluation indicators, we conducted a content analysis of current evaluation systems constructed by researchers and local governments and of the materials of a case study on a local government. The resulting evaluation system is composed of 5 first-tier indicators, 30 second-tier indicators, and 69 third-tier indicators. The Delphi Method and the Analytic Hierarchy Process (AHP) were then adopted to determine the weight of each indicator. Finally, the practicability of the system was tested through an evaluation of the local government of Tianjin Binhai New Area, which has been undergoing administrative reform and attempting to reinvent itself over the past 5 years.
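The AHP weighting step can be approximated with the row geometric-mean method on a pairwise comparison matrix; the comparison values below are illustrative, not those elicited for the Tianjin evaluation:

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the row geometric-mean method:
    take the geometric mean of each row of the pairwise comparison matrix,
    then normalize the means to sum to 1."""
    n = len(pairwise)
    gms = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# A perfectly consistent 3x3 matrix: indicator 1 is twice as important as
# indicator 2 and four times as important as indicator 3.
matrix = [[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]
print([round(w, 3) for w in ahp_weights(matrix)])  # → [0.571, 0.286, 0.143]
```

For a consistent matrix this coincides with the principal-eigenvector weights; in practice a consistency ratio is also checked before the weights are accepted.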
Funding: This work is supported by the MIC (Ministry of Information and Communication), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITA (Institute of Information Technology Assessment) (IITA-2005-C1090-0502-0031).
Abstract: This paper presents a system that gives alerts about dangerous situations involving a child by applying context information, collected from a home network, to an ontology capable of inference. Radio Frequency Identification (RFID) and sensors were used in the configuration of the home network to obtain the raw data converted into context information. To express the ontology, the Web Ontology Language (OWL) was used to support the inference of context information. Simple Object Access Protocol (SOAP) messages were then used to notify, via mobile devices, of the dangerous situations a child may be involved in. The proposed system consists of a Context Manager, a Service Manager, and a Notification Manager. The child-safety management system can proactively detect the context data of a child on the basis of context awareness. In the experiment, the Jena 2.0 ontology reasoner and an OSGi (Open Service Gateway initiative) gateway developed with the open-source software Knopflerfish 1.3.3 were used to implement the service framework.
Funding: Supported by the National Natural Science Foundation of China (NSFC) Grant (No. 71373015).
Abstract: Purpose: This research aims to identify product search tasks in online shopping and to analyze the characteristics of consumer multi-tasking search sessions. Design/methodology/approach: The experimental dataset contains 8,949 queries of 582 users from 3,483 search sessions. A sequential comparison of the Jaccard similarity coefficient between two adjacent search queries and hierarchical clustering of queries are used to identify search tasks. Findings: (1) Users issued a similar number of queries (1.43 to 1.47) with similar lengths (7.3 to 7.6 characters) per task in mono-tasking and multi-tasking sessions, and (2) users spent more time on average in sessions with more tasks, but less time on each task as the number of tasks in a session increased. Research limitations: The task identification method, which relies only on query terms, does not completely reflect the complex nature of consumer shopping behavior. Practical implications: These results provide an exploratory understanding of the relationships among multiple shopping tasks and can be useful for product recommendation and shopping-task prediction. Originality/value: The originality of this research lies in its use of query clustering for online shopping task identification and analysis, and in its analysis of product search session characteristics.
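The sequential Jaccard comparison between adjacent queries can be sketched as follows; the example session is invented, and real task identification would also involve the hierarchical clustering step:

```python
def jaccard(q1, q2):
    """Jaccard similarity between the term sets of two queries."""
    a, b = set(q1.split()), set(q2.split())
    return len(a & b) / len(a | b) if a | b else 0.0

# Adjacent-query similarities within one hypothetical session: a high value
# suggests the same task continuing, a low value suggests a task switch.
session = ["nike running shoes", "nike running shoes men", "coffee maker"]
sims = [round(jaccard(a, b), 2) for a, b in zip(session, session[1:])]
print(sims)  # → [0.75, 0.0]
```

Thresholding these similarities segments a session into candidate tasks, e.g. splitting the session above between the shoe queries and the coffee-maker query.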
Abstract: Investment in information technology (IT) in the government sector in Indonesia continues to increase every year. However, this increase has not been matched by good governance, owing to a lack of attention to sound IT management. Measurement of IT governance is therefore required as a basis for the continuous improvement of IT services in government agencies. This study aims to produce an application for measuring the maturity level of IT governance in government institutions, thus facilitating the improvement of IT services. The application is based on the COBIT 4.1 framework and designed using the Unified Modeling Language. Through the stages of information system development, this research results in an application for measuring the maturity level of IT governance that government agencies can use to assess their existing IT governance.