Deep-time Earth research plays a pivotal role in deciphering the rates, patterns, and mechanisms of Earth's evolutionary processes throughout geological history, providing essential scientific foundations for climate prediction, natural resource exploration, and sustainable planetary stewardship. To advance Deep-time Earth research in the era of big data and artificial intelligence, the International Union of Geological Sciences initiated the "Deep-time Digital Earth International Big Science Program" (DDE) in 2019. At the core of this ambitious program lies the development of geoscience knowledge graphs, serving as a transformative knowledge infrastructure that enables the integration, sharing, mining, and analysis of heterogeneous geoscience big data. The DDE knowledge graph initiative has made significant strides in three critical dimensions: (1) establishing a unified knowledge structure across geoscience disciplines that ensures consistent representation of geological entities and their interrelationships through standardized ontologies and semantic frameworks; (2) developing a robust and scalable software infrastructure capable of supporting both expert-driven and machine-assisted knowledge engineering for large-scale graph construction and management; (3) implementing a comprehensive three-tiered architecture encompassing basic, discipline-specific, and application-oriented knowledge graphs, spanning approximately 20 geoscience disciplines. Through its open knowledge framework and international collaborative network, this initiative has fostered multinational research collaborations, establishing a robust foundation for next-generation geoscience research while propelling the discipline toward FAIR (Findable, Accessible, Interoperable, Reusable) data practices in deep-time Earth systems research.
Brain-computer interfaces (BCIs) represent an emerging technology that facilitates direct communication between the brain and external devices. In recent years, numerous review articles have explored various aspects of BCIs, including their fundamental principles, technical advancements, and applications in specific domains. However, these reviews often focus on signal processing, hardware development, or limited applications such as motor rehabilitation or communication. This paper aims to offer a comprehensive review of recent electroencephalogram (EEG)-based BCI applications in the medical field across eight critical areas: rehabilitation, daily communication, epilepsy, cerebral resuscitation, sleep, neurodegenerative diseases, anesthesiology, and emotion recognition. Moreover, the current challenges and future trends of BCIs are also discussed, including personal privacy and ethical concerns, network security vulnerabilities, safety issues, and biocompatibility.
Recent years have witnessed the ever-increasing performance of Deep Neural Networks (DNNs) in computer vision tasks. However, researchers have identified a potential vulnerability: carefully crafted adversarial examples can easily mislead DNNs into incorrect behavior via the injection of imperceptible modifications to the input data. In this survey, we focus on (1) adversarial attack algorithms to generate adversarial examples, (2) adversarial defense techniques to secure DNNs against adversarial examples, and (3) important problems in the realm of adversarial examples beyond attack and defense, including theoretical explanations, trade-off issues, and benign attacks. Additionally, we draw a brief comparison between recently published surveys on adversarial examples and identify future directions for the research of adversarial examples, such as the generalization of methods and the understanding of transferability, that might offer solutions to the open problems in this field.
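One of the simplest attack algorithms in this family is the Fast Gradient Sign Method (FGSM). The sketch below is not from the survey itself; it shows, on a toy linear classifier with made-up weights, how one small signed step along the gradient can flip a decision:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast Gradient Sign Method: a bounded, imperceptible step along sign(grad)."""
    return x + epsilon * np.sign(grad)

# Toy linear classifier: score = w . x; positive score => class 1.
w = np.array([0.4, -0.3, 0.2])
x = np.array([0.1, 0.2, 0.3])   # clean input; score = 0.04 > 0
grad = -w                        # gradient direction that *lowers* the class-1 score
x_adv = fgsm_perturb(x, grad, epsilon=0.2)

print(float(w @ x))      # small positive score on the clean input
print(float(w @ x_adv))  # score pushed negative by the crafted perturbation
```

Each coordinate of the input moves by at most epsilon, which is what keeps real-image perturbations visually imperceptible.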
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating the Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
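The multi-objective reward handling can be illustrated with a minimal weighted-scalarization sketch; the per-objective reward values and weights below are hypothetical, and the paper's RBFN-driven weight update is not reproduced here:

```python
import numpy as np

def scalarize(rewards, weights):
    """Combine per-objective rewards (delay, energy, load, privacy) into one scalar."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()      # keep the preference vector on the simplex
    return float(np.dot(weights, rewards))

# Hypothetical per-objective rewards for one offloading decision:
# (negated delay, negated energy, load-balance score, privacy entropy).
rewards = np.array([-0.8, -0.5, 0.6, 0.4])
print(scalarize(rewards, [1, 1, 1, 1]))   # equal preference across objectives
print(scalarize(rewards, [4, 1, 1, 1]))   # delay-sensitive preference penalizes delay more
```

Learning to adjust the weight vector online, rather than fixing it, is what lets the agent trace out different points of the Pareto front.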
Segmenting lesion regions from ultrasound (US) images is an important step in the intra-operative planning of some computer-aided therapies. High-Intensity Focused Ultrasound (HIFU), as a popular computer-aided therapy, has been widely used in the treatment of uterine fibroids. However, such segmentation in HIFU remains challenging for two reasons: (1) the blurry or missing boundaries of lesion regions in HIFU images and (2) the deformation of uterine fibroids caused by the patient's breathing or an external force during the US imaging process, which can lead to complex shapes of lesion regions. These factors have prevented classical active contour-based segmentation methods from yielding desired results for uterine fibroids in US images. In this paper, a novel active contour-based segmentation method is proposed, which utilizes the correlation information of target shapes among a sequence of images as prior knowledge to aid the existing active contour method. This prior knowledge can be interpreted as an unsupervised clustering-based shape prior model. Meanwhile, it is also proved that the shape correlation has the low-rank property in a linear space, and the theory of matrix recovery is used as an effective tool to impose the proposed prior on an existing active contour model. Finally, an accurate method is developed to solve the proposed model using the Augmented Lagrange Multiplier (ALM) method. Experimental results from both synthetic and clinical uterine fibroid US image sequences demonstrate that the proposed method can consistently improve the performance of active contour models, increase the robustness against missing or misleading boundaries, and greatly improve the efficiency of HIFU therapy.
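Low-rank priors of this kind are typically imposed through singular-value thresholding, the proximal step for the nuclear norm inside ALM-style matrix recovery solvers. A minimal sketch with hypothetical shape descriptors (not the paper's actual shape representation):

```python
import numpy as np

def svd_shrink(M, tau):
    """Singular-value thresholding: shrink each singular value by tau,
    the nuclear-norm proximal step used inside ALM low-rank recovery."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# Columns are hypothetical shape descriptors of one fibroid across a US sequence:
# nearly identical shapes make the stacked matrix approximately rank one.
base = np.array([1.0, 2.0, 3.0, 4.0])
S = np.column_stack([base, 1.02 * base, 0.98 * base]) + 0.01 * np.eye(4, 3)
L = svd_shrink(S, tau=0.5)
print(np.linalg.matrix_rank(L))  # small noise components are shrunk away
```

The thresholding keeps the dominant shared-shape component and discards small deformation noise, which is exactly the behavior a shape prior needs.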
This research studies the process of 3D reconstruction and dynamic concision based on 2D medical digital images using the Virtual Reality Modeling Language (VRML) and JavaScript, with a focus on how to realize the dynamic concision of a 3D medical model with script nodes and sensor nodes in VRML. The 3D reconstruction and concision of internal body organs can be built with such high quality that they are better than those obtained from traditional methods. With the function of dynamic concision, the VRML browser can offer better windows for human-computer interaction in a real-time environment than ever before. 3D reconstruction and dynamic concision with VRML can meet the requirements of medical observation of 3D reconstructions and have a promising prospect in the field of medical imaging.
The accuracy and repeatability of computer-aided cervical vertebra landmarking (CACVL) were investigated on cephalograms. 120 adolescents (60 boys, 60 girls) aged from 9.1 to 17.2 years old were randomly selected. Twenty-seven landmarks from the second to fifth cervical vertebrae on the lateral cephalogram were identified. In this study, the CACVL system was developed and used to identify and calculate the landmarks by the fast marching method and parabolic curve fitting. The accuracy and repeatability of the CACVL group were compared with those of two manual landmarking groups [the orthodontic experts (OE) group and the orthodontic novices (ON) group]. The results showed that, for accuracy, there was no significant difference between the CACVL group and the OE group in either the x-axis or the y-axis (P>0.05), but there were significant differences between the CACVL group and the ON group, as well as between the OE group and the ON group, in both axes (P<0.05). As for repeatability, the CACVL group was more reliable than the OE and ON groups in both axes. It is concluded that CACVL has the same or higher accuracy, better repeatability, and less workload than manual landmarking methods. It is reliable for cervical parameter identification on the lateral cephalogram and cervical vertebral maturation prediction in orthodontic practice and research.
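The parabolic curve fitting half of the pipeline can be sketched with a least-squares fit; the border samples below are synthetic, and the actual landmark definitions used in the study are not reproduced here:

```python
import numpy as np

# Fit y = a*x^2 + b*x + c to hypothetical edge points sampled along a
# vertebral border, then read a candidate landmark off the parabola's vertex.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = 0.5 * x**2 - 1.0 * x + 3.0      # synthetic, noise-free border samples
a, b, c = np.polyfit(x, y, 2)       # least-squares parabola fit

vertex_x = -b / (2 * a)             # extremum of the fitted parabola
print(round(vertex_x, 3))           # vertex location for these synthetic coefficients
```

With noisy real edge points, the same fit averages out localization jitter, which is one plausible source of the repeatability advantage reported above.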
The integrity and fidelity of digital evidence are very important in live forensics. Previous studies have focused on the uncertainty of live forensics based on different memory snapshots. However, this kind of method is not effective in practice. In fact, memory images are usually acquired by using forensics tools instead of snapshots. Therefore, the integrity and fidelity of live evidence should be evaluated during the acquisition process. In this paper, we study the problem from a novel viewpoint. Firstly, several definitions about memory acquisition measurement error are introduced to describe the trustworthiness. Then, we analyze the experimental error and propose some suggestions on how to reduce it. A novel method is also developed to calculate the system error in detail. The results of a case study on Windows 7 and a VMware virtual machine show that the experimental error has good accuracy and precision, which demonstrates the efficacy of the proposed reduction methods. The system error is also evaluated: it accounts for 30% to 50% of the whole error.
In the system of Computer Network Collaborative Defense (CNCD), it is difficult to evaluate the trustworthiness of defense agents that are newly added to the system, since they lack the historical interactions needed for trust evaluation. As a result, newly added agents cannot obtain reasonable initial trustworthiness, which affects the whole process of trust evaluation. To solve this problem in CNCD, a trust-type-based trust bootstrapping model is introduced in this research. First, the division of trust types, trust utility, and defense cost are discussed. Then the constraints of defense tasks are analyzed based on game theory. According to the constraints obtained, the trust type of defense agents is identified and initial trustworthiness is assigned to them. The simulated experiment shows that the proposed methods achieve a lower failure rate of defense tasks and better adaptability in defense task execution.
The conventional time function of electromechanical relays is hard to coordinate with other relays. In order to promote the application of inverse-time overcurrent relays, a new time function for microprocessor-type relays is proposed. The setting of the trip time for this relay is performed by determining the shortest trip time and the longest trip time, respectively. The results of analysis show that, with the new time function, the inverse-time overcurrent relay is easy to coordinate with other relays and has a comparatively shorter trip time when the fault happens in the protective zone.
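For context, a widely used inverse-time characteristic is the IEC standard-inverse curve; the sketch below illustrates how trip time shrinks as fault current grows (the paper's new time function itself is not given in this abstract, and the settings used are illustrative):

```python
def trip_time(I, I_set, tms, k=0.14, alpha=0.02):
    """IEC standard-inverse overcurrent curve: t = TMS * k / ((I/I_set)^alpha - 1)."""
    ratio = I / I_set
    if ratio <= 1.0:
        raise ValueError("relay does not trip below the pickup current")
    return tms * k / (ratio**alpha - 1.0)

print(trip_time(1000.0, 500.0, tms=0.1))  # moderate fault: longer trip time
print(trip_time(5000.0, 500.0, tms=0.1))  # heavy fault: markedly shorter trip time
```

Coordination between relays is achieved by grading the time multiplier setting (TMS) so that the relay nearest the fault always trips first.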
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multimodal datasets, with six prior models that achieved good action classification performance: I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the results of our deep learning model had higher clinical value than those of the other approaches. Moreover, the multimodal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multimodal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early identification of stroke, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
The rise or fall of stock markets directly affects investors' interest and loyalty. Therefore, it is necessary to measure the performance of stocks in the market in advance to prevent our assets from suffering significant losses. In our proposed study, six supervised machine learning (ML) strategies and deep learning (DL) models with long short-term memory (LSTM) were deployed for thorough analysis and measurement of the performance of technology stocks. Under discussion are Apple Inc. (AAPL), Microsoft Corporation (MSFT), Broadcom Inc., Taiwan Semiconductor Manufacturing Company Limited (TSM), NVIDIA Corporation (NVDA), and Avigilon Corporation (AVGO). The datasets were taken from the Yahoo Finance API from 06-05-2005 to 06-05-2022 (seventeen years), with 4280 samples. As already noted, multiple studies have been performed to resolve this problem using linear regression, support vector machines, deep long short-term memory (LSTM), and many other models. In this research, the Hidden Markov Model (HMM) outperformed the other employed machine learning ensembles, tree-based models, the ARIMA (Auto-Regressive Integrated Moving Average) model, and long short-term memory, with a robust mean accuracy score of 99.98. Other statistical analyses and measurements for the machine learning ensemble algorithms, the long short-term memory model, and ARIMA were also carried out for further investigation of the performance of advanced models for forecasting time-series data. Thus, the proposed research found the best model to be the HMM, with LSTM the second-best model, performing well in all aspects. The developed model is highly recommended and helpful for early measurement of technology stock performance for investment or withdrawal decisions based on future stock rises or falls, for creating smart environments.
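The regime-tracking idea behind an HMM forecaster can be sketched with the forward algorithm; the two-state transition and emission matrices below are hypothetical, not fitted to the study's data:

```python
import numpy as np

def forward_filter(pi, A, B, obs):
    """Forward algorithm for a discrete HMM: P(hidden state | observations so far).
    Hidden states could model hypothetical market regimes (0 = 'bear', 1 = 'bull');
    observations are discretised daily moves (0 = down, 1 = up)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()             # normalise to a probability vector
    return alpha

pi = np.array([0.5, 0.5])                # initial regime belief
A = np.array([[0.9, 0.1], [0.2, 0.8]])   # sticky regime transitions
B = np.array([[0.7, 0.3], [0.2, 0.8]])   # P(down/up | regime)
belief = forward_filter(pi, A, B, obs=[1, 1, 1])
print(belief)  # a run of up-days shifts belief toward the 'bull' regime (state 1)
```

Forecasting then amounts to propagating this belief one step through the transition matrix and reading off the most likely next observation.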
Mortar pumpability is essential in the construction industry, but estimating it manually requires much labor and often causes material waste. This paper proposes an effective method that combines a 3-dimensional convolutional neural network (3D CNN) with a 2-dimensional convolutional long short-term memory network (ConvLSTM2D) to automatically classify mortar pumpability. Experimental results show that the proposed model achieves an accuracy rate of 100% with a fast convergence speed on a dataset organized by collecting the corresponding mortar image sequences. This work demonstrates the feasibility of using computer vision and deep learning for mortar pumpability classification.
The chest radiograph has been one of the most frequently performed radiological investigations. In clinical medicine, the chest radiograph can provide a technical basis and scientific instruction to recognize a series of thoracic diseases (such as atelectasis, nodules, and pneumonia). Importantly, it is of paramount importance for clinical screening, diagnosis, treatment planning, and efficacy evaluation. However, automated chest radiograph diagnosis and interpretation at the level of an experienced radiologist remain challenging. In recent years, many studies on biomedical image processing have advanced rapidly with the development of artificial intelligence, especially deep learning techniques and algorithms. How to build an efficient and accurate deep learning model for automatic chest radiograph processing is an important scientific problem that needs to be solved.
With COM, VB, and VC, we develop a visual human-computer interaction system for mining association rules. It can mine association rules from databases created by Access and SQL Server, as well as from text-mode data. Through the interactive interface, the user participates in the data mining process, enabling the system to mine satisfying rules.
The rapid rise of cyberattacks and the gradual failure of traditional defense systems and approaches have led to the use of artificial intelligence (AI) techniques (such as machine learning (ML) and deep learning (DL)) to build more efficient and reliable intrusion detection systems (IDSs). However, the advent of larger IDS datasets has negatively impacted the performance and computational complexity of AI-based IDSs. Many researchers have used data preprocessing techniques such as feature selection and normalization to overcome such issues. While most of these researchers reported the success of these preprocessing techniques at a shallow level, very few studies have examined their effects on a wider scale. Furthermore, the performance of an IDS model depends not only on the utilized preprocessing techniques but also on the dataset and the ML/DL algorithm used, a point most existing studies place little emphasis on. Thus, this study provides an in-depth analysis of the effects of feature selection and normalization on IDS models built using three IDS datasets (NSL-KDD, UNSW-NB15, and CSE-CIC-IDS2018) and various AI algorithms. A wrapper-based approach, which tends to give superior performance, and min-max normalization were used for feature selection and normalization, respectively. Numerous IDS models were implemented using the full and feature-selected copies of the datasets, with and without normalization. The models were evaluated using popular evaluation metrics in IDS modeling, and intra- and inter-model comparisons were performed between models and with state-of-the-art works. Random forest (RF) models performed better on the NSL-KDD and UNSW-NB15 datasets, with accuracies of 99.86% and 96.01%, respectively, whereas an artificial neural network (ANN) achieved the best accuracy of 95.43% on the CSE-CIC-IDS2018 dataset. The RF models also achieved excellent performance compared to recent works. The results show that normalization and feature selection positively affect IDS modeling. Furthermore, while feature selection benefits simpler algorithms (such as RF), normalization is more useful for complex algorithms like ANNs and deep neural networks (DNNs), and algorithms such as Naive Bayes are unsuitable for IDS modeling. The study also found that the UNSW-NB15 and CSE-CIC-IDS2018 datasets are more complex and more suitable for building and evaluating modern-day IDSs than the NSL-KDD dataset. Our findings suggest that prioritizing robust algorithms like RF, alongside complex models such as ANNs and DNNs, can significantly enhance IDS performance. These insights provide valuable guidance for managers to develop more effective security measures by focusing on high detection rates and low false alert rates.
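The min-max normalization step can be sketched as follows; fitting the scaler on training data only (as below) is the leakage-free convention, though the abstract does not state exactly how the authors applied it:

```python
import numpy as np

def min_max_normalize(X_train, X_test):
    """Min-max normalization fit on training data only, to avoid leaking
    test-set statistics into the model (a common pitfall in IDS studies)."""
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant features
    return (X_train - lo) / span, (X_test - lo) / span

# Hypothetical two-feature flow records (e.g. duration, byte count).
X_train = np.array([[0.0, 100.0], [10.0, 300.0], [5.0, 200.0]])
X_test = np.array([[2.0, 250.0]])
Xtr, Xte = min_max_normalize(X_train, X_test)
print(Xtr.min(), Xtr.max())   # training features land exactly in [0, 1]
print(Xte)                    # test rows may fall slightly outside [0, 1]
```

Rescaling all features into a common range is what helps gradient-trained models such as ANNs and DNNs, which is consistent with the study's finding that normalization matters more for those algorithms than for tree ensembles.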
In civil aviation security screening, laptops, with their intricate structural composition, give criminals the potential to conceal dangerous items. Presently, the security process requires passengers to individually present their laptops for inspection. This paper introduces a method for laptop removal. By combining projection algorithms with the YOLOv7-Seg model, a laptop's three views were generated through projection, and instance segmentation of these views was achieved using YOLOv7-Seg. The resulting 2D masks from instance segmentation at different angles were employed to reconstruct a 3D mask through angle restoration. Ultimately, intersecting this 3D mask with the original 3D data enabled the successful extraction of the laptop's 3D information. Experimental results demonstrated that the fusion of projection and instance segmentation enables the automatic removal of laptops from CT data, and that higher instance segmentation accuracy leads to more precise removal outcomes. By implementing laptop removal, the civil aviation security screening process becomes more efficient and convenient: passengers are no longer required to individually handle their laptops, effectively enhancing the efficiency and accuracy of security screening.
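The projection-and-intersection idea can be sketched on a toy binary volume; the axis-aligned projections and back-projection below are a simplification of the paper's angle-restoration step, and the "laptop" here is just a hypothetical box:

```python
import numpy as np

# Collapse a binary CT volume along each axis to get three 2D views (the
# inputs to the segmenter), then lift a 2D mask back to 3D by intersecting
# the extrusions of all three views.
volume = np.zeros((4, 4, 4), dtype=bool)
volume[1:3, 1:3, 1:3] = True                      # a 2x2x2 "laptop" inside the bag

views = [volume.any(axis=a) for a in range(3)]    # top / front / side projections

# Back-projection: a voxel survives only if every view's mask covers it.
recon = views[0][None, :, :] & views[1][:, None, :] & views[2][:, :, None]
print(np.array_equal(recon, volume))  # exact here because the object is a box
```

For non-convex objects the intersection of extruded masks over-approximates the true shape, which is why segmentation accuracy in each view directly bounds the removal precision reported above.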
Based on a retrospective analysis of the main points in the development of computer management information systems, and on a study of relevant data concerning enterprises, mass media, personal information, and information security, this paper systematically collects research materials on the role of management information systems in the survival and development of enterprises, in light of the actual situation in China. To study this impact, the paper focuses on three aspects: application analysis, development direction, and implementation strategy, which can provide a strong reference and guidance for the application of enterprise computer management systems.
The growth of computing power in data centers (DCs) leads to an increase in the energy consumption and noise pollution of air cooling systems. Chip-level cooling with a high-efficiency coolant is one of the promising methods to address the cooling challenge for high-power devices in DCs. Hybrid nanofluids (HNFs) have the advantages of high thermal conductivity and good rheological properties. This study summarizes numerical investigations of HNFs in mini/micro heat sinks, including numerical methods, hydrothermal characteristics, and enhanced heat transfer technologies. The innovations of this paper include: (1) the characteristics, applicable conditions, and scenarios of each theoretical and numerical method are clarified; (2) molecular dynamics (MD) simulation can reveal the synergy effect, micro motion, and agglomeration morphology of different nanoparticles, and machine learning (ML) presents a feasible method for parameter prediction, which provides the opportunity for intelligent regulation of the thermal performance of HNFs; (3) HNF flow boiling and the synergy of passive and active technologies may further improve the overall efficiency of liquid cooling systems in DCs. This review provides valuable insights and references for exploring the multi-phase flow and heat transport mechanisms of HNFs and promoting the practical application of HNFs in chip-level liquid cooling in DCs.
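As background for the thermal-conductivity claims, the classical Maxwell model gives the effective conductivity of a dilute single-particle suspension; hybrid nanofluids require extended correlations not shown here, and the property values below are illustrative only:

```python
def maxwell_k_eff(k_f, k_p, phi):
    """Classical Maxwell model for the effective thermal conductivity of a
    dilute nanoparticle suspension (a common baseline in nanofluid studies)."""
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

# Water base fluid (~0.6 W/m.K) with 2 vol% of a high-conductivity particle.
k = maxwell_k_eff(k_f=0.6, k_p=40.0, phi=0.02)
print(round(k, 4))  # modest enhancement over the 0.6 W/m.K base fluid
```

The model's prediction depends only weakly on the particle conductivity once it far exceeds the base fluid's, which is one reason hybrid formulations and the numerical methods surveyed above are needed to capture real enhancement mechanisms.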
Funding: Strategic Priority Research Program of the Chinese Academy of Sciences, No. XDB0740000; National Key Research and Development Program of China, No. 2022YFB3904200, No. 2022YFF0711601; Key Project of Innovation LREIS, No. PI009; National Natural Science Foundation of China, No. 42471503.
Funding: supported by the National Key R&D Program of China (2021YFF1200602); the National Science Fund for Excellent Overseas Scholars (0401260011); the National Defense Science and Technology Innovation Fund of the Chinese Academy of Sciences (c02022088); the Tianjin Science and Technology Program (20JCZDJC00810); the National Natural Science Foundation of China (82202798); the Shanghai Sailing Program (22YF1404200).
Funding: supported by the National Natural Science Foundation of China (U1903214, 62372339, 62371350, 61876135); the Ministry of Education Industry-University Cooperative Education Project (202102246004, 220800006041043, 202002142012); the Fundamental Research Funds for the Central Universities (2042023kf1033).
Funding: Supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147, 242102210027) and the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101). (Corresponding author: Dong Yuan)
Abstract: Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
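The weighted combination of per-objective rewards at the heart of such multi-objective DQN training can be illustrated with a minimal sketch. The function and weight names below are hypothetical, and the paper's RBFN-based weight-update rule is not reproduced here:

```python
def scalarize_reward(rewards, weights):
    """Collapse per-objective rewards (e.g., negative delay, negative energy,
    load balance, privacy entropy) into the single scalar a DQN-style agent
    learns from.

    Weights are normalized so the scalar stays comparable as the
    weight-update rule shifts emphasis between objectives over time.
    """
    total = sum(weights)
    if total <= 0:
        raise ValueError("weights must sum to a positive value")
    return sum(w * r for w, r in zip(weights, rewards)) / total


# Example: equal emphasis on (negative) delay and (negative) energy rewards.
r = scalarize_reward([-2.0, -4.0], [0.5, 0.5])
```

In a full pipeline, each DDQN agent would be trained against its own reward function and the weights would be re-estimated each episode rather than fixed.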
Funding: Supported by the National Basic Research Program of China (2011CB707904), the Natural Science Foundation of China (61472289), and the Hubei Province Natural Science Foundation of China (2015CFB254)
Abstract: Segmenting lesion regions from ultrasound (US) images is an important step in the intra-operative planning of some computer-aided therapies. High-Intensity Focused Ultrasound (HIFU), as a popular computer-aided therapy, has been widely used in the treatment of uterine fibroids. However, such segmentation in HIFU remains challenging for two reasons: (1) the blurry or missing boundaries of lesion regions in the HIFU images, and (2) the deformation of uterine fibroids caused by the patient's breathing or an external force during the US imaging process, which can lead to complex shapes of lesion regions. These factors have prevented classical active contour-based segmentation methods from yielding the desired results for uterine fibroids in US images. In this paper, a novel active contour-based segmentation method is proposed, which utilizes the correlation information of target shapes among a sequence of images as prior knowledge to aid the existing active contour method. This prior knowledge can be interpreted as an unsupervised clustering-based shape prior model. Meanwhile, it is also proved that the shape correlation has the low-rank property in a linear space, and the theory of matrix recovery is used as an effective tool to impose the proposed prior on an existing active contour model. Finally, an accurate method is developed to solve the proposed model using the Augmented Lagrange Multiplier (ALM) method. Experimental results from both synthetic and clinical uterine fibroid US image sequences demonstrate that the proposed method can consistently improve the performance of active contour models, increase robustness against missing or misleading boundaries, and greatly improve the efficiency of HIFU therapy.
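The low-rank matrix recovery step the abstract alludes to is commonly posed as the robust PCA program below, a standard formulation solvable by ALM; the paper's exact objective and constraints may differ:

```
\min_{A,\,E}\; \|A\|_{*} + \lambda \|E\|_{1}
\quad \text{s.t.} \quad D = A + E
```

Here \(D\) stacks the shape representations of the image sequence as columns, the nuclear norm \(\|A\|_{*}\) promotes a low-rank shape component (the correlated part), and \(\|E\|_{1}\) absorbs sparse deviations such as boundary errors on individual frames.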
Funding: Postdoctoral Fund of China (No. 2003034518) and Fund of the Health Bureau of Zhejiang Province (No. 2004B042), China
Abstract: This research studies the process of 3D reconstruction and dynamic concision based on 2D medical digital images using the Virtual Reality Modeling Language (VRML) and JavaScript, with a focus on how to realize the dynamic concision of a 3D medical model with script and sensor nodes in VRML. The 3D reconstruction and concision of internal body organs can be built with such high quality that they surpass those obtained from traditional methods. With the dynamic concision function, the VRML browser can offer better windows for human-computer interaction in a real-time environment than ever before. 3D reconstruction and dynamic concision with VRML can meet the requirements of medical observation of 3D reconstructions and have a promising prospect in the field of medical imaging.
Funding: Supported by grants from the National Natural Science Foundation of China (No. 30801314) and the Hubei Provincial Science and Technology Department, China (No. 2008CBD088)
Abstract: The accuracy and repeatability of computer-aided cervical vertebra landmarking (CACVL) were investigated on cephalograms. 120 adolescents (60 boys, 60 girls) aged from 9.1 to 17.2 years were randomly selected. Twenty-seven landmarks from the second to fifth cervical vertebrae on the lateral cephalogram were identified. In this study, the CACVL system was developed and used to identify and calculate the landmarks by the fast marching method and parabolic curve fitting. The accuracy and repeatability in the CACVL group were compared with those in two manual landmarking groups [the orthodontic experts (OE) group and the orthodontic novices (ON) group]. The results showed that, as for accuracy, there was no significant difference between the CACVL group and the OE group on either the x-axis or the y-axis (P>0.05), but there were significant differences between the CACVL group and the ON group, as well as between the OE group and the ON group, on both axes (P<0.05). As for repeatability, the CACVL group was more reliable than the OE and ON groups on both axes. It is concluded that CACVL has the same or higher accuracy, better repeatability, and less workload than manual landmarking methods. It is reliable for cervical parameter identification on the lateral cephalogram and cervical vertebral maturation prediction in orthodontic practice and research.
Funding: Sponsored by the National Natural Science Foundation of China (Grant No. 61303199), the Natural Science Foundation of Shandong Province (Grant Nos. ZR2013FQ001 and ZR2011FQ030), the Outstanding Research Award Fund for Young Scientists of Shandong Province, China (Grant No. BS2013DX010), and the Academy of Sciences Youth Fund Project of Shandong Province (Grant No. 2013QN007)
Abstract: The integrity and fidelity of digital evidence are very important in live forensics. Previous studies have focused on the uncertainty of live forensics based on different memory snapshots. However, this kind of method is not effective in practice. In fact, memory images are usually acquired by using forensics tools instead of snapshots. Therefore, the integrity and fidelity of live evidence should be evaluated during the acquisition process. In this paper, we study the problem from a novel viewpoint. Firstly, several definitions concerning memory acquisition measurement error are introduced to describe trustworthiness. Then, we analyze the experimental error and propose some suggestions on how to reduce it. A novel method is also developed to calculate the system error in detail. The results of a case study on Windows 7 and a VMware virtual machine show that the experimental error has good accuracy and precision, which demonstrates the efficacy of the proposed reduction methods. The system error is also evaluated; it accounts for 30% to 50% of the whole error.
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61170295
Abstract: In a Computer Network Collaborative Defense (CNCD) system, it is difficult to evaluate the trustworthiness of defense agents that are newly added to the system, since they lack the historical interactions needed for trust evaluation. As a result, newly added agents cannot obtain reasonable initial trustworthiness, which affects the whole trust evaluation process. To solve this problem in CNCD, a trust-type-based trust bootstrapping model was introduced in this research. First, the division of trust types, trust utility, and defense cost were discussed. Then the constraints of defense tasks were analyzed based on game theory. According to the constraints obtained, the trust type of each defense agent was identified and initial trustworthiness was assigned accordingly. The simulated experiment shows that the proposed methods achieve a lower failure rate for defense tasks and better adaptability in defense task execution.
基金TheNationalNaturalScienceFoundationofChina (No .6 9774 0 2 4 )
Abstract: The conventional time function of electromechanical relays is hard to coordinate with other relays. In order to promote the application of inverse-time overcurrent relays, a new time function for microprocessor-based relays is proposed. The trip time of this relay is set by determining the shortest and the longest trip times, respectively. The results of the analysis show that, with the new time function, the inverse-time overcurrent relay is easy to coordinate with other relays and has a comparatively shorter trip time when the fault occurs within the protective zone.
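For context, a widely used inverse-time characteristic, though not the new function proposed in this paper, is the IEC 60255 "standard inverse" curve; a minimal sketch in Python (the function name is ours):

```python
def iec_standard_inverse_trip_time(current, pickup, tms=1.0):
    """Trip time of the IEC 60255 standard inverse overcurrent curve:

        t = TMS * 0.14 / ((I / Is)**0.02 - 1)

    where I is the fault current, Is the pickup setting, and TMS the
    time-multiplier setting used to coordinate with downstream relays.
    """
    ratio = current / pickup
    if ratio <= 1.0:
        raise ValueError("relay only operates when current exceeds pickup")
    return tms * 0.14 / (ratio ** 0.02 - 1.0)


# Twice the pickup current trips in roughly 10 s at TMS = 1.0;
# larger fault currents trip faster, which is the inverse-time property.
t = iec_standard_inverse_trip_time(current=1000.0, pickup=500.0)
```

Coordination between relays is achieved by choosing a smaller TMS for the relay closer to the fault, so it trips first and the upstream relay serves as backup.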
Funding: Supported by the Ministry of Science and Technology of China, No. 2020AAA0109605 (to XL), and the Meizhou Major Scientific and Technological Innovation Platforms Projects of the Guangdong Provincial Science & Technology Plan Projects, No. 2019A0102005 (to HW).
Abstract: Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multimodal datasets, with six prior models that achieved good action classification performance, including I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had higher clinical value compared with the other approaches. Moreover, the multimodal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multimodal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
Funding: Supported by the Kyungpook National University Research Fund, 2020.
Abstract: The rise or fall of stock markets directly affects investors' interest and loyalty. Therefore, it is necessary to measure the performance of stocks in the market in advance to prevent our assets from suffering significant losses. In our proposed study, six supervised machine learning (ML) strategies and deep learning (DL) models with long short-term memory (LSTM) were deployed for thorough analysis and measurement of the performance of technology stocks. Under discussion are Apple Inc. (AAPL), Microsoft Corporation (MSFT), Broadcom Inc., Taiwan Semiconductor Manufacturing Company Limited (TSM), NVIDIA Corporation (NVDA), and Avigilon Corporation (AVGO). The datasets were taken from the Yahoo Finance API from 06-05-2005 to 06-05-2022 (seventeen years), with 4280 samples. As already noted, multiple studies have addressed this problem using linear regression, support vector machines, deep long short-term memory (LSTM), and many other models. In this research, the Hidden Markov Model (HMM) outperformed the other employed machine learning ensembles, tree-based models, the ARIMA (Auto-Regressive Integrated Moving Average) model, and long short-term memory, with a robust mean accuracy score of 99.98. Other statistical analyses and measurements for the machine learning ensemble algorithms, the long short-term memory model, and ARIMA were also carried out to further investigate the performance of these advanced models for forecasting time series data. Thus, the proposed research found the best model to be HMM, with LSTM the second-best model, performing well in all aspects. The developed model will be highly recommended and helpful for early measurement of technology stock performance, guiding investment or withdrawal based on future stock rises or falls in smart environments.
Funding: Supported by the Key Project of the National Natural Science Foundation of China-Civil Aviation Joint Fund under Grant No. U2033212.
Abstract: Mortar pumpability is essential in the construction industry; estimating it manually requires much labor and often causes material waste. This paper proposes an effective method that combines a 3-dimensional convolutional neural network (3D CNN) with a 2-dimensional convolutional long short-term memory network (ConvLSTM2D) to automatically classify mortar pumpability. Experimental results show that the proposed model achieves an accuracy rate of 100% with a fast convergence speed, based on a dataset organized by collecting the corresponding mortar image sequences. This work demonstrates the feasibility of using computer vision and deep learning for mortar pumpability classification.
Abstract: The chest radiograph has been one of the most frequently performed radiological investigations. In clinical medicine, the chest radiograph can provide a technical basis and scientific guidance for recognizing a series of thoracic diseases (such as atelectasis, nodules, and pneumonia). Importantly, it is of paramount importance for clinical screening, diagnosis, treatment planning, and efficacy evaluation. However, automated chest radiograph diagnosis and interpretation at the level of an experienced radiologist remains challenging. In recent years, many studies on biomedical image processing have advanced rapidly with the development of artificial intelligence, especially deep learning techniques and algorithms. How to build an efficient and accurate deep learning model for automatic chest radiograph processing is an important scientific problem that needs to be solved.
Abstract: With COM, VB, and VC, we develop a visual human-computer interaction system for mining association rules. It can mine association rules from databases created with Access and SQL Server, as well as from text files. Through the interactive interface, the user participates in the data mining process, guiding the system to mine satisfactory rules.
Abstract: The rapid rise of cyberattacks and the gradual failure of traditional defense systems and approaches led to the use of artificial intelligence (AI) techniques (such as machine learning (ML) and deep learning (DL)) to build more efficient and reliable intrusion detection systems (IDSs). However, the advent of larger IDS datasets has negatively impacted the performance and computational complexity of AI-based IDSs. Many researchers have used data preprocessing techniques such as feature selection and normalization to overcome such issues. While most of these researchers reported the success of these preprocessing techniques at a shallow level, very few studies have examined their effects on a wider scale. Furthermore, the performance of an IDS model depends not only on the preprocessing techniques utilized but also on the dataset and the ML/DL algorithm used, which most existing studies give little emphasis. Thus, this study provides an in-depth analysis of the effects of feature selection and normalization on IDS models built using three IDS datasets (NSL-KDD, UNSW-NB15, and CSE-CIC-IDS2018) and various AI algorithms. A wrapper-based approach, which tends to give superior performance, and the min-max normalization method were used for feature selection and normalization, respectively. Numerous IDS models were implemented using the full and feature-selected copies of the datasets, with and without normalization. The models were evaluated using popular IDS evaluation metrics, and intra- and inter-model comparisons were performed between the models and with state-of-the-art works. Random forest (RF) models performed better on the NSL-KDD and UNSW-NB15 datasets, with accuracies of 99.86% and 96.01%, respectively, whereas an artificial neural network (ANN) achieved the best accuracy of 95.43% on the CSE-CIC-IDS2018 dataset. The RF models also achieved excellent performance compared to recent works. The results show that normalization and feature selection positively affect IDS modeling. Furthermore, while feature selection benefits simpler algorithms (such as RF), normalization is more useful for complex algorithms like ANNs and deep neural networks (DNNs), and algorithms such as Naive Bayes are unsuitable for IDS modeling. The study also found that the UNSW-NB15 and CSE-CIC-IDS2018 datasets are more complex and more suitable for building and evaluating modern-day IDSs than the NSL-KDD dataset. Our findings suggest that prioritizing robust algorithms like RF, alongside complex models such as ANNs and DNNs, can significantly enhance IDS performance. These insights provide valuable guidance for managers to develop more effective security measures by focusing on high detection rates and low false alert rates.
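The min-max normalization mentioned above can be sketched in a few lines (a minimal per-feature illustration; the names are ours, and a real pipeline would fit the min and max on the training split only to avoid leaking test-set statistics):

```python
def min_max_normalize(values):
    """Rescale a feature column to [0, 1]: x' = (x - min) / (max - min).

    A constant column is mapped to all zeros to avoid division by zero.
    """
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]


# Example: a packet-size feature column from an IDS dataset.
scaled = min_max_normalize([0, 250, 500, 1000])
```

This is the kind of rescaling that helps gradient-based learners such as ANNs and DNNs converge, while tree-based models like RF are largely insensitive to it, consistent with the findings above.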
Abstract: In civil aviation security screening, laptops, with their intricate structural composition, give criminals the potential to conceal dangerous items. Presently, the security process requires passengers to individually present their laptops for inspection. This paper introduces a method for laptop removal. By combining projection algorithms with the YOLOv7-Seg model, a laptop's three views are generated through projection, and instance segmentation of these views is achieved using YOLOv7-Seg. The resulting 2D masks from instance segmentation at different angles are employed to reconstruct a 3D mask through angle restoration. Ultimately, the intersection of this 3D mask with the original 3D data enables the successful extraction of the laptop's 3D information. Experimental results demonstrate that the fusion of projection and instance segmentation facilitates the automatic removal of laptops from CT data. Moreover, higher instance segmentation model accuracy leads to more precise removal outcomes. With the laptop removal functionality in place, the civil aviation security screening process becomes more efficient and convenient: passengers no longer need to individually handle their laptops, effectively enhancing the efficiency and accuracy of security screening.
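The final masking step, intersecting a reconstructed binary 3D mask with the original volume, can be illustrated with a toy voxel-wise sketch. Plain nested lists are used here for self-containment; a real CT pipeline would use array operations, and all names are ours:

```python
def apply_3d_mask(volume, mask):
    """Keep voxel intensities where the binary mask is 1; zero out the rest.

    `volume` and `mask` are nested lists of identical shape
    (planes x rows x columns); the mask holds 0/1 values.
    """
    return [
        [
            [v if m else 0 for v, m in zip(vol_row, mask_row)]
            for vol_row, mask_row in zip(vol_plane, mask_plane)
        ]
        for vol_plane, mask_plane in zip(volume, mask)
    ]


# A 1 x 2 x 2 toy volume: only the masked voxels survive.
volume = [[[10, 20], [30, 40]]]
mask = [[[1, 0], [0, 1]]]
extracted = apply_3d_mask(volume, mask)
```

In the paper's setting the mask would come from back-projecting the 2D instance-segmentation masks of the three views into the CT volume before this intersection is taken.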
Funding: (1) Research on Key Technologies of Safe Campus Construction Based on Multi-sensor Big Data Fusion (Project No. 20190303096sf); (2) Research on Key Technologies of a Smart Campus Management Platform Based on Big Data (Project No. 18dy026); (3) Research on the Application of a BIM-based High-rise Building Fire Rescue and Big Data Escape Planning System (Project No. 2020c019-7).
Abstract: Based on a retrospective analysis of the main points in the development of computer management information systems, and on a study of relevant data concerning enterprises, mass media, personal information, and information security, this paper systematically collects research materials on the role of management information systems in the survival and development of enterprises, in light of the actual situation in our country. To study their impact, the paper focuses on three aspects: application analysis, development direction, and implementation strategy, which can provide a strong reference and guidance for the application of enterprise computer management systems.
Funding: Funded by the Science and Technology Project of Tianjin (No. 24YDTPJC00680) and the National Natural Science Foundation of China (No. 52406191).
Abstract: The growth of computing power in data centers (DCs) leads to an increase in the energy consumption and noise pollution of air cooling systems. Chip-level cooling with a high-efficiency coolant is one of the promising methods to address the cooling challenge for high-power devices in DCs. Hybrid nanofluids (HNFs) have the advantages of high thermal conductivity and good rheological properties. This study summarizes numerical investigations of HNFs in mini/micro heat sinks, including the numerical methods, hydrothermal characteristics, and enhanced heat transfer technologies. The innovations of this paper include: (1) the characteristics, applicable conditions, and scenarios of each theoretical and numerical method are clarified; (2) molecular dynamics (MD) simulation can reveal the synergy effect, micro-motion, and agglomeration morphology of different nanoparticles, while machine learning (ML) presents a feasible method for parameter prediction, providing the opportunity for intelligent regulation of the thermal performance of HNFs; (3) HNF flow boiling and the synergy of passive and active technologies may further improve the overall efficiency of liquid cooling systems in DCs. This review provides valuable insights and references for exploring the multi-phase flow and heat transport mechanisms of HNFs and promoting the practical application of HNFs in chip-level liquid cooling in DCs.