This paper advances new directions for cyber security using adversarial learning and conformal prediction to enhance network and computing services' defenses against adaptive, malicious, persistent, and tactical offensive threats. Conformal prediction is the principled and unified adaptive learning framework used to design, develop, and deploy a multi-faceted, self-managing defensive shield to detect, disrupt, and deny intrusive attacks, hostile and malicious behavior, and subterfuge. Conformal prediction leverages apparent relationships between immunity and intrusion detection using non-conformity measures characteristic of affinity, atypicality, and surprise to recognize patterns and messages as friend or foe and to respond to them accordingly. The solutions proffered throughout are built around active learning, meta-reasoning, randomness, distributed semantics and stratification, and, above all, adaptive Oracles. The motivation for using conformal prediction and its immediate offspring, semi-supervised learning and transduction, is twofold: first, they support discriminative and non-parametric methods characteristic of principled demarcation, using cohorts and sensitivity analysis to hedge on prediction outcomes, including negative selection; second, they provide credibility and confidence indices that assist meta-reasoning and information fusion.
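The credibility and confidence indices mentioned in the abstract have a standard conformal-prediction form, sketched below in pure Python. The nonconformity scores are assumed to come from some underlying measure of affinity or surprise, which the abstract does not specify; this is an illustrative sketch, not the paper's implementation.

```python
def conformal_p_value(calibration_scores, test_score):
    """Smoothed conformal p-value: the fraction of nonconformity scores
    (calibration set plus the test example itself) at least as large as
    the test score. Small p-values mark the example as non-conforming."""
    greater_or_equal = sum(1 for s in calibration_scores if s >= test_score)
    return (greater_or_equal + 1) / (len(calibration_scores) + 1)

def credibility_confidence(p_values):
    """Credibility is the largest p-value over candidate labels;
    confidence is one minus the second largest."""
    ranked = sorted(p_values, reverse=True)
    return ranked[0], 1.0 - ranked[1]
```

In an intrusion-detection setting, a very low credibility over all labels flags a pattern that conforms to no known class, which is the "surprise" signal the abstract alludes to.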
Transthyretin (TTR), a carrier protein present in the liver and choroid plexus of the brain, has been shown to be responsible for binding the thyroid hormone thyroxine (T4) and retinol in plasma and cerebrospinal fluid (CSF). TTR aids in sequestering beta-amyloid (Aβ) peptide deposition and protects the brain from trauma, ischemic stroke, and Alzheimer's disease (AD). Accordingly, hippocampal gene expression of TTR plays a significant role in learning and memory as well as in spatial memory tasks. TTR regulates this process by interacting with the transcription factor CREB, and decreased expression leads to memory deficits. Through different signaling pathways, such as MAPK, AKT, and ERK via Src, TTR provides trophic support through the megalin receptor, promoting neurite outgrowth and protecting neurons from traumatic brain injury. TTR is also responsible for the transient rise in intracellular Ca2+ via the NMDA receptor, playing a dominant role under excitotoxic conditions. In this review, we try to shed light on how TTR is involved in maintaining normal cognitive processes; its role in learning and memory under memory-deficit conditions; the mechanisms by which it promotes neurite outgrowth; and how it protects the brain from AD.
The current research was grounded in prior interdisciplinary research showing that cognitive ability (verbal ability for translating cognitions into oral language) and multiple working-memory endophenotypes (behavioral markers of the genetic or brain bases of language learning) predict reading and writing achievement in students with and without specific learning disabilities in written language (SLDs-WL). Results largely replicated prior findings that the verbally gifted with dyslexia score higher on reading and writing achievement than those with average verbal ability, but not on endophenotypes. The current study extended that research by comparing those with and without SLDs-WL with assessed verbal ability held constant. The verbally gifted without SLDs-WL (n = 14) scored higher than the verbally gifted with SLDs-WL (n = 27) on six language skills (oral sentence construction, best and fastest handwriting in copying, single real-word oral reading accuracy, and oral pseudoword reading accuracy and rate) and four endophenotypes (orthographic and morphological coding, orthographic loop, and switching attention). The verbally average without SLDs-WL (n = 6) scored higher than the verbally average with SLDs-WL (n = 22) on four language skills (best and fastest handwriting in copying, and oral pseudoword reading accuracy and rate) and two endophenotypes (orthographic coding and orthographic loop). Implications of the results for translating interdisciplinary research into flexible definitions for assessment and instruction to serve students with varying verbal abilities, language learning, and endophenotype profiles are discussed along with directions for future research.
The concept of Network Centric Therapy represents an amalgamation of wearable and wireless inertial sensor systems and machine learning with access to a Cloud computing environment. The advent of Network Centric Therapy is highly relevant to the treatment of Parkinson's disease through deep brain stimulation. Originally, wearable and wireless systems for quantifying Parkinson's disease involved the use of a smartphone to quantify hand tremor. Although originally novel, the smartphone has notable issues as a wearable application for quantifying movement disorder tremor. The smartphone has evolved along a pathway that has made it progressively more cumbersome to mount about the dorsum of the hand. Furthermore, the smartphone utilizes an inertial sensor package that is not certified for medical analysis, and the trial data access a provisional Cloud computing environment through an email account. These concerns are resolved with the recent development of a conformal wearable and wireless inertial sensor system. This conformal wearable and wireless system mounts to the hand with the profile of a bandage by adhesive and accesses a secure Cloud computing environment through a segmented wireless connectivity strategy involving a smartphone and tablet. Additionally, the conformal wearable and wireless system is certified by the FDA of the United States of America for ascertaining medical-grade inertial sensor data. These characteristics make the conformal wearable and wireless system uniquely suited for the quantification of Parkinson's disease treatment through deep brain stimulation. Preliminary evaluation of the conformal wearable and wireless system is demonstrated through the differentiation of deep brain stimulation set to "On" and "Off" status.
Based on the robustness of the acceleration signal, this signal was selected to quantify hand tremor for the prescribed deep brain stimulation settings. Machine learning classification using the Waikato Environment for Knowledge Analysis (WEKA) was applied using the multilayer perceptron neural network. The multilayer perceptron neural network achieved considerable classification accuracy for distinguishing between the deep brain stimulation system set to "On" and "Off" status through the quantified acceleration signal data obtained by this recently developed conformal wearable and wireless system. The research achievement establishes a progressive pathway to the future objective of achieving deep brain stimulation capabilities that promote closed-loop acquisition of configuration parameters that are uniquely optimized to the individual through extrinsic means of a highly conformal wearable and wireless inertial sensor system and machine learning with access to Cloud computing resources.
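The classification described above was performed in WEKA (a Java toolkit); the idea can be sketched from scratch in Python as a small multilayer perceptron separating "On" from "Off" windows by a tremor-amplitude feature. All signal parameters (tremor frequency, amplitudes, window length) below are invented for illustration and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hand-acceleration windows: "Off" stimulation shows a strong
# ~5 Hz tremor component, "On" suppresses it (amplitudes are invented).
def make_window(tremor_amp):
    t = np.linspace(0, 2, 200)
    return tremor_amp * np.sin(2 * np.pi * 5 * t) + 0.05 * rng.standard_normal(200)

amps = np.concatenate([rng.uniform(0.5, 1.0, 40),    # "Off": tremor present
                       rng.uniform(0.0, 0.1, 40)])   # "On": tremor suppressed
X = np.array([[np.sqrt(np.mean(make_window(a) ** 2))] for a in amps])  # RMS feature
y = np.array([1] * 40 + [0] * 40)                    # 1 = stimulation "Off"

# Minimal one-hidden-layer perceptron trained by batch gradient descent.
W1 = rng.standard_normal((1, 8)); b1 = np.zeros(8)
W2 = rng.standard_normal(8); b2 = 0.0
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                         # hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))             # sigmoid output
    g = (p - y) / len(y)                             # cross-entropy gradient
    W2 -= 0.5 * h.T @ g; b2 -= 0.5 * g.sum()
    gh = np.outer(g, W2) * (1 - h ** 2)              # backprop through tanh
    W1 -= 0.5 * X.T @ gh; b1 -= 0.5 * gh.sum(axis=0)

p = 1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2)))
accuracy = np.mean((p > 0.5) == y)
```

Because the RMS feature cleanly separates the two stimulation states in this toy setup, the network converges to high training accuracy, mirroring the "considerable classification accuracy" the abstract reports for the real acceleration data.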
Using the latest available artificial intelligence (AI) technology, an advanced algorithm, LIVERFAStTM, has been used to evaluate the diagnostic accuracy of machine learning (ML) biomarker algorithms to assess liver damage. The prevalence of NAFLD (nonalcoholic fatty liver disease) and resulting NASH (nonalcoholic steatohepatitis) is constantly increasing worldwide, creating challenges for screening, as the diagnosis of NASH requires invasive liver biopsy. Key issues in NAFLD patients are the differentiation of NASH from simple steatosis and the identification of advanced hepatic fibrosis. In this prospective study, the staging of three different lesions of the liver to diagnose fatty liver was analyzed using the proprietary ML algorithm LIVERFAStTM, developed with a database of 2862 unique medical assessments of biomarkers, of which 1027 assessments were used to train the algorithm and 1835 constituted the validation set. Data from 13,068 patients who underwent the LIVERFAStTM test for evaluation of fatty liver disease were analyzed. Data evaluation revealed that 11% of the patients exhibited significant fibrosis, with fibrosis scores of 0.6-1.00. Approximately 7% of the population had severe hepatic inflammation. Steatosis was observed in most patients (63%), whereas severe steatosis (S3) was observed in 20%. Using modified SAF (Steatosis, Activity and Fibrosis) scores obtained with the LIVERFAStTM algorithm, NAFLD was detected in 13.41% of the patients (Sx > 0, Ay 0). Approximately 1.91% (Sx > 0, Ay = 2, Fz > 0) of the patients showed NAFLD or NASH scorings, while 1.08% had confirmed NASH (Sx > 0, Ay > 2, Fz = 1-2) and 1.49% had advanced NASH (Sx > 0, Ay > 2, Fz = 3-4).
The modified SAF scoring system generated by LIVERFAStTM provides a simple and convenient evaluation of NAFLD and NASH in a cohort of Southeast Asians. This system may lead to the use of noninvasive liver tests in extended populations for more accurate diagnosis of liver pathology, prediction of the clinical course of individuals at all stages of liver disease, and provision of an efficient system for therapeutic interventions.
With this work, we introduce a novel method for the unsupervised learning of conceptual hierarchies, or concept maps as they are sometimes called, aimed specifically at literary texts. In this, it distinguishes itself from the majority of the research literature on the topic, which is primarily focused on building ontologies from a vast array of data sources, both structured and unstructured, to support various forms of AI, in particular the Semantic Web as envisioned by Tim Berners-Lee. We first elaborate on the mutually informing disciplines of philosophy and computer science, or more specifically the relationships among metaphysics, epistemology, ontology, computing, and AI. This is followed by a technically in-depth discussion of DEBRA, our dependency-tree-based concept hierarchy constructor, which, as its name alludes, constructs a conceptual map in the form of a directed graph illustrating the concepts, their respective relations, and the implied ontological structure of the concepts as encoded in the text, decoded with standard Python NLP libraries such as spaCy and NLTK.
With this work we hope both to augment the Knowledge Representation literature with opportunities for intellectual advancement in AI through more intuitive, less analytical, and well-known forms of knowledge representation from the cognitive science community, and to open up new areas of research between Computer Science and the Humanities with respect to the application of the latest NLP tools and techniques to literature of cultural significance. In doing so, we shed light on existing methods of computation over documents in semantic space that allow, at the very least, for the comparison and evolution of texts through time using vector space math.
Perinatal hypoxic-ischemic-encephalopathy significantly contributes to neonatal death and life-long disability such as cerebral palsy. Advances in signal processing and machine learning have provided the research community with an opportunity to develop automated real-time identification techniques to detect the signs of hypoxic-ischemic-encephalopathy in larger electroencephalography/amplitude-integrated electroencephalography data sets more easily. This review details the recent achievements, performed by a number of prominent research groups across the world, in the automatic identification and classification of hypoxic-ischemic epileptiform neonatal seizures using advanced signal processing and machine learning techniques. This review also addresses the clinical challenges that current automated techniques face in order to be fully utilized by clinicians, and highlights the importance of upgrading current clinical bedside sampling frequencies to higher sampling rates in order to provide better hypoxic-ischemic biomarker detection frameworks. Additionally, the article highlights that current clinical automated epileptiform detection strategies for human neonates have been concerned only with seizure detection after the therapeutic latent phase of injury. Recent animal studies, however, have demonstrated that the latent phase is a critical window of opportunity for the early diagnosis of hypoxic-ischemic-encephalopathy electroencephalography biomarkers, and, although difficult, detection strategies could utilize biomarkers in the latent phase to also predict the onset of future seizures.
Objectives Medical knowledge extraction (MKE) plays a key role in natural language processing (NLP) research on electronic medical records (EMRs), which are the important digital carriers for recording the medical activities of patients. Named entity recognition (NER) and medical relation extraction (MRE) are two basic tasks of MKE. This study aims to improve the recognition accuracy of these two tasks by exploring deep learning methods. Methods This study discussed and built two application scenarios of the bidirectional long short-term memory combined conditional random field (BiLSTM-CRF) model for the NER and MRE tasks. In the data preprocessing of both tasks, a GloVe word embedding model was used to vectorize words. In the NER task, a sequence labeling strategy was used to classify each word tag by the joint probability distribution through the CRF layer. In the MRE task, the medical entity relation category was predicted by transforming the classification problem of a single entity into a sequence classification problem and linking the feature combinations between entities, also through the CRF layer. Results Through validation on the I2B2 2010 public dataset, the BiLSTM-CRF models built in this study achieved much better results than the baseline methods in the two tasks, with F1-measures of up to 0.88 in the NER task and 0.78 in the MRE task. Moreover, the model converged faster and avoided problems such as overfitting. Conclusion This study proved the good performance of deep learning on medical knowledge extraction. It also verified the feasibility of the BiLSTM-CRF model in different application scenarios, laying the foundation for subsequent work in the EMR field.
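The joint tag decoding that a CRF layer performs on top of a BiLSTM's per-token scores can be illustrated with a standalone Viterbi routine. The emission and transition matrices below are invented toy values; in a real BiLSTM-CRF they are produced by the network and learned from data.

```python
import numpy as np

def viterbi(emissions, transitions):
    """Most probable tag sequence given emission scores (T x K, one row
    per token) and transition scores (K x K, score of tag i -> tag j),
    as in the decoding step of a CRF layer."""
    T, K = emissions.shape
    score = emissions[0].copy()          # best score ending in each tag
    back = np.zeros((T, K), dtype=int)   # backpointers
    for t in range(1, T):
        total = score[:, None] + transitions + emissions[t]
        back[t] = total.argmax(axis=0)   # best previous tag for each tag
        score = total.max(axis=0)
    tags = [int(score.argmax())]
    for t in range(T - 1, 0, -1):        # follow backpointers
        tags.append(int(back[t, tags[-1]]))
    return tags[::-1]
```

Unlike independent per-token classification, the transition matrix lets the decoder penalize invalid tag sequences (for example, an inside tag with no preceding begin tag in BIO labeling), which is the point of placing a CRF on top of the BiLSTM.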
Oxidative stress is involved in the pathogenesis of vascular dementia. Studies have shown that lycopene can significantly inhibit oxidative stress; therefore, we hypothesized that lycopene can reduce the level of oxidative stress in vascular dementia. A vascular dementia model was established by permanent bilateral ligation of the common carotid arteries. The dosage groups were treated with lycopene (50, 100 and 200 mg/kg) every other day for 2 months. Rats without bilateral carotid artery ligation were prepared as a sham group. To test learning and memory ability, the Morris water maze was used to detect the average escape latency and the change of search strategy. Hematoxylin-eosin staining was used to observe changes in hippocampal neurons. The levels of the oxidative stress factors superoxide dismutase and malondialdehyde were measured in the hippocampus by biochemical detection. The levels of reactive oxygen species in the hippocampus were observed by dihydroethidium staining. The distribution and expression of the oxidative stress related protein neuron-restrictive silencer factor in hippocampal neurons were detected by immunofluorescence histochemistry and western blot assays.
After 2 months of drug administration: (1) In the model group, the average escape latency was longer than that of the sham group, the proportion of straight and tendency tactics was lower than that of the sham group, and the hippocampal neurons were irregularly arranged with hyperchromatic cytoplasm. (2) The levels of reactive oxygen species and malondialdehyde in the hippocampus of the model group rats were increased, and the activity of superoxide dismutase was decreased. (3) Lycopene (50, 100 and 200 mg/kg) intervention improved the above changes, and the lycopene 100 mg/kg group showed the most significant improvement. (4) Neuron-restrictive silencer factor expression in the hippocampus was lower in the sham group and the lycopene 100 mg/kg group than in the model group. (5) The above data indicate that lycopene 100 mg/kg could protect against the learning and memory impairment of vascular dementia rats. The protective mechanism was achieved by inhibiting oxidative stress in the hippocampus. The experiment was approved by the Animal Ethics Committee of Fujian Medical University, China (approval No. 2014-025) in June 2014.
Post-kidney transplant rejection is a critical factor influencing transplant success rates and the survival of transplanted organs. With the rapid advancement of artificial intelligence technologies, machine learning (ML) has emerged as a powerful data analysis tool, widely applied in the prediction, diagnosis, and mechanistic study of kidney transplant rejection. This mini-review systematically summarizes recent applications of ML techniques in post-kidney transplant rejection, covering areas such as the construction of predictive models, identification of biomarkers, analysis of pathological images, assessment of immune cell infiltration, and formulation of personalized treatment strategies. By integrating multi-omics data and clinical information, ML has significantly enhanced the accuracy of early rejection diagnosis and the capability for prognostic evaluation, driving the development of precision medicine in the field of kidney transplantation. Furthermore, this article discusses the challenges faced in existing research and potential future directions, providing a theoretical basis and technical references for related studies.
The effect of Batroxobin on the spatial memory disorder of left temporal ischemic rats and the expression of HSP32 and HSP70 were investigated with the Morris water maze and immunohistochemistry methods. The results showed that the mean reaction time and distance of temporal ischemic rats in searching for a goal were significantly longer than those of the sham-operated rats, and at the same time HSP32 and HSP70 expression in the left temporal ischemic region was significantly increased compared with the sham-operated rats. However, the mean reaction time and distance of the Batroxobin-treated rats were shorter, and they used normal strategies more often and earlier than the ischemic rats. The number of HSP32 and HSP70 immunoreactive cells in Batroxobin-treated rats was also less than that of the ischemic group. In conclusion, Batroxobin can improve the spatial memory disorder of temporal ischemic rats, and the down-regulation of HSP32 and HSP70 expression is probably related to the attenuation of ischemic injury.
BACKGROUND Artificial intelligence, such as convolutional neural networks (CNNs), has been used in the interpretation of images and the diagnosis of hepatocellular cancer (HCC) and liver masses. CNN, a machine-learning algorithm similar to deep learning, has demonstrated its capability to recognise specific features that can detect pathological lesions. AIM To assess the use of CNNs in examining HCC and liver mass images in the diagnosis of cancer, and to evaluate the accuracy level and performance of CNNs. METHODS The databases PubMed, EMBASE, the Web of Science, and research books were systematically searched using related keywords. Studies analysing pathological anatomy, cellular, and radiological images of HCC or liver masses using CNNs were identified according to the study protocol to detect cancer, differentiate cancer from other lesions, or stage the lesion. The data were extracted as per a predefined extraction protocol. The accuracy level and performance of the CNNs in detecting cancer or early stages of cancer were analysed. The primary outcomes of the study were analysing the type of cancer or liver mass and identifying the type of images that showed optimum accuracy in cancer detection. RESULTS A total of 11 studies that met the selection criteria and were consistent with the aims of the study were identified. The studies demonstrated the ability to differentiate liver masses or differentiate HCC from other lesions (n = 6), HCC from cirrhosis or development of new tumours (n = 3), and HCC nuclei grading or segmentation (n = 2). The CNNs showed satisfactory levels of accuracy. The studies aimed at detecting lesions (n = 4), classification (n = 5), and segmentation (n = 2). Several methods were used to assess the accuracy of the CNN models used. CONCLUSION The role of CNNs in analysing images and as tools in the early detection of HCC or liver masses has been demonstrated in these studies. While a few limitations have been identified in these studies, overall there was an optimal level of accuracy of the CNNs used in the segmentation and classification of liver cancer images.
In this paper we discuss policy iteration methods for approximate solution of a finite-state discounted Markov decision problem, with a focus on feature-based aggregation methods and their connection with deep reinforcement learning schemes. We introduce features of the states of the original problem, and we formulate a smaller "aggregate" Markov decision problem, whose states relate to the features. We discuss properties and possible implementations of this type of aggregation, including a new approach to approximate policy iteration. In this approach the policy improvement operation combines feature-based aggregation with feature construction using deep neural networks or other calculations. We argue that the cost function of a policy may be approximated much more accurately by the nonlinear function of the features provided by aggregation than by the linear function of the features provided by neural network-based reinforcement learning, thereby potentially leading to more effective policy improvement.
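The aggregation idea can be sketched numerically for the simplest case: policy evaluation with hard aggregation, where states sharing a feature value are merged, the smaller aggregate problem is solved, and its values are lifted back to the original states. The 4-state chain below is invented and deliberately chosen so that states within each group have identical dynamics, making aggregation exact; the paper's general framework covers far richer cases.

```python
import numpy as np

# Toy 4-state Markov chain under a fixed policy; states 0,1 and 2,3
# are duplicates of each other, so hard aggregation is exact here.
P = np.array([[0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0]])
g = np.array([1.0, 1.0, 2.0, 2.0])    # per-stage costs
gamma = 0.9
phi = np.array([0, 0, 1, 1])          # feature: aggregate state of each state

# Aggregate problem with uniform disaggregation weights D and
# hard-aggregation matrix Phi.
K = int(phi.max()) + 1
D = np.zeros((K, len(g)))
for k in range(K):
    D[k, phi == k] = 1.0 / np.sum(phi == k)
Phi = np.eye(K)[phi]                  # (n x K) membership matrix
P_agg = D @ P @ Phi                   # aggregate transition probabilities
g_agg = D @ g                         # aggregate per-stage costs

V = np.zeros(K)
for _ in range(500):                  # value iteration on the aggregate MDP
    V = g_agg + gamma * P_agg @ V

J_approx = Phi @ V                    # lift back: J(i) ~ V(phi(i))
J_exact = np.linalg.solve(np.eye(4) - gamma * P, g)
```

The approximation `J_approx` is piecewise constant over the feature groups, i.e., a (nonlinear) function of the features, which is the form of approximation the abstract contrasts with linear feature-based architectures.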
The field of biometric identification has seen significant advancements over the years, with research focusing on enhancing the accuracy and security of these systems. One of the key developments is the integration of deep learning techniques in biometric systems. However, despite these advancements, certain challenges persist. One of the most significant is scalability under growing complexity. Traditional methods either require maintaining and securing a growing database, introducing serious security challenges, or rely on retraining the entire model when new data is introduced, a process that can be computationally expensive and complex. This challenge underscores the need for more efficient methods to scale securely. To this end, we introduce a novel approach that addresses these challenges by integrating multimodal biometrics, cancelable biometrics, and incremental learning techniques. This work is among the first attempts to seamlessly incorporate deep cancelable biometrics with dynamic architectural updates, applied incrementally to the deep learning model as new users are enrolled, achieving high performance with minimal catastrophic forgetting. By leveraging a one-dimensional convolutional neural network (1D-CNN) architecture combined with a hybrid incremental learning approach, our system achieves high recognition accuracy, averaging 98.98% over incrementing datasets, while ensuring user privacy through cancelable templates generated via a pre-trained CNN model and random projection. The approach demonstrates remarkable adaptability, utilizing the least intrusive biometric traits, like facial features and fingerprints, ensuring not only robust performance but also long-term serviceability.
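The random-projection step behind cancelable templates can be sketched in a few lines. The feature vector would come from a pre-trained CNN in the system described above; here it is a stand-in array, and the key value and dimensions are invented for illustration.

```python
import numpy as np

def cancelable_template(feature_vec, user_key, out_dim=64):
    """Project a biometric feature vector through a random matrix seeded
    by a user-specific key. The template is matchable (the projection is
    approximately distance-preserving) but non-invertible in practice,
    and revoking the key revokes the template."""
    rng = np.random.default_rng(user_key)
    R = rng.standard_normal((out_dim, feature_vec.size)) / np.sqrt(out_dim)
    return R @ feature_vec

feature = np.linspace(0.0, 1.0, 128)                     # stand-in embedding
t_enrolled = cancelable_template(feature, user_key=42)
t_query = cancelable_template(feature, user_key=42)      # same key: matches
t_revoked = cancelable_template(feature, user_key=43)    # new key: new template
```

If a template database is compromised, re-enrollment with a fresh key yields templates uncorrelated with the leaked ones, while the underlying biometric itself is never stored.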
The nutritional management of patients with esophageal cancer (EC) presents significant complexities, with traditional approaches facing inherent limitations in data collection, real-time decision-making, and personalized care. This narrative review explores the transformative potential of artificial intelligence (AI) and machine learning (ML), particularly deep learning (DL) and reinforcement learning (RL), in revolutionizing nutritional support for this vulnerable patient population. DL has demonstrated remarkable capabilities in enhancing the accuracy and objectivity of nutritional assessment through precise, automated body composition analysis from medical imaging, offering valuable prognostic insights. Concurrently, RL enables the dynamic optimization of nutritional interventions, adapting them in real time to individual patient responses and paving the way for truly personalized care paradigms. Although AI/ML offers potential advantages in efficiency, precision, and personalization by integrating multidimensional data for superior clinical decision support, its widespread adoption is accompanied by critical challenges. These include safeguarding data privacy and security, mitigating algorithmic bias, ensuring transparency and accountability, and establishing rigorous clinical validation. Early evidence suggests the feasibility of applying AI/ML in nutritional risk stratification and workflow optimization, but high-quality prospective studies are needed to demonstrate a direct impact on clinical outcomes, including complications, readmissions, and survival. Overcoming these hurdles necessitates robust ethical governance, interdisciplinary collaboration, and continuous education. Ultimately, the strategic integration of AI/ML holds immense promise to profoundly improve patient outcomes, enhance quality of life, and optimize health care resource utilization in the nutritional management of esophageal cancer.
In this paper, the model output machine learning (MOML) method is proposed for simulating weather consultation, which can improve the forecast results of numerical weather prediction (NWP). During weather consultation, the forecasters obtain the final results by combining the observations with the NWP results and giving opinions based on their experience. It is obvious that using a suitable post-processing algorithm to simulate weather consultation is an interesting and important topic. MOML is a post-processing method based on machine learning, which matches NWP forecasts against observations through a regression function. By adopting different feature engineering of datasets and training periods, the observational and model data can be processed into the corresponding training set and test set. The MOML regression function uses an existing machine learning algorithm with the processed dataset to revise the output of NWP models in combination with the observations, so as to improve the results of weather forecasts. To test the new approach for grid temperature forecasts, the 2-m surface air temperature in the Beijing area from the ECMWF model is used. MOML with different feature engineering is compared against the ECMWF model and the modified model output statistics (MOS) method. MOML shows better numerical performance than the ECMWF model and MOS, especially for winter. The results of MOML with a linear algorithm, running training period, and a dataset using spatial interpolation ideas are better than the others when the forecast time is within a few days. The results of MOML with the random forest algorithm, year-round training period, and a dataset containing surrounding grid-point information are better when the forecast time is longer.
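The core of regression-based post-processing can be sketched as follows: learn a map from model output to observations on a training period, then apply it to correct later forecasts. The temperatures, bias, and slope below are synthetic and purely illustrative; MOML's actual feature engineering and algorithms are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "truth" (observed 2-m temperature, degrees C) and a biased
# NWP forecast of it: systematic slope and offset errors plus noise.
truth = 15 + 5 * np.sin(np.linspace(0, 6, 120))
nwp = 1.1 * truth + 2.0 + 0.3 * rng.standard_normal(120)

# Fit a linear regression (observation ~ forecast) on a training period.
X_train, y_train = nwp[:90], truth[:90]
A = np.vstack([X_train, np.ones_like(X_train)]).T
coef, intercept = np.linalg.lstsq(A, y_train, rcond=None)[0]

# Correct the held-out forecasts and compare mean absolute errors.
corrected = coef * nwp[90:] + intercept
err_raw = np.mean(np.abs(nwp[90:] - truth[90:]))
err_cor = np.mean(np.abs(corrected - truth[90:]))
```

The regression removes the systematic part of the model error, which is exactly what a forecaster does implicitly during consultation; MOML generalizes this with richer feature engineering and nonlinear learners such as random forests.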
This paper realizes a sign language-to-speech conversion system to solve the communication problem between healthy people and people with speech disorders. Thirty kinds of static sign language are first recognized by combining a support vector machine (SVM) with restricted Boltzmann machine (RBM) based regulation and feedback fine-tuning of the deep model. The text of the sign language is then obtained from the recognition results. A context-dependent label is generated from the recognized text by a text analyzer. Meanwhile, a hidden Markov model (HMM) based Mandarin-Tibetan bilingual speech synthesis system is developed using speaker adaptive training. Mandarin or Tibetan speech is then naturally synthesized using the context-dependent labels generated from the recognized sign language. Tests show that the static sign language recognition rate of the designed system reaches 93.6%. Subjective evaluation demonstrates that the synthesized speech achieves a mean opinion score (MOS) of 4.0.
Machine learning is an emerging method to discover new materials with specific characteristics.An unsupervised machine learning research is highlighted to discover new potential lithium ionic conductors by screening a...Machine learning is an emerging method to discover new materials with specific characteristics.An unsupervised machine learning research is highlighted to discover new potential lithium ionic conductors by screening and clustering lithium compounds,providing inspirations for the development of solid-state electrolytes and practical batteries.展开更多
CC’s(Cloud Computing)networks are distributed and dynamic as signals appear/disappear or lose significance.MLTs(Machine learning Techniques)train datasets which sometime are inadequate in terms of sample for inferrin...CC’s(Cloud Computing)networks are distributed and dynamic as signals appear/disappear or lose significance.MLTs(Machine learning Techniques)train datasets which sometime are inadequate in terms of sample for inferring information.A dynamic strategy,DevMLOps(Development Machine Learning Operations)used in automatic selections and tunings of MLTs result in significant performance differences.But,the scheme has many disadvantages including continuity in training,more samples and training time in feature selections and increased classification execution times.RFEs(Recursive Feature Eliminations)are computationally very expensive in its operations as it traverses through each feature without considering correlations between them.This problem can be overcome by the use of Wrappers as they select better features by accounting for test and train datasets.The aim of this paper is to use DevQLMLOps for automated tuning and selections based on orchestrations and messaging between containers.The proposed AKFA(Adaptive Kernel Firefly Algorithm)is for selecting features for CNM(Cloud Network Monitoring)operations.AKFA methodology is demonstrated using CNSD(Cloud Network Security Dataset)with satisfactory results in the performance metrics like precision,recall,F-measure and accuracy used.展开更多
BACKGROUND The accurate prediction of lymph node metastasis (LNM) is crucial for managing locally advanced (T3/T4) colorectal cancer (CRC). However, both traditional histopathology and standard slide-level deep learning often fail to capture the sparse and diagnostically critical features of metastatic potential. AIM To develop and validate a case-level multiple-instance learning (MIL) framework mimicking a pathologist's comprehensive review and improve T3/T4 CRC LNM prediction. METHODS The whole-slide images of 130 patients with T3/T4 CRC were retrospectively collected. A case-level MIL framework utilising the CONCH v1.5 and UNI2-h deep learning models was trained on features from all haematoxylin and eosin-stained primary tumour slides for each patient. These pathological features were subsequently integrated with clinical data, and model performance was evaluated using the area under the curve (AUC). RESULTS The case-level framework demonstrated superior LNM prediction over slide-level training, with the CONCH v1.5 model achieving a mean AUC (± SD) of 0.899 ± 0.033 vs 0.814 ± 0.083, respectively. Integrating pathology features with clinical data further enhanced performance, yielding a top model with a mean AUC of 0.904 ± 0.047, in sharp contrast to a clinical-only model (mean AUC 0.584 ± 0.084). Crucially, a pathologist's review confirmed that the model-identified high-attention regions correspond to known high-risk histopathological features. CONCLUSION A case-level MIL framework provides a superior approach for predicting LNM in advanced CRC. This method shows promise for risk stratification and therapy decisions, requiring further validation.
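As a toy illustration of the attention pooling at the heart of such case-level MIL frameworks, the sketch below aggregates per-tile feature vectors into a single case-level embedding via softmax attention weights (the weights play the role of the "high-attention regions"). The dimensions, the random parameters `V` and `w`, and the feature values are all invented for illustration; the actual study uses trained CONCH/UNI2-h features, not this toy.

```python
import numpy as np

def attention_mil_pool(instances, V, w):
    """Attention MIL pooling: score each instance, softmax, weighted sum."""
    # instances: (n_instances, d); V: (d, h); w: (h,)
    scores = np.tanh(instances @ V) @ w      # one scalar score per instance
    a = np.exp(scores - scores.max())
    a /= a.sum()                             # attention weights, sum to 1
    return a, a @ instances                  # weights, bag-level embedding

rng = np.random.default_rng(0)
bag = rng.normal(size=(12, 8))               # 12 tile features of dim 8 (toy)
V = rng.normal(size=(8, 4))
w = rng.normal(size=4)
weights, embedding = attention_mil_pool(bag, V, w)
```

The weights can then be mapped back onto tile locations to visualise which regions drove the case-level prediction.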
Abstract: This paper advances new directions for cyber security using adversarial learning and conformal prediction to enhance network and computing services defenses against adaptive, malicious, persistent, and tactical offensive threats. Conformal prediction is the principled and unified adaptive learning framework used to design, develop, and deploy a multi-faceted, self-managing defensive shield to detect, disrupt, and deny intrusive attacks, hostile and malicious behavior, and subterfuge. Conformal prediction leverages apparent relationships between immunity and intrusion detection, using non-conformity measures characteristic of affinity, atypicality, and surprise to recognize patterns and messages as friend or foe and to respond to them accordingly. The solutions proffered throughout are built around active learning, meta-reasoning, randomness, distributed semantics and stratification, and, above all, adaptive Oracles. The motivation for using conformal prediction and its immediate offspring, semi-supervised learning and transduction, is twofold: first, they support discriminative and non-parametric methods characteristic of principled demarcation, using cohorts and sensitivity analysis to hedge on prediction outcomes, including negative selection; second, they provide credibility and confidence indices that assist meta-reasoning and information fusion.
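A minimal sketch of the conformal machinery described above: a non-conformity ("strangeness") score for a new message is converted into a p-value against a calibration set, and a message whose p-value falls below a chosen significance level is flagged as foe. The scores and the threshold are invented for illustration and are not the paper's actual measures.

```python
import numpy as np

def conformal_p_value(cal_scores, test_score):
    """Split-conformal p-value: fraction of calibration non-conformity
    scores at least as large as the test score (test point included)."""
    cal_scores = np.asarray(cal_scores)
    return (np.sum(cal_scores >= test_score) + 1) / (len(cal_scores) + 1)

# Hypothetical non-conformity scores for known-benign calibration traffic.
calibration = np.array([0.1, 0.2, 0.15, 0.12, 0.3, 0.18, 0.25, 0.22, 0.11, 0.28])

p_benign = conformal_p_value(calibration, 0.2)   # typical -> large p-value
p_attack = conformal_p_value(calibration, 5.0)   # surprising -> small p-value
flag_as_foe = p_attack < 0.1                     # reject at significance 0.1
```

The p-value is what supplies the "credibility" index: low p-values mean the observation conforms poorly to previously seen friendly behavior.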
Abstract: Transthyretin (TTR), a carrier protein present in the liver and choroid plexus of the brain, has been shown to be responsible for binding the thyroid hormone thyroxine (T4) and retinol in plasma and cerebrospinal fluid (CSF). TTR aids in sequestering beta-amyloid (Aβ) peptide deposition, and protects the brain from trauma, ischemic stroke, and Alzheimer's disease (AD). Accordingly, hippocampal gene expression of TTR plays a significant role in learning and memory as well as in spatial memory tasks. TTR regulates this process by interacting with the transcription factor CREB, and decreased expression leads to memory deficits. Through different signaling pathways, such as MAPK, AKT, and ERK via Src, TTR provides trophic support through the megalin receptor by promoting neurite outgrowth and protecting neurons from traumatic brain injury. TTR is also responsible for the transient rise in intracellular Ca2+ via the NMDA receptor, playing a dominant role under excitotoxic conditions. In this review, we shed light on how TTR is involved in maintaining normal cognitive processes and its role in learning and memory under memory deficit conditions; the mechanisms by which it promotes neurite outgrowth; and how it protects the brain from Alzheimer's disease (AD).
Abstract: The current research was grounded in prior interdisciplinary research that showed cognitive ability (verbal ability for translating cognitions into oral language) and multiple working-memory endophenotypes (behavioral markers of genetic or brain bases of language learning) predict reading and writing achievement in students with and without specific learning disabilities in written language (SLDs-WL). Results largely replicated prior findings that the verbally gifted with dyslexia score higher on reading and writing achievement than those with average verbal ability, but not on endophenotypes. The current study extended that research by comparing those with and without SLDs-WL with assessed verbal ability held constant. The verbally gifted without SLDs-WL (n = 14) scored higher than the verbally gifted with SLDs-WL (n = 27) on six language skills (oral sentence construction, best and fastest handwriting in copying, single real word oral reading accuracy, oral pseudoword reading accuracy and rate) and four endophenotypes (orthographic and morphological coding, orthographic loop, and switching attention). The verbally average without SLDs-WL (n = 6) scored higher than the verbally average with SLDs-WL (n = 22) on four language skills (best and fastest handwriting in copying, oral pseudoword reading accuracy and rate) and two endophenotypes (orthographic coding and orthographic loop). Implications of the results for translating interdisciplinary research into flexible definitions for assessment and instruction to serve students with varying verbal abilities and language learning and endophenotype profiles are discussed, along with directions for future research.
Abstract: The concept of Network Centric Therapy represents an amalgamation of wearable and wireless inertial sensor systems and machine learning with access to a Cloud computing environment. The advent of Network Centric Therapy is highly relevant to the treatment of Parkinson's disease through deep brain stimulation. Originally, wearable and wireless systems for quantifying Parkinson's disease involved the use of a smartphone to quantify hand tremor. Although originally novel, the smartphone has notable issues as a wearable application for quantifying movement disorder tremor. The smartphone has evolved in a pathway that has made it progressively more cumbersome to mount about the dorsum of the hand. Furthermore, the smartphone utilizes an inertial sensor package that is not certified for medical analysis, and the trial data access a provisional Cloud computing environment through an email account. These concerns are resolved with the recent development of a conformal wearable and wireless inertial sensor system. This conformal wearable and wireless system mounts to the hand with the profile of a bandage by adhesive and accesses a secure Cloud computing environment through a segmented wireless connectivity strategy involving a smartphone and tablet. Additionally, the conformal wearable and wireless system is certified by the FDA of the United States of America for ascertaining medical-grade inertial sensor data. These characteristics make the conformal wearable and wireless system uniquely suited for the quantification of Parkinson's disease treatment through deep brain stimulation. Preliminary evaluation of the conformal wearable and wireless system is demonstrated through the differentiation of deep brain stimulation set to "On" and "Off" status. Based on the robustness of the acceleration signal, this signal was selected to quantify hand tremor for the prescribed deep brain stimulation settings.
Machine learning classification using the Waikato Environment for Knowledge Analysis (WEKA) was applied using the multilayer perceptron neural network. The multilayer perceptron neural network achieved considerable classification accuracy for distinguishing between the deep brain stimulation system set to “On” and “Off” status through the quantified acceleration signal data obtained by this recently developed conformal wearable and wireless system. The research achievement establishes a progressive pathway to the future objective of achieving deep brain stimulation capabilities that promote closed-loop acquisition of configuration parameters that are uniquely optimized to the individual through extrinsic means of a highly conformal wearable and wireless inertial sensor system and machine learning with access to Cloud computing resources.
Abstract: Using the latest available artificial intelligence (AI) technology, an advanced algorithm, LIVERFASt™, has been used to evaluate the diagnostic accuracy of machine learning (ML) biomarker algorithms to assess liver damage. The prevalence of NAFLD (nonalcoholic fatty liver disease) and resulting NASH (nonalcoholic steatohepatitis) are constantly increasing worldwide, creating challenges for screening, as the diagnosis of NASH requires invasive liver biopsy. Key issues in NAFLD patients are the differentiation of NASH from simple steatosis and the identification of advanced hepatic fibrosis. In this prospective study, the staging of three different lesions of the liver to diagnose fatty liver was analysed using the proprietary ML algorithm LIVERFASt™, developed with a database of 2862 unique medical assessments of biomarkers, where 1027 assessments were used to train the algorithm and 1835 constituted the validation set. Data from 13,068 patients who underwent the LIVERFASt™ test for evaluation of fatty liver disease were analysed. Data evaluation revealed that 11% of the patients exhibited significant fibrosis, with fibrosis scores of 0.6 - 1.00. Approximately 7% of the population had severe hepatic inflammation. Steatosis was observed in most patients (63%), whereas severe steatosis (S3) was observed in 20%. Using modified SAF (Steatosis, Activity and Fibrosis) scores obtained using the LIVERFASt™ algorithm, NAFLD was detected in 13.41% of the patients (Sx > 0, Ay 0). Approximately 1.91% (Sx > 0, Ay = 2, Fz > 0) of the patients showed NAFLD or NASH scorings, while 1.08% had confirmed NASH (Sx > 0, Ay > 2, Fz = 1 - 2) and 1.49% had advanced NASH (Sx > 0, Ay > 2, Fz = 3 - 4). The modified SAF scoring system generated by LIVERFASt™ provides a simple and convenient evaluation of NAFLD and NASH in a cohort of Southeast Asians.
This system may lead to the use of noninvasive liver tests in extended populations for more accurate diagnosis of liver pathology, prediction of clinical path of individuals at all stages of liver diseases, and provision of an efficient system for therapeutic interventions.
Abstract: With this work, we introduce a novel method for the unsupervised learning of conceptual hierarchies, or concept maps as they are sometimes called, aimed specifically at literary texts. This distinguishes it from the majority of the research literature on the topic, which is primarily focused on building ontologies from a vast array of different types of data sources, both structured and unstructured, to support various forms of AI, in particular the Semantic Web as envisioned by Tim Berners-Lee. We first elaborate on the mutually informing disciplines of philosophy and computer science, or more specifically the relationship between metaphysics, epistemology, ontology, computing, and AI. This is followed by a technically in-depth discussion of DEBRA, our dependency-tree-based concept hierarchy constructor, which, as its name alludes to, constructs a conceptual map in the form of a directed graph illustrating the concepts, their respective relations, and the implied ontological structure of the concepts as encoded in the text, decoded with standard Python NLP libraries such as spaCy and NLTK. With this work we hope both to augment the Knowledge Representation literature with opportunities for intellectual advancement in AI through more intuitive, less analytical, and well-known forms of knowledge representation from the cognitive science community, and to open up new areas of research between Computer Science and the Humanities with respect to the application of the latest NLP tools and techniques to literature of cultural significance, shedding light on existing methods of computation with respect to documents in semantic space that effectively allow for, at the very least, the comparison and evolution of texts through time, using vector space math.
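To make the directed-graph idea concrete, here is a heavily simplified sketch of building such a concept hierarchy. Rather than running a spaCy pipeline (which requires a downloaded language model), it starts from hand-written (head, relation, dependent) triples standing in for parsed dependencies; `build_concept_graph`, `descendants`, and the toy triples are illustrative assumptions, not DEBRA's actual API or output.

```python
from collections import defaultdict

# Toy stand-in for dependency-parse output: (governing concept, relation, dependent).
triples = [
    ("animal", "hypernym_of", "horse"),
    ("animal", "hypernym_of", "whale"),
    ("horse",  "hypernym_of", "mare"),
    ("whale",  "lives_in",    "ocean"),
]

def build_concept_graph(triples):
    """Directed graph: concept -> list of (relation, dependent concept)."""
    graph = defaultdict(list)
    for head, rel, dep in triples:
        graph[head].append((rel, dep))
    return graph

def descendants(graph, concept):
    """All concepts reachable below `concept` (the implied ontological subtree)."""
    out, stack = set(), [concept]
    while stack:
        node = stack.pop()
        for _, child in graph.get(node, []):
            if child not in out:
                out.add(child)
                stack.append(child)
    return out

graph = build_concept_graph(triples)
```

In the real system the triples would come from spaCy dependency parses of sentences in the literary text rather than a hand-written list.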
Funding: Supported by the Auckland Medical Research Foundation, No. 1117017 (to CPU).
Abstract: Perinatal hypoxic-ischemic encephalopathy significantly contributes to neonatal death and life-long disability such as cerebral palsy. Advances in signal processing and machine learning have provided the research community with an opportunity to develop automated real-time identification techniques to detect the signs of hypoxic-ischemic encephalopathy in larger electroencephalography/amplitude-integrated electroencephalography data sets more easily. This review details the recent achievements, performed by a number of prominent research groups across the world, in the automatic identification and classification of hypoxic-ischemic epileptiform neonatal seizures using advanced signal processing and machine learning techniques. This review also addresses the clinical challenges that current automated techniques face in order to be fully utilized by clinicians, and highlights the importance of upgrading current clinical bedside sampling frequencies to higher sampling rates in order to provide better hypoxic-ischemic biomarker detection frameworks. Additionally, the article highlights that current clinical automated epileptiform detection strategies for human neonates have been concerned only with seizure detection after the therapeutic latent phase of injury, whereas recent animal studies have demonstrated that the latent phase of opportunity is critically important for early diagnosis of hypoxic-ischemic encephalopathy electroencephalography biomarkers; although difficult, detection strategies could utilize biomarkers in the latent phase to also predict the onset of future seizures.
Funding: Supported by the Zhejiang Provincial Natural Science Foundation (No. LQ16H180004).
Abstract: Objectives Medical knowledge extraction (MKE) plays a key role in natural language processing (NLP) research on electronic medical records (EMRs), which are the important digital carriers for recording the medical activities of patients. Named entity recognition (NER) and medical relation extraction (MRE) are two basic tasks of MKE. This study aims to improve the recognition accuracy of these two tasks by exploring deep learning methods. Methods This study discussed and built two application scenes of the bidirectional long short-term memory combined conditional random field (BiLSTM-CRF) model for the NER and MRE tasks. In the data preprocessing of both tasks, a GloVe word embedding model was used to vectorize words. In the NER task, a sequence labeling strategy was used to classify each word tag by the joint probability distribution through the CRF layer. In the MRE task, the medical entity relation category was predicted by transforming the classification problem of a single entity into a sequence classification problem, and by linking the feature combinations between entities, also through the CRF layer. Results Through validation on the I2B2 2010 public dataset, the BiLSTM-CRF models built in this study achieved much better results than the baseline methods in the two tasks, with an F1-measure of up to 0.88 in the NER task and 0.78 in the MRE task. Moreover, the model converged faster and avoided problems such as overfitting. Conclusion This study proved the good performance of deep learning on medical knowledge extraction. It also verified the feasibility of the BiLSTM-CRF model in different application scenarios, laying the foundation for subsequent work in the EMR field.
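The decoding step performed by a CRF layer can be illustrated with a small, self-contained Viterbi routine: given per-token emission scores (standing in for BiLSTM outputs) and tag-transition scores, it finds the jointly best tag sequence. All numbers below are invented, and the 2-tag set is a toy stand-in for a real BIO label scheme.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Best tag sequence under linear-chain CRF scoring:
    score = sum_t emissions[t, y_t] + sum_t transitions[y_{t-1}, y_t]."""
    T, K = emissions.shape
    dp = emissions[0].copy()                 # best score ending in each tag
    back = np.zeros((T, K), dtype=int)       # backpointers
    for t in range(1, T):
        # cand[i, j]: best score ending at t with tag j, coming from tag i
        cand = dp[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        dp = cand.max(axis=0)
    tags = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):            # follow backpointers
        tags.append(int(back[t, tags[-1]]))
    return tags[::-1]

# Toy 3-token sentence, 2 tags (0 = O, 1 = ENTITY).
emissions = np.array([[2.0, 0.0], [0.0, 1.5], [0.5, 1.0]])
transitions = np.array([[0.5, -0.2], [-0.2, 1.0]])  # ENTITY->ENTITY favoured
path = viterbi_decode(emissions, transitions)
# path -> [0, 1, 1]: the transition bonus keeps the entity span contiguous
```

This is why a CRF layer helps sequence labeling: the transition matrix lets adjacent tag decisions inform each other instead of being made token by token.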
Funding: Financially supported by the National Innovation and Entrepreneurship Training Project of China in 2013, No. 201310392009 (to XZZ), and the Innovation and Entrepreneurship Training Project of Fujian Province of China in 2014, No. 201410392058 (to XZZ).
Abstract: Oxidative stress is involved in the pathogenesis of vascular dementia. Studies have shown that lycopene can significantly inhibit oxidative stress; therefore, we hypothesized that lycopene can reduce the level of oxidative stress in vascular dementia. A vascular dementia model was established by permanent bilateral ligation of the common carotid arteries. The dosage groups were treated with lycopene (50, 100 and 200 mg/kg) every other day for 2 months. Rats without bilateral carotid artery ligation were prepared as a sham group. To test learning and memory ability, the Morris water maze was used to detect the average escape latency and the change of search strategy. Hematoxylin-eosin staining was used to observe changes in hippocampal neurons. The levels of the oxidative stress factors superoxide dismutase and malondialdehyde were measured in the hippocampus by biochemical detection. The levels of reactive oxygen species in the hippocampus were observed by dihydroethidium staining. The distribution and expression of the oxidative stress related protein, neuron-restrictive silencer factor, in hippocampal neurons were detected by immunofluorescence histochemistry and western blot assays.
After 2 months of drug administration: (1) In the model group, the average escape latency was longer than that of the sham group, the proportion of straight and tend tactics was lower than that of the sham group, and the hippocampal neurons were irregularly arranged with hyperchromatic cytoplasm. (2) The levels of reactive oxygen species and malondialdehyde in the hippocampus of the model group rats were increased, and the activity of superoxide dismutase was decreased. (3) Lycopene (50, 100 and 200 mg/kg) intervention improved the above changes, and the lycopene 100 mg/kg group showed the most significant improvement. (4) Neuron-restrictive silencer factor expression in the hippocampus was lower in the sham group and the lycopene 100 mg/kg group than in the model group. (5) The above data indicate that lycopene 100 mg/kg could protect against the learning and memory impairment of vascular dementia rats. The protective mechanism was achieved by inhibiting oxidative stress in the hippocampus. The experiment was approved by the Animal Ethics Committee of Fujian Medical University, China (approval No. 2014-025) in June 2014.
Abstract: Post-kidney transplant rejection is a critical factor influencing transplant success rates and the survival of transplanted organs. With the rapid advancement of artificial intelligence technologies, machine learning (ML) has emerged as a powerful data analysis tool, widely applied in the prediction, diagnosis, and mechanistic study of kidney transplant rejection. This mini-review systematically summarizes the recent applications of ML techniques in post-kidney transplant rejection, covering areas such as the construction of predictive models, identification of biomarkers, analysis of pathological images, assessment of immune cell infiltration, and formulation of personalized treatment strategies. By integrating multi-omics data and clinical information, ML has significantly enhanced the accuracy of early rejection diagnosis and the capability for prognostic evaluation, driving the development of precision medicine in the field of kidney transplantation. Furthermore, this article discusses the challenges faced in existing research and potential future directions, providing a theoretical basis and technical references for related studies.
Abstract: The effect of Batroxobin on the spatial memory disorder of left temporal ischemic rats and on the expression of HSP32 and HSP70 was investigated with the Morris water maze and immunohistochemistry methods. The results showed that the mean reaction time and distance of temporal ischemic rats in searching for a goal were significantly longer than those of the sham-operated rats, and at the same time HSP32 and HSP70 expression in the left temporal ischemic region was significantly increased compared with the sham-operated rats. However, the mean reaction time and distance of the Batroxobin-treated rats were shorter, and they used normal strategies more often and earlier than the ischemic rats. The number of HSP32- and HSP70-immunoreactive cells in the Batroxobin-treated rats was also lower than that of the ischemic group. In conclusion, Batroxobin can improve the spatial memory disorder of temporal ischemic rats, and the down-regulation of HSP32 and HSP70 expression is probably related to the attenuation of ischemic injury.
Funding: Supported by the College of Medicine Research Centre, Deanship of Scientific Research, King Saud University, Riyadh, Saudi Arabia.
Abstract: BACKGROUND Artificial intelligence, such as convolutional neural networks (CNNs), has been used in the interpretation of images and the diagnosis of hepatocellular cancer (HCC) and liver masses. CNN, a machine-learning algorithm similar to deep learning, has demonstrated its capability to recognise specific features that can detect pathological lesions. AIM To assess the use of CNNs in examining HCC and liver mass images in the diagnosis of cancer, and to evaluate the accuracy level and performance of CNNs. METHODS The databases PubMed, EMBASE, the Web of Science, and research books were systematically searched using related keywords. Studies analysing pathological anatomy, cellular, and radiological images of HCC or liver masses using CNNs were identified according to the study protocol to detect cancer, differentiate cancer from other lesions, or stage the lesion. The data were extracted as per a predefined extraction protocol. The accuracy level and performance of the CNNs in detecting cancer or early stages of cancer were analysed. The primary outcomes of the study were analysing the type of cancer or liver mass and identifying the type of images that showed optimum accuracy in cancer detection. RESULTS A total of 11 studies that met the selection criteria and were consistent with the aims of the study were identified. The studies demonstrated the ability to differentiate liver masses or differentiate HCC from other lesions (n = 6), HCC from cirrhosis or development of new tumours (n = 3), and HCC nuclei grading or segmentation (n = 2). The CNNs showed satisfactory levels of accuracy. The studies aimed at detecting lesions (n = 4), classification (n = 5), and segmentation (n = 2). Several methods were used to assess the accuracy of the CNN models used. CONCLUSION The role of CNNs in analysing images and as tools in the early detection of HCC or liver masses has been demonstrated in these studies. While a few limitations have been identified in these studies, overall there was an optimal level of accuracy of the CNNs used in the segmentation and classification of liver cancer images.
Abstract: In this paper we discuss policy iteration methods for approximate solution of a finite-state discounted Markov decision problem, with a focus on feature-based aggregation methods and their connection with deep reinforcement learning schemes. We introduce features of the states of the original problem, and we formulate a smaller "aggregate" Markov decision problem, whose states relate to the features. We discuss properties and possible implementations of this type of aggregation, including a new approach to approximate policy iteration. In this approach the policy improvement operation combines feature-based aggregation with feature construction using deep neural networks or other calculations. We argue that the cost function of a policy may be approximated much more accurately by the nonlinear function of the features provided by aggregation than by the linear function of the features provided by neural network-based reinforcement learning, thereby potentially leading to more effective policy improvement.
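A minimal numerical sketch of hard feature-based aggregation: states sharing a feature value share a single aggregate cost, and value iteration is run through that constraint, giving a piecewise-constant approximation of the optimal cost. The 4-state chain, rewards, and feature map `phi` below are invented toy data under stated assumptions, not an example from the paper.

```python
import numpy as np

def aggregate_value_iteration(P, r, phi, n_agg, gamma=0.9, iters=200):
    """Hard-aggregation value iteration: states with equal phi(s) share one value."""
    # P: (A, S, S) transition probs; r: (A, S) rewards; phi: (S,) cluster ids.
    A, S, _ = P.shape
    V = np.zeros(n_agg)
    members = [np.flatnonzero(phi == k) for k in range(n_agg)]
    for _ in range(iters):
        q = r + gamma * P @ V[phi]        # backup through the aggregate values
        v_state = q.max(axis=0)           # greedy over actions, per state
        V = np.array([v_state[m].mean() for m in members])
    return V, v_state

# Toy 4-state, 2-action chain; states {0,1} and {2,3} share a feature each.
P = np.zeros((2, 4, 4))
P[0] = np.eye(4)                          # action 0: stay
P[1] = np.roll(np.eye(4), 1, axis=1)      # action 1: move right (wraps around)
r = np.array([[0., 0., 0., 1.],           # staying in state 3 pays 1
              [0., 0., 1., 0.]])          # moving right from state 2 pays 1
phi = np.array([0, 0, 1, 1])
V_agg, V_states = aggregate_value_iteration(P, r, phi, n_agg=2)
# V_agg[1] converges to 1/(1-0.9) = 10: the rewarding cluster's shared value
```

With only two aggregate values the approximation is coarse, but exactly as the abstract argues, the aggregate problem is much smaller than the original one.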
Funding: Supported by the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, through project number RI-44-0833.
Abstract: The field of biometric identification has seen significant advancements over the years, with research focusing on enhancing the accuracy and security of these systems. One of the key developments is the integration of deep learning techniques in biometric systems. However, despite these advancements, certain challenges persist. One of the most significant challenges is scalability over growing complexity. Traditional methods either require maintaining and securing a growing database, introducing serious security challenges, or rely on retraining the entire model when new data is introduced, a process that can be computationally expensive and complex. This challenge underscores the need for more efficient methods to scale securely. To this end, we introduce a novel approach that addresses these challenges by integrating multimodal biometrics, cancelable biometrics, and incremental learning techniques. This work is among the first attempts to seamlessly incorporate deep cancelable biometrics with dynamic architectural updates, applied incrementally to the deep learning model as new users are enrolled, achieving high performance with minimal catastrophic forgetting. By leveraging a One-Dimensional Convolutional Neural Network (1D-CNN) architecture combined with a hybrid incremental learning approach, our system achieves high recognition accuracy, averaging 98.98% over incrementing datasets, while ensuring user privacy through cancelable templates generated via a pre-trained CNN model and random projection. The approach demonstrates remarkable adaptability, utilizing the least intrusive biometric traits, such as facial features and fingerprints, ensuring not only robust performance but also long-term serviceability.
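The cancelable-template idea (random projection of a deep feature vector under a user-specific key) can be sketched as follows. The embedding, the key values, and the output size are illustrative assumptions; the real system derives its features from a pre-trained CNN rather than random numbers.

```python
import numpy as np

def cancelable_template(features, user_key, out_dim=16):
    """Revocable template: project the feature vector with a key-seeded random
    matrix, then binarise. Issuing a new key yields an unlinkable new template,
    and the original biometric cannot be recovered from the bits."""
    rng = np.random.default_rng(user_key)
    R = rng.normal(size=(out_dim, features.shape[0]))
    return (R @ features > 0).astype(np.uint8)

rng = np.random.default_rng(42)
embedding = rng.normal(size=64)        # stand-in for a CNN feature vector
t1 = cancelable_template(embedding, user_key=1234)
t2 = cancelable_template(embedding, user_key=9999)  # key revoked and re-issued
```

If a template database leaks, the user's key is revoked and a fresh template is enrolled from the same biometric, which is the property that makes the scheme "cancelable".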
Abstract: The nutritional management of patients with esophageal cancer (EC) presents significant complexities, with traditional approaches facing inherent limitations in data collection, real-time decision-making, and personalized care. This narrative review explores the transformative potential of artificial intelligence (AI) and machine learning (ML), particularly deep learning (DL) and reinforcement learning (RL), in revolutionizing nutritional support for this vulnerable patient population. DL has demonstrated remarkable capabilities in enhancing the accuracy and objectivity of nutritional assessment through precise, automated body composition analysis from medical imaging, offering valuable prognostic insights. Concurrently, RL enables the dynamic optimization of nutritional interventions, adapting them in real time to individual patient responses and paving the way for truly personalized care paradigms. Although AI/ML offers potential advantages in efficiency, precision, and personalization by integrating multidimensional data for superior clinical decision support, its widespread adoption is accompanied by critical challenges. These include safeguarding data privacy and security, mitigating algorithmic bias, ensuring transparency and accountability, and establishing rigorous clinical validation. Early evidence suggests the feasibility of applying AI/ML in nutritional risk stratification and workflow optimization, but high-quality prospective studies are needed to demonstrate a direct impact on clinical outcomes, including complications, readmissions, and survival. Overcoming these hurdles necessitates robust ethical governance, interdisciplinary collaboration, and continuous education. Ultimately, the strategic integration of AI/ML holds immense promise to profoundly improve patient outcomes, enhance quality of life, and optimize health care resource utilization in the nutritional management of esophageal cancer.
Funding: Supported by the National Key Research and Development Program of China (Grant Nos. 2018YFF0300104 and 2017YFC0209804), the National Natural Science Foundation of China (Grant No. 11421101), and the Beijing Academy of Artificial Intelligence (BAAI).
Abstract: In this paper, the model output machine learning (MOML) method is proposed for simulating weather consultation, which can improve the forecast results of numerical weather prediction (NWP). During weather consultation, forecasters obtain the final results by combining the observations with the NWP results and giving opinions based on their experience. Using a suitable post-processing algorithm to simulate weather consultation is therefore an interesting and important topic. MOML is a post-processing method based on machine learning, which matches NWP forecasts against observations through a regression function. By adopting different feature engineering of datasets and training periods, the observational and model data can be processed into the corresponding training set and test set. The MOML regression function uses an existing machine learning algorithm with the processed dataset to revise the output of NWP models combined with the observations, so as to improve the results of weather forecasts. To test the new approach for grid temperature forecasts, the 2-m surface air temperature in the Beijing area from the ECMWF model is used. MOML with different feature engineering is compared against the ECMWF model and a modified model output statistics (MOS) method. MOML shows better numerical performance than the ECMWF model and MOS, especially for winter. The results of MOML with a linear algorithm, a running training period, and a dataset using spatial interpolation ideas are better than the others when the forecast time is within a few days. The results of MOML with the Random Forest algorithm, a year-round training period, and a dataset containing surrounding-gridpoint information are better when the forecast time is longer.
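The core regression step can be illustrated with a toy stand-in: fit a linear map from NWP forecasts to observations over a running training window, then apply it to a new forecast. The synthetic bias and the helper name `linear_postprocess` are assumptions for illustration; MOML's actual feature engineering and algorithms (linear and Random Forest) are far richer than this.

```python
import numpy as np

def linear_postprocess(train_fcst, train_obs, test_fcst):
    """Fit obs ~ a * forecast + b on the training window, then correct the
    test forecast (a toy stand-in for MOML's regression function)."""
    X = np.column_stack([train_fcst, np.ones_like(train_fcst)])
    coef, *_ = np.linalg.lstsq(X, train_obs, rcond=None)
    return coef[0] * test_fcst + coef[1]

# Synthetic running window: the NWP runs 2 degrees too cold with a scale error.
truth = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
nwp = 0.9 * truth - 2.0
corrected = linear_postprocess(nwp, truth, test_fcst=0.9 * 11.0 - 2.0)
# corrected -> 11.0: the learned map undoes the systematic model bias
```

In practice the window slides forward day by day (the "running training period" in the abstract), so the correction tracks seasonally varying model bias.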
Funding: The research leading to these results was partly funded by the National Natural Science Foundation of China (Grant Nos. 61263036 and 61262055), the Gansu Science Fund for Distinguished Young Scholars (Grant No. 1210RJDA007), and the Natural Science Foundation of Gansu (Grant No. 1506RJYA126).
Abstract: This paper realizes a sign language-to-speech conversion system to address the communication barrier between healthy people and people with speech disorders. Thirty different static sign language gestures are first recognized by combining a support vector machine (SVM) with restricted Boltzmann machine (RBM) based regularization and feedback fine-tuning of the deep model. The text of the sign language is then obtained from the recognition results, and a context-dependent label is generated from the recognized text by a text analyzer. Meanwhile, a hidden Markov model (HMM) based Mandarin-Tibetan bilingual speech synthesis system is developed using speaker adaptive training. Mandarin or Tibetan speech is then naturally synthesized using the context-dependent labels generated from the recognized sign language. Tests show that the static sign language recognition rate of the designed system reaches 93.6%. Subjective evaluation demonstrates that the synthesized speech achieves a mean opinion score (MOS) of 4.0.
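The recognition stage above rests on an SVM classifier. As a hedged illustration of that stage only (not the paper's SVM+RBM pipeline), here is a toy linear SVM trained by hinge-loss subgradient descent on synthetic two-class data standing in for gesture features:

```python
import numpy as np

# Toy linear SVM (hinge loss, batch subgradient descent). The two Gaussian
# blobs are synthetic stand-ins for feature vectors of two gesture classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
lam, lr = 0.01, 0.1                      # L2 regularization and step size
for _ in range(200):
    margins = y * (X @ w + b)
    mask = margins < 1                   # samples violating the margin
    grad_w = lam * w - (y[mask, None] * X[mask]).sum(0) / len(X)
    grad_b = -y[mask].sum() / len(X)
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean(np.sign(X @ w + b) == y)   # training accuracy on the toy data
```

The real system classifies 30 gesture classes and adds RBM-based regularization and fine-tuning on top of the SVM, which this two-class sketch does not attempt.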
Funding: Supported by the National Key Research and Development Program (2016YFA0202500) and the National Natural Science Foundation of China (21825501).
Abstract: Machine learning is an emerging method for discovering new materials with specific characteristics. An unsupervised machine learning study is highlighted that discovers new potential lithium ionic conductors by screening and clustering lithium compounds, providing inspiration for the development of solid-state electrolytes and practical batteries.
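The screening idea above relies on unsupervised clustering of compound descriptors. A minimal k-means sketch of that idea follows; the two-dimensional "descriptor" vectors and the two-cluster setup are invented for illustration and bear no relation to real lithium-compound features.

```python
import numpy as np

# Toy k-means clustering as a sketch of unsupervised compound screening:
# compounds with similar descriptor vectors fall into the same cluster, so
# candidates resembling known conductors can be shortlisted. Synthetic data.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])

centers = np.array([X[0], X[-1]])        # one seed from each synthetic group
for _ in range(10):
    # assign each point to its nearest center, then recompute the centers
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == k].mean(0) for k in range(2)])
```

With the two well-separated synthetic groups, the centers converge to the two group means; the highlighted study works in a much higher-dimensional descriptor space.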
Abstract: CC (Cloud Computing) networks are distributed and dynamic, as signals appear, disappear, or lose significance. MLTs (Machine Learning Techniques) train on datasets that are sometimes inadequate in sample size for inferring information. DevMLOps (Development Machine Learning Operations), a dynamic strategy used for the automatic selection and tuning of MLTs, results in significant performance differences. However, the scheme has several disadvantages, including the need for continual training, more samples and training time for feature selection, and increased classification execution times. RFEs (Recursive Feature Eliminations) are computationally expensive, as they traverse each feature without considering the correlations between them. This problem can be overcome by the use of wrappers, which select better features by accounting for the test and train datasets. The aim of this paper is to use DevQLMLOps for automated tuning and selection based on orchestration and messaging between containers. The proposed AKFA (Adaptive Kernel Firefly Algorithm) selects features for CNM (Cloud Network Monitoring) operations. The AKFA methodology is demonstrated on the CNSD (Cloud Network Security Dataset) with satisfactory results in performance metrics such as precision, recall, F-measure, and accuracy.
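The abstract contrasts RFE-style feature elimination with wrapper methods. Here is a minimal sketch of the RFE idea only (not the paper's AKFA): repeatedly fit a linear model and drop the feature with the smallest coefficient magnitude. The five-feature regression dataset is synthetic.

```python
import numpy as np

# Minimal recursive-feature-elimination sketch: iteratively refit a
# least-squares model and eliminate the weakest feature. Only features
# 0 and 3 actually influence the synthetic target below.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(0, 0.1, 200)

features = list(range(5))                 # surviving original column indices
while len(features) > 2:
    coef, *_ = np.linalg.lstsq(X[:, features], y, rcond=None)
    features.pop(int(np.argmin(np.abs(coef))))   # drop weakest feature
```

Because each pass refits on the remaining columns, the informative features survive; the cost the abstract criticises comes from this repeated refitting, which wrappers and the proposed AKFA try to avoid.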
Funding: Supported by the Chongqing Medical Scientific Research Project (Joint Project of the Chongqing Health Commission and Science and Technology Bureau), No. 2023MSXM060.
Abstract: BACKGROUND The accurate prediction of lymph node metastasis (LNM) is crucial for managing locally advanced (T3/T4) colorectal cancer (CRC). However, both traditional histopathology and standard slide-level deep learning often fail to capture the sparse and diagnostically critical features of metastatic potential. AIM To develop and validate a case-level multiple-instance learning (MIL) framework that mimics a pathologist's comprehensive review and improves T3/T4 CRC LNM prediction. METHODS Whole-slide images of 130 patients with T3/T4 CRC were retrospectively collected. A case-level MIL framework utilising the CONCH v1.5 and UNI2-h deep learning models was trained on features from all haematoxylin and eosin-stained primary tumour slides for each patient. These pathological features were subsequently integrated with clinical data, and model performance was evaluated using the area under the curve (AUC). RESULTS The case-level framework demonstrated superior LNM prediction over slide-level training, with the CONCH v1.5 model achieving a mean AUC (±SD) of 0.899 ± 0.033 vs 0.814 ± 0.083, respectively. Integrating pathology features with clinical data further enhanced performance, yielding a top model with a mean AUC of 0.904 ± 0.047, in sharp contrast to a clinical-only model (mean AUC 0.584 ± 0.084). Crucially, a pathologist's review confirmed that the model-identified high-attention regions correspond to known high-risk histopathological features. CONCLUSION A case-level MIL framework provides a superior approach for predicting LNM in advanced CRC. This method shows promise for risk stratification and therapy decisions, pending further validation.
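The "high-attention regions" in the results point to attention-based pooling, a common MIL aggregation step. The sketch below illustrates that pooling step only; the instance features, projection matrices, and dimensions are random placeholders, not the trained CONCH/UNI2-h parameters.

```python
import numpy as np

# Hedged sketch of attention-based multiple-instance pooling: patch-level
# embeddings from all slides of one case are scored, softmax-normalized,
# and combined into a single case-level embedding. All values are random.
rng = np.random.default_rng(3)

instances = rng.normal(size=(12, 8))   # 12 patch embeddings, dimension 8
V = rng.normal(size=(8, 4))            # attention projection (placeholder)
w = rng.normal(size=4)                 # attention scoring vector (placeholder)

scores = np.tanh(instances @ V) @ w    # one attention score per patch
a = np.exp(scores - scores.max())
a /= a.sum()                           # softmax attention weights, sum to 1
case_embedding = a @ instances         # weighted case-level embedding
```

In a trained model the weights `a` indicate which patches drove the prediction, which is what allows the pathologist's review of high-attention regions described above; here they are meaningless random values.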