This study aimed to evaluate the correlation between nursing informatics (NI) competency and information literacy skills for evidence-based practice (EBP) among intensive care nurses. This cross-sectional study was conducted on 184 nurses working in intensive care units (ICUs). The study data were collected through a demographic information form, the Nursing Informatics Competency Assessment Tool (NICAT), and an information literacy skills for EBP questionnaire. The intensive care nurses scored at the competent level for total NI competency and at a low-to-moderate level for information literacy skills. They received a moderate score for the use of different information resources but low scores for information searching skills, different search features, and knowledge about search operators, and only 31.5% of the nurses selected the most appropriate statement. NI competency and its subscales had a significant direct bidirectional correlation with information literacy skills for EBP and its subscales (P < 0.05). Nurses require a high level of NI competency and information literacy for EBP to obtain up-to-date information and provide better care and decision-making. Health planners and policymakers should develop interventions to enhance NI competency and information literacy skills among nurses and motivate them to use EBP in clinical settings.
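The reported association between the two questionnaire totals is the kind of relationship a Pearson coefficient captures; a minimal sketch (the study's actual statistical software and test are not specified here, and the scores below are illustrative):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two paired score lists, e.g. per-nurse
    NI competency totals vs. information literacy totals (illustrative)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# perfectly linearly related scores give r = 1.0
r = pearson_r([60, 72, 85], [30, 36, 42.5])
```

A significance test (the reported P < 0.05) would additionally require the t-distribution, omitted here for brevity.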
Quantum information processing and communication (QIPC) is an area of science with two main goals. On one side, it tries to explore the (still not well known) potential of quantum phenomena for efficient and reliable information processing and for efficient, reliable, and secure communication. On the other side, it tries to use quantum information storing, processing, and transmitting paradigms, principles, laws, limitations, concepts, models, and tools to gain deeper insight into the phenomena of the quantum world and to find efficient ways to describe and handle/simulate various complex physical phenomena. To do that, QIPC has to use the concepts, models, theories, methods, and tools of both physics and informatics. The main role of physics here is to discover primitive physical phenomena that can be used to design and maintain complex and reliable information storing, processing, and transmitting systems. The main role of informatics is, on one side, to explore, from the information processing and communication point of view, the limitations and potential of prospective quantum information processing and communication technology, and to prepare information processing methods that could utilise that potential. On the other side, the main role of informatics is to guide and support, through theoretical tools and outcomes, physics-oriented research in QIPC. This paper describes and analyses a variety of ways in which informatics contributes, and should or could contribute, to the development of QIPC; see also Gruska (1999, 2006, 2008).
Due to recent developments in communications technology, cognitive computation has been used in smart healthcare techniques that can combine massive medical data, artificial intelligence, federated learning, bio-inspired computation, and the Internet of Medical Things. It has enabled knowledge sharing and scalability between patients, doctors, and clinics for effective treatment of patients. Speech-based respiratory disease detection and monitoring are crucial in this direction and have shown several promising results. Since a subject's speech can be remotely recorded and submitted for examination, it offers a quick, economical, dependable, and noninvasive prospective alternative detection approach. However, the two main requirements here are higher accuracy and lower computational complexity, and in many cases these two requirements conflict with each other. This paper takes up that problem and develops a low-computational-complexity neural network with higher accuracy. A cascaded perceptual functional link artificial neural network (PFLANN) is used to capture the nonlinearity in the data for better classification performance at low computational complexity. The proposed model was tested on multiple respiratory diseases, and the analysis of various performance metrics demonstrates the superior performance of the proposed model in terms of both accuracy and complexity.
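The abstract does not give the PFLANN's exact structure; a common ingredient of functional link networks is a trigonometric expansion of the inputs, which lets a single linear layer model nonlinearity cheaply. A generic sketch of that expansion (the cascading and perceptual stages of the paper's model are not reproduced):

```python
import math

def flann_expand(x, order=2):
    """Trigonometric functional-link expansion: each feature x_i is mapped
    to [x_i, sin(k*pi*x_i), cos(k*pi*x_i)] for k = 1..order, lifting the
    input into a nonlinear space without any hidden-layer training."""
    expanded = []
    for xi in x:
        expanded.append(xi)
        for k in range(1, order + 1):
            expanded.append(math.sin(k * math.pi * xi))
            expanded.append(math.cos(k * math.pi * xi))
    return expanded

# 2 input features, order 2 -> 2 * (1 + 2*2) = 10 expanded features
features = flann_expand([0.5, -0.25], order=2)
```

The expanded vector would then feed a simple trainable output layer, which is what keeps the computational complexity low.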
Bovine coronavirus (BCoV) poses a significant threat to the global cattle industry, causing both respiratory and gastrointestinal infections in cattle populations. This necessitates the development of efficacious vaccines. While several inactivated and live BCoV vaccines exist, they are predominantly limited to calves. The immunization of adult cattle is imperative for BCoV infection control, as it curtails viral transmission to calves and ameliorates the impact of enteric and respiratory ailments across all age groups within the herd. This study presents an in silico methodology for devising a multiepitope vaccine targeting BCoV. The spike glycoprotein (S) and nucleocapsid (N) proteins, which are integral elements of the BCoV structure, play pivotal roles in the viral infection cycle and immune response. We constructed a highly effective multiepitope vaccine candidate specifically designed to combat BCoV. Using immunoinformatics technology, B-cell and T-cell epitopes were predicted and joined with linkers and adjuvants to efficiently trigger both cellular and humoral immune responses in cattle. The in silico construct was characterized, and assessment of its physicochemical properties revealed a stable vaccine construct. After 3D modeling of the vaccine construct, molecular docking revealed a stable interaction with the bovine receptor bTLR4. Moreover, the viability of the vaccine's high expression and simple purification was demonstrated by codon optimization and in silico cloning into the pET28a(+) vector. By applying immunoinformatics approaches, researchers aim to better understand the immune response to bovine coronavirus, discover potential targets for intervention, and facilitate the development of diagnostic tools and vaccines to mitigate the impact of this virus on cattle health and the livestock industry. We anticipate that the design will be useful as a preventive treatment for BCoV disease in cattle, opening the door for further laboratory studies.
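The assembly step, joining predicted epitopes with linkers and an adjuvant into a single construct, is simple string concatenation. A sketch using linkers common in the multiepitope literature (EAAAK, AAY, GPGPG); the study's actual linker choices and epitope sequences are not stated here, and the sequences below are placeholders:

```python
def assemble_construct(adjuvant, t_epitopes, b_epitopes,
                       adj_linker="EAAAK", t_linker="AAY", b_linker="GPGPG"):
    """Concatenate adjuvant, T-cell epitopes, and B-cell epitopes into one
    multiepitope sequence. Linker choices here are conventional examples,
    not necessarily those used in the study."""
    t_block = t_linker.join(t_epitopes)
    b_block = b_linker.join(b_epitopes)
    return adj_linker.join([adjuvant, t_block, b_block])

construct = assemble_construct("MAKLST",
                               ["YLQPRTFLL", "KIADYNYKL"],   # placeholder T epitopes
                               ["NNATNVVIK"])                # placeholder B epitope
```

The resulting sequence is what would then be subjected to physicochemical profiling, 3D modeling, and docking.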
There are quintillions of data points on deoxyribonucleic acid (DNA) and proteins in publicly accessible data banks, and that number is expanding at an exponential rate. Many scientific fields, such as bioinformatics and drug discovery, rely on such data; nevertheless, gathering and extracting data from these resources is a tough undertaking. These data must go through several processes, including mining, data processing, analysis, and classification. This study proposes software that extracts data from big data repositories automatically, with the particular ability to repeat data extraction phases as many times as needed without human intervention. The software simulates the extraction of data from web-based (point-and-click) resources or graphical user interfaces that cannot be accessed using command-line tools. The software was evaluated by creating a novel database of 34 parameters for 1360 physicochemical properties of antimicrobial peptide (AMP) sequences (46,240 hits) from various MARVIN software panels, which can later be utilized to develop novel AMPs. Furthermore, for machine learning research, the program was validated by extracting 10,000 protein tertiary structures from the Protein Data Bank. As a result, data collection from the web will become faster and less expensive, with no need for manual data extraction. The software is a critical first step in preparing large datasets for subsequent stages of analysis, such as machine- and deep-learning applications.
The liver is a multifaceted organ responsible for many critical functions encompassing amino acid, carbohydrate, and lipid metabolism, all of which make a healthy liver essential for the human body. Contemporary imaging methodologies have remarkable diagnostic accuracy in discerning focal liver lesions; however, a comprehensive understanding of diffuse liver diseases is requisite for radiologists to accurately diagnose or predict the progression of such lesions within clinical contexts. Nonetheless, conventional radiological features, including morphology, size, margin, density, signal intensity, and echoes, limit their clinical utility. Radiomics, a widely used approach characterized by the extraction of copious image features from radiographic depictions, has considerable potential for addressing this limitation. It is worth noting that functional or molecular alterations occur significantly before the morphological shifts discernible by imaging modalities. Consequently, the explication of potential mechanisms by multiomics analyses (encompassing genomics, epigenomics, transcriptomics, proteomics, and metabolomics) is essential for investigating putative signal pathway regulation from a radiological viewpoint. In this review, we elaborate on the principal pathological categorizations of diffuse liver diseases, the evaluation of multiomics approaches to diffuse liver diseases, and the prospective value of predictive models. The overarching objective of this review is thus to scrutinize the interrelations between radiological features and bioinformatics and to consider the development of prediction models predicated on radiobioinformatics as integral components of clinical decision support systems for diffuse liver diseases.
Severe acute respiratory syndrome coronavirus (SARS-CoV) and SARS-CoV-2 are thought to be transmitted to humans via wild mammals, especially bats. However, evidence for direct bat-to-human transmission is lacking. The involvement of intermediate hosts is considered a reason for SARS-CoV-2 transmission to humans and the emergence of the outbreak. Large biodiversity is found in tropical territories such as Brazil. Along these lines, this study aimed to predict potential coronavirus hosts among Brazilian wild mammals based on angiotensin-converting enzyme 2 (ACE2) sequences using evolutionary bioinformatics. The cougar, maned wolf, and bush dog were predicted as potential coronavirus hosts. These indigenous carnivores are phylogenetically closer to the known SARS-CoV/SARS-CoV-2 hosts and presented low ACE2 divergence. A new coronavirus transmission chain was developed in which the white-tailed deer, a susceptible SARS-CoV-2 host, occupies the central position. The cougar plays an important role because its ACE2 shows low divergence from that of both deer and humans. The discovery of these potential coronavirus hosts will be useful for epidemiological surveillance and for discovering interventions that can help break the transmission chain.
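Ranking candidate hosts by ACE2 divergence can be illustrated with the simplest sequence-divergence measure, the p-distance over an alignment; the study's actual evolutionary analysis is richer than this, and the toy sequences below are not real ACE2 fragments:

```python
def p_distance(seq_a, seq_b):
    """Proportion of differing sites between two aligned sequences —
    a basic divergence measure of the kind used to compare candidate
    host ACE2 sequences with known-host ACE2 (illustrative sketch)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    diffs = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return diffs / len(seq_a)

# a candidate with fewer substitutions relative to a known host ranks higher
d = p_distance("ACDEFG", "ACDKFG")
```

Real pipelines correct such raw distances for multiple substitutions (e.g. via standard evolutionary models) before building phylogenies.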
This project illustrates a multidisciplinary, integrated planning approach by architects, engineers, scientists, and manufacturers to reduce the energy consumption of buildings. The CIIRC Complex, located on the main campus of the Czech Technical University in Prague, consists of two buildings: a newly constructed building and the adaptive reuse of an existing building. CIIRC, the Czech Institute of Informatics, Robotics and Cybernetics, is a contemporary teaching facility of a new generation, used by scientific research teams. The new building has ten above-ground floors: the bottom four floors hold laboratories, scientist modules, and classrooms; above them are offices, meeting rooms, and teaching and research modules for professors and students. The offices of the rector occupy the last two floors of the building. The top floor houses a congress-type auditorium, and the basement contains a fully automatic car park. The project introduces a series of architectural and technical features and innovations. Probably the most visible is the south-facing double-skin facade: a transparent double-layer membrane of ETFE (Ethylene-TetraFluoroEthylene) pneumatic cushions over a triple-glazed modular system assembly. Acting as a solar collector, with recuperation of hot air on the top floors, it saves up to 30% of energy consumption.
Bioinformatics analysis often requires the filtering of multiple datasets, based on frequency or frequency of occurrence, for decisions on retention or deletion. Existing tools for this purpose often present a challenge with complex installation and necessitate custom coding, thereby impeding efficient data processing. To address this issue, Filterx, a user-friendly command-line tool written in C that supports multi-condition filtering based on frequency or occurrence, was developed. This tool enables users to complete data processing tasks through a simple command line, greatly reducing both workload and data processing time. In addition, future development of this tool could facilitate its integration into various bioinformatics data analysis pipelines.
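Filterx itself is a C command-line tool; the frequency-based retain/delete decision it automates can be sketched in a few lines of Python (illustrative only, not the tool's actual logic or interface):

```python
from collections import Counter

def filter_by_frequency(records, key, min_count=2, keep=True):
    """Retain (keep=True) or drop (keep=False) records whose value for
    `key` occurs at least `min_count` times across the whole dataset —
    the kind of occurrence-based filtering Filterx performs."""
    counts = Counter(r[key] for r in records)
    if keep:
        return [r for r in records if counts[r[key]] >= min_count]
    return [r for r in records if counts[r[key]] < min_count]

rows = [{"gene": "A"}, {"gene": "A"}, {"gene": "B"}]
kept = filter_by_frequency(rows, "gene", min_count=2)
```

Multi-condition filtering would chain several such passes, each with its own key and threshold.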
Parkinson's disease (PD) is a common neurological disease in elderly people, and its morbidity and mortality are increasing with the advent of global ageing. The traditional paradigm of moving from small data to big data in biomedical research is shifting toward big data-based identification of small actionable alterations. To highlight the use of big data for precision PD medicine, we review PD big data and informatics for the translation of basic PD research to clinical applications. We emphasize some key findings on clinically actionable changes, such as susceptibility genetic variations for PD risk population screening, biomarkers for the diagnosis and stratification of PD patients, risk factors for PD, and lifestyles for the prevention of PD. The challenges associated with the collection, storage, and modelling of diverse big data for PD precision medicine and healthcare are also summarized. Finally, future perspectives on systems modelling and intelligent medicine for PD monitoring, diagnosis, treatment, and healthcare are discussed.
Long noncoding RNAs (lncRNAs) play important roles in human diseases, including vascular disease. Given the large number of lncRNAs, however, whether the majority of them are associated with vascular disease remains unknown. To this end, we present a genomic-location-based bioinformatics method to predict the lncRNAs associated with vascular disease. We applied the presented method to globally screen the human lncRNAs potentially involved in vascular disease. As a result, we predicted 3043 putative vascular disease-associated lncRNAs. To test the accuracy of the method, we selected 10 lncRNAs predicted to be implicated in the proliferation and migration of vascular smooth muscle cells (VSMCs) for further experimental validation. The results confirmed eight of the 10 lncRNAs (80%), suggesting that the presented method has reliable prediction performance. Finally, the presented bioinformatics method and the predicted vascular disease-associated lncRNAs together may help not only in better understanding the roles of lncRNAs in vascular disease but also in identifying novel molecules for the diagnosis and therapy of vascular disease.
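A genomic-location-based screen of this kind typically asks whether an lncRNA locus lies on the same chromosome as, and within some window of, a known disease gene. A minimal sketch; the window size, coordinates, and gene names below are hypothetical, and the paper's actual criteria are not reproduced here:

```python
def predict_associated(lncrnas, disease_genes, window=100_000):
    """Flag lncRNAs whose locus overlaps a known disease gene extended by
    `window` bp on the same chromosome — a toy genomic-location screen.
    lncrnas: (name, chrom, start, end); disease_genes: (chrom, start, end)."""
    hits = []
    for name, chrom, start, end in lncrnas:
        for g_chrom, g_start, g_end in disease_genes:
            if (chrom == g_chrom
                    and start <= g_end + window
                    and end >= g_start - window):
                hits.append(name)
                break  # one supporting gene is enough to flag the lncRNA
    return hits

lncs = [("lnc1", "chr1", 500, 900), ("lnc2", "chr2", 10, 20)]
genes = [("chr1", 1000, 2000)]          # hypothetical vascular-disease gene
candidates = predict_associated(lncs, genes, window=200)
```

Shrinking the window tightens the screen; widening it trades precision for recall.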
We report a significantly enhanced bioinformatics suite and database for proteomics research called the Yale Protein Expression Database (YPED), which is used by investigators at more than 300 institutions worldwide. YPED meets the data management, archival, and analysis needs of high-throughput mass spectrometry-based proteomics research, ranging from a single laboratory, to groups of laboratories within and beyond an institution, to the entire proteomics community. The current version is a significant improvement over the first version in that it contains new modules for liquid chromatography-tandem mass spectrometry (LC-MS/MS) database search results, label-based and label-free quantitative proteomic analysis, and several scoring outputs for phosphopeptide site localization. In addition, we have added both peptide and protein comparative analysis tools to enable pairwise analysis of distinct peptides/proteins in each sample and of overlapping peptides/proteins between all samples in multiple datasets. We have also implemented a targeted proteomics module for automated multiple reaction monitoring (MRM)/selective reaction monitoring (SRM) assay development. We have linked YPED's database search results and both label-based and label-free fold-change analyses to the Skyline Panorama repository for online spectra visualization. In addition, we have built enhanced functionality to curate peptide identifications into an MS/MS peptide spectral library for all of our protein database search identification results.
The explosive growth of the bioinformatics field has led to a large amount of data and software applications being publicly available as web resources. However, the lack of persistence of web references is a barrier to comprehensive shared access. We conducted a study of the current availability and other features of primary bioinformatics web resources (such as software tools and databases). The majority (95%) of the examined bioinformatics web resources were found to be running on UNIX/Linux operating systems, and the most widely used web server was found to be Apache (or Apache-related products). Of the 1,130 Uniform Resource Locators (URLs) examined, 91% were highly available (more than 90% of the time), while only 4% showed low accessibility (less than 50% of the time) during the survey. Furthermore, the most common URL failure modes are presented and analyzed.
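The availability classification used in the survey (highly available above 90% of probes, low accessibility below 50%) can be computed directly from a probe log; a minimal sketch with synthetic probe data (the survey's actual probing schedule and tooling are not described here):

```python
def availability_report(probe_log):
    """probe_log maps URL -> list of booleans (one per probe: reachable?).
    Returns per-URL availability fractions plus the high (>90%) and
    low (<50%) availability groups used in the survey's classification."""
    report = {url: sum(ok) / len(ok) for url, ok in probe_log.items()}
    high = [u for u, a in report.items() if a > 0.90]
    low = [u for u, a in report.items() if a < 0.50]
    return report, high, low

log = {"http://example.org/tool": [True] * 10,
       "http://example.org/db": [True] * 4 + [False] * 6}
report, high, low = availability_report(log)
```

A real monitor would record timestamps and HTTP status codes as well, enabling the failure-mode analysis the paper reports.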
The rapidly evolving cybersecurity threat landscape exposes a critical flaw in traditional educational programs: static curricula cannot adapt swiftly to novel attack vectors. This creates a significant gap between theoretical knowledge and the practical defensive capabilities needed in the field. To address this, we propose TeachSecure-CTI, a novel framework for adaptive cybersecurity curriculum generation that integrates real-time Cyber Threat Intelligence (CTI) with AI-driven personalization. Our framework employs a layered architecture featuring a CTI ingestion and clustering module, natural language processing for semantic concept extraction, and a reinforcement learning agent for adaptive content sequencing. By dynamically aligning learning materials with both the evolving threat environment and individual learner profiles, TeachSecure-CTI ensures content remains current, relevant, and tailored. A 12-week study with 150 students across three institutions demonstrated that the framework improves learning gains by 34%, significantly exceeding the 12%-21% reported in recent literature. The system achieved 84.8% personalization accuracy, 85.9% recognition accuracy for MITRE ATT&CK tactics, and a 31% faster competency development rate compared to static curricula. These findings have implications beyond academia, extending to workforce development, cyber range training, and certification programs. By bridging the gap between dynamic threats and static educational materials, TeachSecure-CTI offers an empirically validated, scalable solution for cultivating cybersecurity professionals capable of responding to modern threats.
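The reinforcement-learning sequencing idea can be sketched as a simple bandit: estimate the learning gain of each curriculum module, mostly exploit the best estimate, occasionally explore. This is a generic RL sketch, not TeachSecure-CTI's actual agent, and the module names and rewards are hypothetical:

```python
import random

def next_module(q_values, epsilon=0.1, rng=None):
    """Epsilon-greedy choice of the next curriculum module: with probability
    epsilon explore a random module, otherwise pick the module with the
    highest estimated learning gain."""
    rng = rng or random.Random()
    if rng.random() < epsilon:
        return rng.choice(list(q_values))
    return max(q_values, key=q_values.get)

def update_q(q_values, module, reward, alpha=0.5):
    """Incremental update of the estimated gain after observing a learner
    outcome (e.g. a normalized quiz score) for the delivered module."""
    q_values[module] += alpha * (reward - q_values[module])

q = {"phishing": 0.9, "ransomware": 0.1}
choice = next_module(q, epsilon=0.0)   # greedy -> "phishing"
update_q(q, "ransomware", 1.0)         # learner did well on ransomware content
```

In the full framework the reward signal would also incorporate CTI relevance, so trending threats raise the value of their modules.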
Activation pruning reduces neural network complexity by eliminating low-importance neuron activations, yet identifying the critical pruning threshold, beyond which accuracy rapidly deteriorates, remains computationally expensive and typically requires exhaustive search. We introduce a thermodynamics-inspired framework that treats activation distributions as energy-filtered physical systems and employs the free energy of activations as a principled evaluation metric. Phase-transition-like phenomena in the free-energy profile, such as extrema, inflection points, and curvature changes, yield reliable estimates of the critical pruning threshold, providing a theoretically grounded means of predicting sharp accuracy degradation. To further enhance efficiency, we propose a renormalized free energy technique that approximates the full-evaluation free energy using only the activation distribution of the unpruned network. This eliminates repeated forward passes, dramatically reducing computational overhead and achieving speedups of up to 550x for MLPs. Extensive experiments across diverse vision architectures (MLP, CNN, ResNet, MobileNet, Vision Transformer) and text models (LSTM, BERT, ELECTRA, T5, GPT-2) on multiple datasets validate the generality, robustness, and computational efficiency of our approach. Overall, this work establishes a theoretically grounded and practically effective framework for activation pruning, bridging the gap between analytical understanding and efficient deployment of sparse neural networks.
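The paper's exact energy definition is not reproduced in the abstract; a standard statistical-physics form of free energy over activation magnitudes illustrates the idea of profiling it across pruning thresholds (everything below is an assumption-laden sketch, not the paper's formula):

```python
import math

def free_energy(activations, temperature=1.0):
    """F = -T * log(sum_i exp(-|a_i| / T)): free energy of an activation
    set treated as an energy-filtered system, with |a_i| as the energy.
    Assumes a non-empty activation list."""
    T = temperature
    z = sum(math.exp(-abs(a) / T) for a in activations)
    return -T * math.log(z)

def profile_over_thresholds(activations, thresholds, temperature=1.0):
    """Free-energy profile as low-magnitude activations are pruned away;
    abrupt features in this curve would signal the critical threshold."""
    return [free_energy([a for a in activations if abs(a) >= t], temperature)
            for t in thresholds]

profile = profile_over_thresholds([0.1, 0.5, 1.0], [0.0, 0.3])
```

The renormalized variant in the paper avoids recomputing this on pruned networks by working only with the unpruned distribution.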
This work addresses optimality aspects of composite laminates whose layers have different orientations. Regression neural networks (RNNs) can model the mechanical behavior of these laminates, specifically the stress-strain relationship. If this model has strong generalization ability, it can be coupled with a metaheuristic algorithm (the PSO algorithm is used in this article) to address an optimization problem (OP) related to the orientations of composite laminates. To solve OPs, this paper proposes an optimization framework (OFW) that connects two components: the optimal-solution search mechanism and the RNN model. The OFW has two modules: the search mechanism (Adaptive Hybrid Topology PSO, AHTPSO) and the Prediction and Computation Module (PCM). The PCM undertakes all activities concerning the OP at hand: the stress-strain model, constraint checking, and computation of the objective function. Two case studies on the layer orientations of laminated specimens are conducted to validate the proposed framework. The specimens belong to "off-axis oriented specimens" and are the subjects of two OPs. The algorithms for AHTPSO and for the two PCMs (one for each problem) are proposed and implemented as MATLAB scripts and functions. Simulations are carried out for different initial conditions. The solutions demonstrated that the OFW is effective and has highly acceptable computational complexity. The limitation of the OFW is the generalization ability of the RNN model, or of any other regression model. To harness the RNN model efficiently, it must have very good generalization power. If this condition is met, the OFW can be integrated into any design process to make optimal choices of layer orientations.
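The search mechanism builds on standard PSO; one velocity/position update of the basic algorithm can be sketched as follows (the paper's AHTPSO adds an adaptive hybrid topology on top of this step, and its MATLAB implementation is not reproduced here):

```python
import random

def pso_step(positions, velocities, pbest, gbest,
             w=0.7, c1=1.5, c2=1.5, rng=None):
    """One standard PSO update per particle dimension:
    v' = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x' = x + v'.
    positions/velocities/pbest are per-particle scalars here for brevity;
    gbest is the best position found by the swarm so far."""
    rng = rng or random.Random(0)
    new_pos, new_vel = [], []
    for x, v, pb in zip(positions, velocities, pbest):
        r1, r2 = rng.random(), rng.random()
        v_new = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gbest - x)
        new_vel.append(v_new)
        new_pos.append(x + v_new)
    return new_pos, new_vel

# when a particle sits at both its personal and the global best,
# only the inertia term acts: v' = 0.7 * 2.0 = 1.4, x' = 1.0 + 1.4
pos, vel = pso_step([1.0], [2.0], [1.0], 1.0)
```

In the OFW, the fitness driving `pbest`/`gbest` would come from the PCM, i.e. the RNN stress-strain prediction plus constraint checks.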
Alzheimer's disease (AD) is the most common form of dementia. In addition to the lack of effective treatments, there are limitations in diagnostic capabilities. The complexity of AD itself, together with the variety of other diseases often observed in a patient's history alongside their AD diagnosis, makes deciphering the molecular mechanisms that underlie AD even more important. Large datasets of single-cell RNA sequencing, single-nucleus RNA sequencing (snRNA-seq), and spatial transcriptomics (ST) have become essential in guiding and supporting new investigations into the cellular and regional susceptibility of AD. However, as unique technologies, software, and larger databases emerge, a lack of integration of these data can contribute to ineffective use of valuable knowledge. Importantly, there was no specialized database concentrating on ST in AD that offered comprehensive differential analyses under various conditions, such as sex-specific, region-specific, and AD-versus-control comparisons, until the Single-cell and Spatial RNA-seq databasE for Alzheimer's Disease (ssREAD) database (Wang et al., 2024) was introduced to meet the scientific community's growing demand for comprehensive, integrated, and accessible data analysis.
Customer churn is the rate at which customers discontinue doing business with a company over a given time period. Monitoring high churn rates is essential for businesses, as they often indicate underlying issues with services, products, or customer experience, resulting in considerable income loss. Prediction of customer churn is a crucial task aimed at retaining customers and maintaining revenue growth. Traditional machine learning (ML) models often struggle to capture complex temporal dependencies in client behavior data. To address this, an optimized deep learning (DL) approach using a Regularized Bidirectional Long Short-Term Memory (RBiLSTM) model is proposed to mitigate overfitting and improve generalization error. The model integrates dropout, L2 regularization, and early stopping to enhance predictive accuracy while preventing over-reliance on specific patterns. Moreover, this study investigates the effect of optimization techniques on boosting the training efficiency of the developed model. Experimental results on a recent public customer churn dataset demonstrate that the trained model outperforms traditional ML models and other DL models, such as Long Short-Term Memory (LSTM) and Deep Neural Network (DNN), in churn prediction performance and stability. The proposed approach achieves 96.1% accuracy, compared with LSTM and DNN, which attain 94.5% and 94.1% accuracy, respectively. These results confirm that the proposed approach can serve as a valuable tool for businesses to proactively identify at-risk customers and implement targeted retention strategies.
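Of the three regularizers the model combines, early stopping is framework-agnostic and easy to sketch: halt training once the validation loss has failed to improve for a set number of epochs. A minimal, library-free illustration (the study's actual training loop, patience, and thresholds are not specified in the abstract):

```python
def early_stopping(val_losses, patience=3, min_delta=0.0):
    """Return the epoch at which training should stop: the first epoch at
    which the best validation loss has gone `patience` consecutive epochs
    without improving by more than `min_delta`. If that never happens,
    return the last epoch (training ran to completion)."""
    best = float("inf")
    waited = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best, waited = loss, 0   # improvement: reset the counter
        else:
            waited += 1
            if waited >= patience:
                return epoch
    return len(val_losses) - 1

# loss bottoms out at epoch 2; three non-improving epochs trigger the stop
stop = early_stopping([1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.74], patience=3)
```

Dropout and L2 regularization act inside the model instead, randomly zeroing activations and penalizing large weights during training.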
Objective expertise evaluation of individuals, as a prerequisite stage for team formation, has been a long-term desideratum in large software development companies. With rapid advancements in machine learning methods, and based on reliable existing data stored in project management tools' datasets, automating this evaluation process becomes a natural step forward. In this context, our approach focuses on quantifying software developer expertise by using metadata from task-tracking systems. For this, we mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge of the software industry. Afterward, we automatically classify the zones of expertise associated with each task a developer has worked on, using Bidirectional Encoder Representations from Transformers (BERT)-like transformers to handle the unique characteristics of project tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results.
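The two expertise categories can be illustrated as aggregations over classified tasks: per-(developer, technology) sums for technology-specific expertise, per-developer sums for general expertise. A toy sketch in the spirit of that formalization; the actual scoring functions, weights, and the BERT classification step are not reproduced here, and the names below are hypothetical:

```python
from collections import defaultdict

def expertise_scores(tasks):
    """tasks: list of (developer, technology, weight) tuples, where the
    technology label would come from the BERT-like task classifier and
    the weight from task metadata (e.g. complexity or effort).
    Returns (technology-specific scores, general scores)."""
    specific = defaultdict(float)
    general = defaultdict(float)
    for dev, tech, weight in tasks:
        specific[(dev, tech)] += weight   # expertise in that technology
        general[dev] += weight            # overall industry expertise
    return dict(specific), dict(general)

tasks = [("ana", "java", 2.0), ("ana", "java", 1.0),
         ("ana", "sql", 1.0), ("bob", "sql", 3.0)]
specific, general = expertise_scores(tasks)
```

Team formation could then query these scores per required technology while using the general score as a tiebreaker.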
Funding: Support of grant MSM00211622419 is acknowledged.
Abstract: Quantum information processing and communication (QIPC) is an area of science with two main goals. On one side, it tries to explore the (still not well known) potential of quantum phenomena for (efficient and reliable) information processing and (efficient, reliable, and secure) communication. On the other side, it tries to use quantum information storing, processing, and transmitting paradigms, principles, laws, limitations, concepts, models, and tools to get deeper insights into the phenomena of the quantum world and to find efficient ways to describe and handle/simulate various complex physical phenomena. To do that, QIPC has to use the concepts, models, theories, methods, and tools of both physics and informatics. The main role of physics here is to discover primitive physical phenomena that can be used to design and maintain complex and reliable information storing, processing, and transmitting systems. The main role of informatics is, on one side, to explore, from the information processing and communication point of view, the limitations and potential of prospective quantum information processing and communication technology, and to prepare information processing methods that could utilise that potential. On the other side, the main role of informatics is to guide and support, through theoretical tools and outcomes, physics-oriented research in QIPC. This paper describes and analyses a variety of ways in which informatics contributes, and should/could contribute, to the development of QIPC; see also Gruska (1999, 2006, 2008).
Abstract: Due to recent developments in communications technology, cognitive computation has been used in smart healthcare techniques that combine massive medical data, artificial intelligence, federated learning, bio-inspired computation, and the Internet of Medical Things. It has helped in knowledge sharing and scaling between patients, doctors, and clinics for effective treatment of patients. Speech-based respiratory disease detection and monitoring are crucial in this direction and have shown several promising results. Since the subject's speech can be remotely recorded and submitted for further examination, it offers a quick, economical, dependable, and noninvasive prospective alternative detection approach. However, the two main requirements here are higher accuracy and lower computational complexity, and in many cases these two requirements do not correlate with each other. This problem is taken up in this paper to develop a low-computational-complexity neural network with higher accuracy. A cascaded perceptual functional link artificial neural network (PFLANN) is used to capture the nonlinearity in the data for better classification performance with low computational complexity. The proposed model is tested on multiple respiratory diseases, and the analysis of various performance metrics demonstrates the superior performance of the proposed model in terms of both accuracy and complexity.
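The low complexity of functional-link networks comes from replacing hidden layers with a fixed nonlinear expansion of the input followed by a single trainable linear layer. A minimal sketch of that idea, using a trigonometric expansion (the expansion basis and order here are illustrative; the paper's cascaded perceptual variant adds further stages):

```python
import math

def functional_expansion(x, order=2):
    """Trigonometric functional-link expansion of one input sample:
    each feature xi becomes [xi, sin(k*pi*xi), cos(k*pi*xi)] for k=1..order."""
    feats = []
    for xi in x:
        feats.append(xi)
        for k in range(1, order + 1):
            feats.append(math.sin(k * math.pi * xi))
            feats.append(math.cos(k * math.pi * xi))
    return feats

def flann_output(x, weights, bias=0.0):
    """A single linear layer over the expanded features -- no hidden
    layers, which is what keeps the computational cost low."""
    z = functional_expansion(x)
    return sum(w * f for w, f in zip(weights, z)) + bias
```

Only `weights` and `bias` are trained, so inference is one dot product over the expanded feature vector.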
Abstract: Bovine coronavirus (BCoV) poses a significant threat to the global cattle industry, causing both respiratory and gastrointestinal infections in cattle populations. This necessitates the development of efficacious vaccines. While several inactivated and live BCoV vaccines exist, they are predominantly limited to calves. The immunization of adult cattle is imperative for BCoV infection control, as it curtails viral transmission to calves and ameliorates the impact of enteric and respiratory ailments across all age groups within the herd. This study presents an in silico methodology for devising a multiepitope vaccine targeting BCoV. The spike glycoprotein (S) and nucleocapsid (N) proteins, which are integral elements of the BCoV structure, play pivotal roles in the viral infection cycle and immune response. We constructed a highly effective multiepitope vaccine candidate specifically designed to combat BCoV. Using immunoinformatics technology, B-cell and T-cell epitopes were predicted and linked together with linkers and adjuvants to efficiently trigger both cellular and humoral immune responses in cattle. The in silico construct was characterized, and assessment of its physicochemical properties revealed a stable vaccine construct. After 3D modeling of the vaccine construct, molecular docking revealed a stable interaction with the bovine receptor bTLR4. Moreover, the viability of high expression and simple purification of the vaccine was demonstrated by codon optimization and in silico cloning into the pET28a(+) vector. By applying immunoinformatics approaches, researchers aim to better understand the immune response to bovine coronavirus, discover potential targets for intervention, and facilitate the development of diagnostic tools and vaccines to mitigate the impact of this virus on cattle health and the livestock industry. We anticipate that the design will be useful as a preventive treatment for BCoV disease in cattle, opening the door for further laboratory studies.
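Assembling such a construct is, mechanically, a concatenation of an adjuvant and the predicted epitopes joined by short linker motifs (GPGPG and AAY are common choices in multiepitope designs). A sketch, with entirely hypothetical epitope sequences; real candidates come from B-cell/T-cell epitope predictors:

```python
def build_construct(adjuvant, b_epitopes, t_epitopes,
                    b_linker="GPGPG", t_linker="AAY"):
    """Concatenate an adjuvant and predicted epitopes with linkers,
    as in typical in silico multiepitope vaccine designs."""
    b_part = b_linker.join(b_epitopes)   # B-cell epitopes joined by GPGPG
    t_part = t_linker.join(t_epitopes)   # T-cell epitopes joined by AAY
    return adjuvant + b_linker + b_part + t_linker + t_part

# Hypothetical 6-mer epitopes, for illustration only
construct = build_construct("EAAAK",
                            ["YVDNSS", "TPKLGG"],
                            ["LLFNKV", "GVYFAS"])
```

The resulting string is what then goes to physicochemical profiling, 3D modeling, and docking.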
Funding: This work was funded by the Graduate Scientific Research School at Yarmouk University under Grant Number 82/2020.
Abstract: There are quintillions of data points on deoxyribonucleic acid (DNA) and proteins in publicly accessible data banks, and that number is expanding at an exponential rate. Many scientific fields, such as bioinformatics and drug discovery, rely on such data; nevertheless, gathering and extracting data from these resources is a tough undertaking. These data should go through several processes, including mining, data processing, analysis, and classification. This study proposes software that extracts data from big data repositories automatically, with the particular ability to repeat data extraction phases as many times as needed without human intervention. This software simulates the extraction of data from web-based (point-and-click) resources or graphical user interfaces that cannot be accessed using command-line tools. The software was evaluated by creating a novel database of 34 parameters for 1,360 physicochemical properties of antimicrobial peptide (AMP) sequences (46,240 hits) from various MARVIN software panels, which can later be utilized to develop novel AMPs. Furthermore, for machine learning research, the program was validated by extracting 10,000 protein tertiary structures from the Protein Data Bank. As a result, data collection from the web will become faster and less expensive, with no need for manual data extraction. The software is critical as a first step in preparing large datasets for subsequent stages of analysis, such as those using machine- and deep-learning applications.
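The Protein Data Bank validation step can be approximated with a small unattended download loop. The RCSB file-service URL pattern below is real, but the loop itself is only a sketch of the repeat-without-intervention idea, not the described software:

```python
import urllib.request

def pdb_urls(ids):
    """Build download URLs for PDB entries via the RCSB file service."""
    return [f"https://files.rcsb.org/download/{pid.upper()}.pdb" for pid in ids]

def fetch_all(ids, dest="."):
    """Unattended retrieval loop -- the kind of step the described
    software repeats for thousands of structures without a human."""
    for pid, url in zip(ids, pdb_urls(ids)):
        with urllib.request.urlopen(url) as r, \
                open(f"{dest}/{pid}.pdb", "wb") as f:
            f.write(r.read())
```

A production version would add retries, rate limiting, and checksumming; the point here is only that each extraction phase is a pure function of the ID list, so it can be rerun arbitrarily often.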
Funding: National Natural Science Foundation of China, Grant/Award Number: 81960338; Science and Technology Projects of Guizhou Province, Grant/Award Numbers: Qiankehejichu-ZK[2022]422, Qiankehejichu-ZK[2023]353.
Abstract: The liver is a multifaceted organ responsible for many critical functions encompassing amino acid, carbohydrate, and lipid metabolism, all of which make a healthy liver essential for the human body. Contemporary imaging methodologies have remarkable diagnostic accuracy in discerning focal liver lesions; however, a comprehensive understanding of diffuse liver diseases is a requisite for radiologists to accurately diagnose or predict the progression of such lesions within clinical contexts. Nonetheless, conventional radiological attributes, including morphology, size, margin, density, signal intensity, and echoes, are of limited clinical utility. Radiomics, a widely used approach characterized by the extraction of copious image features from radiographic depictions, has considerable potential for addressing this limitation. It is worth noting that functional or molecular alterations occur significantly before the morphological shifts discernible by imaging modalities. Consequently, the explication of potential mechanisms by multiomics analyses (encompassing genomics, epigenomics, transcriptomics, proteomics, and metabolomics) is essential for investigating putative signal pathway regulation from a radiological viewpoint. In this review, we elaborate on the principal pathological categorizations of diffuse liver diseases, the evaluation of multiomics approaches pertaining to diffuse liver diseases, and the prospective value of predictive models. Accordingly, the overarching objective of this review is to scrutinize the interrelations between radiological features and bioinformatics, and to consider the development of prediction models predicated on radiobioinformatics as integral components of clinical decision support systems for diffuse liver diseases.
Abstract: Severe acute respiratory syndrome coronavirus (SARS-CoV) and SARS-CoV-2 are thought to have been transmitted to humans via wild mammals, especially bats. However, evidence for direct bat-to-human transmission is lacking. The involvement of intermediate hosts is considered a reason for SARS-CoV-2 transmission to humans and the emergence of the outbreak. Large biodiversity is found in tropical territories such as Brazil. Along these lines, this study aimed to predict potential coronavirus hosts among Brazilian wild mammals based on angiotensin-converting enzyme 2 (ACE2) sequences using evolutionary bioinformatics. The cougar, maned wolf, and bush dog were predicted as potential coronavirus hosts. These indigenous carnivores are phylogenetically closer to the known SARS-CoV/SARS-CoV-2 hosts and presented low ACE2 divergence. A new coronavirus transmission chain was developed in which white-tailed deer, a susceptible SARS-CoV-2 host, occupy the central position. The cougar plays an important role because of its low ACE2 divergence from both deer and humans. The discovery of these potential coronavirus hosts will be useful for epidemiological surveillance and for the discovery of interventions that can help break the transmission chain.
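A basic divergence measure between aligned protein sequences, of the kind that underlies ACE2 comparisons like the one described, is the p-distance: the fraction of positions at which the residues differ. A minimal sketch (the 4-residue toy sequences in the test are illustrative, not real ACE2 fragments):

```python
def p_distance(seq_a, seq_b):
    """Proportion of differing residues between two aligned sequences of
    equal length -- a simple sequence-divergence measure."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    diffs = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return diffs / len(seq_a)
```

Evolutionary analyses typically layer substitution models and phylogenetic inference on top of this, but low p-distance to a known host's ACE2 is the intuition behind "low ACE2 divergence".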
Abstract: A multidisciplinary, integrated planning approach by architects, engineers, scientists, and manufacturers was used to reduce the energy consumption of buildings. The CIIRC Complex, located on the main campus of the Czech Technical University in Prague, consists of two buildings: a newly constructed building and the adaptive reuse of an existing building. CIIRC, the Czech Institute of Informatics, Robotics and Cybernetics, is a new-generation teaching facility used by scientific research teams. The new building has ten above-ground floors: the bottom four floors hold laboratories, scientist modules, and classrooms; above them are offices, meeting rooms, and teaching and research modules for professors and students. The offices of the rector occupy the last two floors, a congress-type auditorium sits on the top floor, and a fully automatic car park is in the basement. The project introduces a series of architectural and technical features and innovations. Probably the most visible is the south-facing double-skin facade: a transparent double-layer membrane of ETFE (ethylene tetrafluoroethylene) cushions over a triple-glazed modular system assembly. Acting as a solar collector and recuperating hot air on the top floors, it saves up to 30% of energy consumption.
Funding: Supported by grants CNTC-110202101039 (JY-16) and YNTC-2022530000241008.
Abstract: Bioinformatics analysis often requires the filtering of multiple datasets, based on frequency or frequency of occurrence, for decisions on retention or deletion. Existing tools for this purpose often involve complex installation and necessitate custom coding, thereby impeding efficient data processing. To address this issue, Filterx, a user-friendly command-line tool written in C that supports multi-condition filtering based on frequency or occurrence, was developed. This tool enables users to complete data processing tasks through a simple command line, greatly reducing both workload and data processing time. In addition, future development of this tool could facilitate its integration into various bioinformatics data analysis pipelines.
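The core operation (keep or drop rows according to how often a value occurs across the dataset) can be sketched in a few lines. Filterx itself is a C command-line tool; the Python below, with illustrative field names, only demonstrates the frequency-filtering idea:

```python
from collections import Counter

def filter_by_frequency(rows, column, min_count=2, keep=True):
    """Keep (keep=True) or drop (keep=False) rows whose value in
    `column` occurs at least `min_count` times across the dataset --
    the retention/deletion decision described in the abstract."""
    counts = Counter(row[column] for row in rows)

    def passes(row):
        return counts[row[column]] >= min_count

    return [r for r in rows if passes(r) == keep]
```

Multi-condition filtering is then just composing several such passes, each with its own column and threshold.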
Funding: Supported by the National Key R&D Program of China (Grant No. 2016YFC1306605) and the National Natural Science Foundation of China (Grant Nos. 31670851, 31470821, and 91530320).
Abstract: Parkinson's disease (PD) is a common neurological disease in elderly people, and its morbidity and mortality are increasing with global ageing. The traditional paradigm of moving from small data to big data in biomedical research is shifting toward big data-based identification of small actionable alterations. To highlight the use of big data for precision PD medicine, we review PD big data and informatics for the translation of basic PD research into clinical applications. We emphasize some key findings on clinically actionable changes, such as susceptibility genetic variations for PD risk population screening, biomarkers for the diagnosis and stratification of PD patients, risk factors for PD, and lifestyles for the prevention of PD. The challenges associated with the collection, storage, and modelling of diverse big data for PD precision medicine and healthcare are also summarized. Finally, future perspectives on systems modelling and intelligent medicine for PD monitoring, diagnosis, treatment, and healthcare are discussed.
Funding: Supported by the National Natural Science Foundation of China (91339106) and the National High Technology Research and Development Program of China (2014AA021102).
Abstract: Long noncoding RNAs (lncRNAs) play important roles in human diseases, including vascular disease. Given the large number of lncRNAs, however, whether the majority of them are associated with vascular disease remains unknown. For this purpose, we present a genomic-location-based bioinformatics method to predict the lncRNAs associated with vascular disease. We applied the presented method to globally screen the human lncRNAs potentially involved in vascular disease. As a result, we predicted 3,043 putative vascular disease-associated lncRNAs. To test the accuracy of the method, we selected 10 lncRNAs predicted to be implicated in the proliferation and migration of vascular smooth muscle cells (VSMCs) for further experimental validation. The results confirmed eight of the 10 lncRNAs (80%), suggesting that the presented method has reliable prediction performance. Finally, the presented bioinformatics method and the predicted vascular disease-associated lncRNAs together may help not only in better understanding the roles of lncRNAs in vascular disease but also in identifying novel molecules for the diagnosis and therapy of vascular disease.
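A genomic-location-based screen of this kind rests on interval proximity: an lncRNA becomes a candidate when it sits on the same chromosome within some window of a known disease gene. A minimal sketch; the window size, coordinate tuples, and the toy loci in the test are illustrative assumptions, not the paper's parameters:

```python
def nearby(lncrna, disease_genes, window=100_000):
    """Flag an lncRNA (chrom, start, end) as a candidate if it lies
    within `window` bp of any known disease gene on the same
    chromosome -- the genomic-location heuristic in a nutshell."""
    chrom, start, end = lncrna
    for g_chrom, g_start, g_end in disease_genes:
        same_chrom = chrom == g_chrom
        # Intervals overlap once each is extended by the window
        close = start <= g_end + window and g_start <= end + window
        if same_chrom and close:
            return True
    return False
```

A genome-wide screen then just maps this predicate over all annotated lncRNA loci.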
Funding: Supported in part by the National Institutes of Health of the United States (Grant Nos. UL1 RR024139 to the Yale Clinical and Translational Science Award, 1S10OD018034-01 for a 6500 QTrap mass spectrometer for Yale University, 1S10RR026707-01 for a 5500 QTrap mass spectrometer for Yale University, P30DA018343 to the Yale/NIDA Neuroproteomics Center, and NIDDK-K01DK089006 awarded to JR).
Abstract: We report a significantly enhanced bioinformatics suite and database for proteomics research called the Yale Protein Expression Database (YPED), which is used by investigators at more than 300 institutions worldwide. YPED meets the data management, archival, and analysis needs of high-throughput mass spectrometry-based proteomics research, ranging from a single laboratory, to a group of laboratories within and beyond an institution, to the entire proteomics community. The current version is a significant improvement over the first version in that it contains new modules for liquid chromatography-tandem mass spectrometry (LC-MS/MS) database search results, label-based and label-free quantitative proteomic analysis, and several scoring outputs for phosphopeptide site localization. In addition, we have added both peptide and protein comparative analysis tools to enable pairwise analysis of distinct peptides/proteins in each sample and of overlapping peptides/proteins between all samples in multiple datasets. We have also implemented a targeted proteomics module for automated multiple reaction monitoring (MRM)/selected reaction monitoring (SRM) assay development. We have linked YPED's database search results and both label-based and label-free fold-change analysis to the Skyline Panorama repository for online spectra visualization. In addition, we have built enhanced functionality to curate peptide identifications into an MS/MS peptide spectral library for all of our protein database search identification results.
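The pairwise comparative analysis described (distinct vs. overlapping peptides between samples) reduces to set operations over identification lists. A minimal sketch with hypothetical sample names and peptide IDs (not YPED's actual schema or API):

```python
def compare_samples(samples):
    """Pairwise overlap of identified peptides between samples, given a
    mapping {sample_name: set_of_peptide_ids}. Returns, per pair, the
    shared count and the count unique to each side."""
    names = list(samples)
    out = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = samples[a] & samples[b]
            out[(a, b)] = {
                "shared": len(shared),
                "only_" + a: len(samples[a] - shared),
                "only_" + b: len(samples[b] - shared),
            }
    return out
```

Scaling this to "all samples in multiple datasets" is just running the same pairwise pass over the union of datasets.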
Abstract: The explosive growth of the bioinformatics field has led to a large amount of data and software applications publicly available as web resources. However, the lack of persistence of web references is a barrier to comprehensive shared access. We conducted a study of the current availability and other features of primary bioinformatics web resources (such as software tools and databases). The majority (95%) of the examined bioinformatics web resources were found to run on UNIX/Linux operating systems, and the most widely used web server was Apache (or Apache-related products). Of the 1,130 Uniform Resource Locators (URLs) examined overall, 91% were highly available (more than 90% of the time), while only 4% showed low accessibility (less than 50% of the time) during the survey. Furthermore, the most common URL failure modes are presented and analyzed.
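An availability survey of this kind boils down to repeatedly probing each URL and recording the fraction of successful probes. A minimal sketch (the probe method and the 0.9 "highly available" cutoff follow the abstract; everything else is an illustrative assumption):

```python
import urllib.request

def check(url, timeout=10):
    """Return True if the resource answers a HEAD request with HTTP < 400."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as r:
            return r.status < 400
    except Exception:
        return False

def availability(history):
    """Fraction of successful probes for one URL; >= 0.9 corresponds to
    the survey's 'highly available' band, < 0.5 to 'low accessibility'."""
    return sum(history) / len(history)
```

Running `check` on a schedule for each of the 1,130 URLs and feeding the boolean histories to `availability` reproduces the survey's banding.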
Abstract: The rapidly evolving cybersecurity threat landscape exposes a critical flaw in traditional educational programs: static curricula cannot adapt swiftly to novel attack vectors. This creates a significant gap between theoretical knowledge and the practical defensive capabilities needed in the field. To address this, we propose TeachSecure-CTI, a novel framework for adaptive cybersecurity curriculum generation that integrates real-time Cyber Threat Intelligence (CTI) with AI-driven personalization. Our framework employs a layered architecture featuring a CTI ingestion and clustering module, natural language processing for semantic concept extraction, and a reinforcement learning agent for adaptive content sequencing. By dynamically aligning learning materials with both the evolving threat environment and individual learner profiles, TeachSecure-CTI ensures content remains current, relevant, and tailored. A 12-week study with 150 students across three institutions demonstrated that the framework improves learning gains by 34%, significantly exceeding the 12%-21% reported in recent literature. The system achieved 84.8% personalization accuracy, 85.9% recognition accuracy for MITRE ATT&CK tactics, and a 31% faster competency development rate compared to static curricula. These findings have implications beyond academia, extending to workforce development, cyber range training, and certification programs. By bridging the gap between dynamic threats and static educational materials, TeachSecure-CTI offers an empirically validated, scalable solution for cultivating cybersecurity professionals capable of responding to modern threats.
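The simplest reinforcement-learning scheme for adaptive content sequencing is a bandit-style epsilon-greedy loop: mostly pick the topic with the highest estimated learning gain, sometimes explore. The sketch below is a generic illustration of that pattern, not the paper's agent; the topic names are hypothetical:

```python
import random

def pick_topic(q_values, epsilon=0.1, rng=None):
    """Epsilon-greedy choice over curriculum topics: with probability
    epsilon explore a random topic, otherwise exploit the topic with
    the highest estimated learning gain."""
    rng = rng or random.Random(0)
    topics = list(q_values)
    if rng.random() < epsilon:
        return rng.choice(topics)
    return max(topics, key=lambda t: q_values[t])

def update(q_values, counts, topic, reward):
    """Incremental-average update of the estimated gain for a topic,
    e.g. after observing a learner's quiz improvement."""
    counts[topic] += 1
    q_values[topic] += (reward - q_values[topic]) / counts[topic]
```

In a CTI-driven system, the topic set itself would be refreshed from the ingestion module, so newly trending tactics enter the exploration pool automatically.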
Funding: Output of a research project implemented as part of the Basic Research Program at HSE University.
Abstract: Activation pruning reduces neural network complexity by eliminating low-importance neuron activations, yet identifying the critical pruning threshold, beyond which accuracy rapidly deteriorates, remains computationally expensive and typically requires exhaustive search. We introduce a thermodynamics-inspired framework that treats activation distributions as energy-filtered physical systems and employs the free energy of activations as a principled evaluation metric. Phase-transition-like phenomena in the free-energy profile (such as extrema, inflection points, and curvature changes) yield reliable estimates of the critical pruning threshold, providing a theoretically grounded means of predicting sharp accuracy degradation. To further enhance efficiency, we propose a renormalized free energy technique that approximates full-evaluation free energy using only the activation distribution of the unpruned network. This eliminates repeated forward passes, dramatically reducing computational overhead and achieving speedups of up to 550x for MLPs. Extensive experiments across diverse vision architectures (MLP, CNN, ResNet, MobileNet, Vision Transformer) and text models (LSTM, BERT, ELECTRA, T5, GPT-2) on multiple datasets validate the generality, robustness, and computational efficiency of our approach. Overall, this work establishes a theoretically grounded and practically effective framework for activation pruning, bridging the gap between analytical understanding and efficient deployment of sparse neural networks.
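The abstract does not spell out the free-energy formula; one plausible reading, treating activations as (negative) energy levels at temperature T in the standard Helmholtz form F = -T log Z, is sketched below purely for illustration. It is an assumption about the metric, not the paper's definition:

```python
import math

def free_energy(activations, T=1.0):
    """Helmholtz-style free energy F = -T * log Z with Z = sum(exp(a/T)),
    treating each activation a as a negative energy level. This is one
    plausible reading of 'free energy of activations', shown only to
    illustrate the profile idea."""
    z = sum(math.exp(a / T) for a in activations)
    return -T * math.log(z)

def profile(activations, thresholds, T=1.0):
    """Free-energy profile over candidate pruning thresholds: model
    pruning as removing activations below each magnitude threshold and
    record F. Kinks/extrema in this curve are what the paper uses to
    locate the critical threshold."""
    out = []
    for th in thresholds:
        kept = [a for a in activations if abs(a) >= th] or [0.0]
        out.append(free_energy(kept, T))
    return out
```

Removing activations shrinks Z, so F rises as the threshold grows; abrupt changes in that rise are the phase-transition-like signals.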
Funding: Supported by the Ministry of Research, Innovation and Digitization, CNCS/CCCDI-UEFISCDI (Romania), No. 11/2024, within PNCDI IV. The APC received no external funding.
Abstract: This work addresses optimality aspects of composite laminates having layers with different orientations. Regression Neural Networks (RNNs) can model the mechanical behavior of these laminates, specifically the stress-strain relationship. If this model has strong generalization ability, it can be coupled with a metaheuristic algorithm (the PSO algorithm in this article) to address an optimization problem (OP) related to the orientations of composite laminates. To solve OPs, this paper proposes an optimization framework (OFW) that connects the two components: the optimal-solution search mechanism and the RNN model. The OFW has two modules: the search mechanism (Adaptive Hybrid Topology PSO, AHTPSO) and the Prediction and Computation Module (PCM). The PCM undertakes all the activities concerning the OP at hand: the stress-strain model, constraint checking, and computation of the objective function. Two case studies on the layer orientations of laminated specimens are conducted to validate the proposed framework. The specimens belong to the class of off-axis-oriented specimens and are the subjects of two OPs. The algorithms for AHTPSO and for the two PCMs (one for each problem) are proposed and implemented as MATLAB scripts and functions. Simulations are carried out for different initial conditions. The solutions demonstrate that the OFW is effective and has highly acceptable computational complexity. The limitation of using the OFW is the generalization ability of the RNN model, or of any other regression model. To harness the RNN model efficiently, it must have very good generalization power. If this condition is met, the OFW can be integrated into any design process to make optimal choices of the layer orientations.
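The search mechanism at the core of such a framework is plain global-best PSO over box-bounded variables (here, ply orientation angles in degrees). The paper's AHTPSO adds adaptive topology on top; the sketch below is only the standard core loop, in Python rather than the paper's MATLAB, with illustrative bounds and coefficients:

```python
import random

def pso(objective, dim, n_particles=20, iters=150, lo=-90.0, hi=90.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO minimizing `objective` over [lo, hi]^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal-best positions
    Pf = [objective(x) for x in X]              # personal-best values
    gf = min(Pf)
    g = P[Pf.index(gf)][:]                      # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            f = objective(X[i])
            if f < Pf[i]:
                P[i], Pf[i] = X[i][:], f
                if f < gf:
                    g, gf = X[i][:], f
    return g, gf
```

In the OFW, `objective` would wrap the PCM: run the regression model for the candidate orientations, check constraints, and return the penalized objective value.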
Abstract: Alzheimer's disease (AD) is the most common form of dementia. In addition to the lack of effective treatments, there are limitations in diagnostic capabilities. The complexity of AD itself, together with the variety of other diseases often observed in a patient's history alongside the AD diagnosis, makes deciphering the molecular mechanisms that underlie AD even more important. Large datasets of single-cell RNA sequencing, single-nucleus RNA sequencing (snRNA-seq), and spatial transcriptomics (ST) have become essential in guiding and supporting new investigations into the cellular and regional susceptibility of AD. However, with unique technologies, software, and larger databases emerging, a lack of integration of these data can lead to ineffective use of valuable knowledge. Importantly, there was no specialized database concentrating on ST in AD that offered comprehensive differential analyses under various conditions, such as sex-specific and region-specific comparisons and comparisons between AD and control groups, until the new Single-cell and Spatial RNA-seq databasE for Alzheimer's Disease (ssREAD) database (Wang et al., 2024) was introduced to meet the scientific community's growing demand for comprehensive, integrated, and accessible data analysis.
Abstract: Customer churn is the rate at which customers discontinue doing business with a company over a given time period. It is an essential measure for businesses to monitor: high churn rates often indicate underlying issues with services, products, or customer experience, resulting in considerable income loss. Prediction of customer churn is a crucial task aimed at retaining customers and maintaining revenue growth. Traditional machine learning (ML) models often struggle to capture complex temporal dependencies in client behavior data. To address this, an optimized deep learning (DL) approach using a Regularized Bidirectional Long Short-Term Memory (RBiLSTM) model is proposed to mitigate overfitting and improve generalization error. The model integrates dropout, L2 regularization, and early stopping to enhance predictive accuracy while preventing over-reliance on specific patterns. Moreover, this study investigates the effect of optimization techniques on boosting the training efficiency of the developed model. Experimental results on a recent public customer churn dataset demonstrate that the trained model outperforms traditional ML models and other DL models, such as Long Short-Term Memory (LSTM) and Deep Neural Network (DNN), in churn prediction performance and stability. The proposed approach achieves 96.1% accuracy, compared with LSTM and DNN, which attain 94.5% and 94.1% accuracy, respectively. These results confirm that the proposed approach can serve as a valuable tool for businesses to proactively identify at-risk customers and implement targeted retention strategies.
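Of the three regularizers named (dropout, L2, early stopping), early stopping is framework-independent and easy to show in isolation: halt training once validation loss stops improving for a patience window. A minimal sketch of that criterion (patience and min_delta values are illustrative, not the paper's settings):

```python
def early_stopping(val_losses, patience=3, min_delta=0.0):
    """Return the epoch index at which training would stop: when the
    validation loss has not improved by more than `min_delta` for
    `patience` consecutive epochs. Returns the last epoch if the
    criterion is never triggered."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best, wait = loss, 0       # improvement: reset the counter
        else:
            wait += 1
            if wait >= patience:
                return epoch           # stop here, restore best weights
    return len(val_losses) - 1
```

In a training loop this runs incrementally after each epoch; dropout and L2 act inside the model, while this criterion acts on the loss curve.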
Funding: Supported by the project "Romanian Hub for Artificial Intelligence - HRIA", Smart Growth, Digitization and Financial Instruments Program, 2021-2027, MySMIS No. 334906.
Abstract: Objective expertise evaluation of individuals, as a prerequisite stage for team formation, has been a long-term desideratum in large software development companies. With the rapid advancements in machine learning methods, based on reliable existing data stored in project management tools' datasets, automating this evaluation process becomes a natural step forward. In this context, our approach focuses on quantifying software developer expertise by using metadata from task-tracking systems. For this, we mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge in the software industry. Afterward, we automatically classify the zones of expertise associated with each task a developer has worked on, using Bidirectional Encoder Representations from Transformers (BERT)-like transformers to handle the unique characteristics of project tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results.
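Once each completed task is labeled with an expertise zone, the two formalized scores reduce to an aggregation: per-zone totals for technology-specific expertise and their sum for general expertise. A minimal sketch with illustrative field names (the paper derives the zone labels with a BERT-like classifier over task metadata; the weighting scheme here is an assumption):

```python
from collections import defaultdict

def expertise_scores(tasks):
    """Aggregate a developer's completed tasks, each labeled with a
    technology `zone` and a `weight` (e.g. story points), into
    technology-specific scores plus an overall 'general' score."""
    tech = defaultdict(float)
    for t in tasks:
        tech[t["zone"]] += t["weight"]
    general = sum(tech.values())       # overall industry-wide knowledge
    return dict(tech), general
```

A real scoring function would normalize by task difficulty and recency, but the split into per-technology and general components follows directly from this shape.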