Funding: This work was funded through the Small Research Group Project under Grant Number RGP.1/316/45, Saudi Arabia.
Abstract: The effective and timely diagnosis and treatment of ocular diseases are key to the rapid recovery of patients. Today, one widespread disease that needs attention in this context is cataracts. Although deep learning has significantly advanced the analysis of ocular disease images, there is a need for a probabilistic model to generate the distributions of potential outcomes and thus make decisions related to uncertainty quantification. Therefore, this study implements a Bayesian convolutional neural network (BCNN) model for predicting cataracts by assigning probability values to the predictions. It prepares a convolutional neural network (CNN) and a BCNN model; the proposed BCNN model is CNN-based, with reparameterization applied in the first and last layers of the CNN. This study then trains them on a dataset of cataract images filtered from the ocular disease fundus images on Kaggle. The deep CNN model has an accuracy of 95%, while the BCNN model has an accuracy of 93.75% along with information on uncertainty estimation of cataract and normal eye conditions. When compared with other methods, the proposed work reveals that it can be a promising solution for cataract prediction with uncertainty estimation.
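To make the reparameterization idea above concrete, the following sketch (our own illustration, not the authors' code) builds a small CNN whose first convolution and final dense layer sample their weights from learned Gaussian posteriors; repeated stochastic forward passes then yield a predictive distribution whose spread serves as the uncertainty estimate. The layer sizes, image resolution, and number of Monte Carlo samples are assumptions.

```python
# Minimal sketch of a CNN whose first and last layers use the reparameterization
# trick, so repeated forward passes give a distribution over predictions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianConv2d(nn.Module):
    """Conv layer with a Gaussian weight posterior sampled via reparameterization."""
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_ch, in_ch, k, k))
        self.w_rho = nn.Parameter(torch.full((out_ch, in_ch, k, k), -5.0))
        nn.init.kaiming_normal_(self.w_mu)

    def forward(self, x):
        sigma = F.softplus(self.w_rho)                      # positive std deviations
        w = self.w_mu + sigma * torch.randn_like(sigma)     # w = mu + sigma * eps
        return F.conv2d(x, w, padding=1)

class BayesianLinear(nn.Module):
    def __init__(self, in_f, out_f):
        super().__init__()
        self.w_mu = nn.Parameter(torch.randn(out_f, in_f) * 0.01)
        self.w_rho = nn.Parameter(torch.full((out_f, in_f), -5.0))

    def forward(self, x):
        sigma = F.softplus(self.w_rho)
        w = self.w_mu + sigma * torch.randn_like(sigma)
        return F.linear(x, w)

class BCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv1 = BayesianConv2d(3, 16, 3)                 # Bayesian first layer
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)          # deterministic middle
        self.fc = BayesianLinear(32 * 56 * 56, n_classes)     # Bayesian last layer

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        return self.fc(x.flatten(1))

# Monte Carlo prediction: sampling the weights T times yields a predictive
# distribution; its mean is the class probability, its std the uncertainty.
model = BCNN()
x = torch.randn(1, 3, 224, 224)        # dummy fundus image batch
with torch.no_grad():
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(20)])
print("mean prediction:", probs.mean(0), "\npredictive std:", probs.std(0))
```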
Abstract: In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the Large Language Model (LLM) LLaMA and the BERT NLP model, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expressions in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT) in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance in sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
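As an illustration of the fine-tuning step, the sketch below (an assumed HuggingFace Transformers workflow, not the paper's exact configuration or data) adapts a pretrained BERT encoder to three-class airline review sentiment; in practice the loop would include proper batching, validation, and domain-adaptation data.

```python
# Toy fine-tuning sketch for BERT-based sentiment classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)           # negative / neutral / positive

reviews = ["The flight was delayed for hours.", "Crew was friendly and helpful."]
labels = torch.tensor([0, 2])                    # toy labels for illustration
batch = tokenizer(reviews, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                               # a few illustrative gradient steps
    out = model(**batch, labels=labels)          # returns loss and logits
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(preds)                                     # predicted sentiment classes
```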
Abstract: In the current paper, we present a study of the spatial distribution of luminous blue variables (LBVs) and various LBV candidates (cLBVs) with respect to OB associations in the galaxy M33. The identification of blue star groups was based on the LGGS data and was carried out by two clustering algorithms with initial parameters determined during simulations of random stellar fields. We have found that the distribution of distances to the nearest OB association obtained for the LBV/cLBV sample is close to that for massive stars with M_init > 20 M⊙ and Wolf-Rayet stars. This result is in good agreement with the standard assumption that LBVs represent an intermediate stage in the evolution of the most massive stars. However, some objects from the LBV/cLBV sample, particularly Fe II-emission stars, demonstrated severe isolation compared to other massive stars, which, together with certain features of their spectra, implicitly indicates that the nature of these objects and other LBVs/cLBVs may differ radically.
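The distance statistic used in this comparison can be reproduced in a few lines; the sketch below uses synthetic coordinates (not the LGGS catalogue) to show how nearest-association distances for the LBV/cLBV sample and a comparison sample of massive stars might be computed and contrasted.

```python
# Toy version of the nearest-OB-association distance comparison.
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
ob_assoc = rng.uniform(0, 10, size=(200, 2))   # association centres (toy plane units)
lbv = rng.uniform(0, 10, size=(30, 2))         # LBV/cLBV positions
wr = rng.uniform(0, 10, size=(100, 2))         # comparison sample (e.g. WR stars)

tree = cKDTree(ob_assoc)
d_lbv, _ = tree.query(lbv)                     # nearest-association distance per LBV
d_wr, _ = tree.query(wr)

# Compare the two distance distributions, e.g. with a two-sample KS test.
print(ks_2samp(d_lbv, d_wr))
```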
Abstract: BACKGROUND: Kidney and liver transplantation are two sub-specialized medical disciplines, with transplant professionals spending decades in training. While artificial intelligence-based (AI-based) tools could potentially assist in everyday clinical practice, comparative assessment of their effectiveness in clinical decision-making remains limited. AIM: To compare the use of ChatGPT and GPT-4 as potential tools in AI-assisted clinical practice in these challenging disciplines. METHODS: In total, 400 different questions tested ChatGPT's/GPT-4's knowledge and decision-making capacity in various renal and liver transplantation concepts. Specifically, 294 multiple-choice questions were derived from open-access sources, 63 questions were derived from published open-access case reports, and 43 from unpublished cases of patients treated at our department. The evaluation covered a plethora of topics, including clinical predictors, treatment options, and diagnostic criteria, among others. RESULTS: ChatGPT correctly answered 50.3% of the 294 multiple-choice questions, while GPT-4 demonstrated a higher performance, answering 70.7% of questions (P < 0.001). Regarding the 63 questions from published cases, ChatGPT achieved an agreement rate of 50.79% and partial agreement of 17.46%, while GPT-4 demonstrated an agreement rate of 80.95% and partial agreement of 9.52% (P = 0.01). Regarding the 43 questions from unpublished cases, ChatGPT demonstrated an agreement rate of 53.49% and partial agreement of 23.26%, while GPT-4 demonstrated an agreement rate of 72.09% and partial agreement of 6.98% (P = 0.004). When factoring by the nature of the task for all cases, notably, GPT-4 demonstrated outstanding performance, providing a differential diagnosis that included the final diagnosis in 90% of the cases (P = 0.008), and successfully predicting the prognosis of the patient in 100% of related questions (P < 0.001). CONCLUSION: GPT-4 consistently provided more accurate and reliable clinical recommendations, with higher percentages of full agreement in both renal and liver transplantation, compared with ChatGPT. Our findings support the potential utility of AI models like ChatGPT and GPT-4 in AI-assisted clinical practice as sources of accurate, individualized medical information and in facilitating decision-making. The progression and refinement of such AI-based tools could reshape the future of clinical practice, making their early adoption and adaptation by physicians a necessity.
Funding: The Italian Space Agency (Agenzia Spaziale Italiana, ASI), in the framework of the Research Day "Giornate della Ricerca Spaziale" initiative, through contract ASI N. 2023-4-U.0.
Abstract: All-inorganic perovskites based on cesium-lead-bromine (Cs-Pb-Br) have been a prominent research focus in optoelectronics in recent years. The optimisation and tunability of their macroscopic properties exploit their conformational flexibility, resulting in various crystal structures. Varying synthesis parameters can yield distinct crystal structures from Cs, Pb, and Br precursors, and manually exploring the relationship between these synthesis parameters and the resulting crystal structure is both labour-intensive and time-consuming. Machine learning (ML) can rapidly uncover insights and drive discoveries in chemical synthesis with the support of data, significantly reducing both the cost and development cycle of materials. Here, we gathered synthesis parameters from the published literature (220 synthesis runs) and implemented eight distinct ML models, including eXtreme Gradient Boosting (XGB), Decision Tree (DT), Support Vector Machine (SVM), Random Forest (RF), Naïve Bayes (NB), Logistic Regression (LR), Gradient Boosting (GB), and K-Nearest Neighbours (KN), to classify and predict Cs-Pb-Br crystal structures from given synthesis parameters. Validation accuracy, precision, F1 score, recall, and average area under the curve (AUC) are employed to evaluate these ML models. The XGB model exhibited the best performance, achieving a validation accuracy of 0.841. The trained XGB model was subsequently utilised to predict the structure from 10 experimental runs using a randomised set of parameters, achieving a testing accuracy of 0.8. The results indicate that the Cs/Pb molar ratio, reaction time, and the concentration of organic compounds (ligands) play crucial roles in synthesising various crystal structures of Cs-Pb-Br. This study demonstrates a significant decrease in the effort required for experimental procedures and builds a foundational basis for predicting crystal structures from synthesis parameters.
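A minimal sketch of the classification setup is given below, with synthetic data and assumed feature names standing in for the 220 literature synthesis runs; it shows how an XGBoost classifier maps synthesis parameters to a crystal-structure label and reports feature importances of the kind discussed above (requires the xgboost package).

```python
# Illustrative XGBoost classifier: synthesis parameters -> crystal-structure class.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "cs_pb_molar_ratio": rng.uniform(0.5, 4.0, 220),   # assumed feature names
    "reaction_time_min": rng.uniform(1, 120, 220),
    "ligand_conc_mM": rng.uniform(0, 50, 220),
    "temperature_C": rng.uniform(25, 200, 220),
})
y = rng.integers(0, 3, 220)          # e.g. 0: CsPbBr3, 1: Cs4PbBr6, 2: CsPb2Br5

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("validation accuracy:", accuracy_score(y_va, clf.predict(X_va)))
print(dict(zip(X.columns, clf.feature_importances_)))   # which parameters matter
```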
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51375013), the Anhui Provincial Natural Science Foundation of China (Grant No. 1208085ME64), and the Open Research Fund of the Key Laboratory of High Performance Complex Manufacturing, Central South University (Grant No. Kfkt2013-12).
Abstract: The compliance modeling and rigidity performance evaluation of lower-mobility parallel manipulators remain two overwhelming challenges at the conceptual design stage due to their geometric complexities. By using screw theory, this paper explores the compliance modeling and eigencompliance evaluation of a newly patented 1T2R spindle head whose topological architecture is a 3-RPS parallel mechanism. The kinematic definitions and inverse position analysis are briefly addressed in the first place to provide the necessary information for compliance modeling. By considering the 3-RPS parallel kinematic machine (PKM) as a typical compliant parallel device, whose three limb assemblages have bending, extending and torsional deflections, an analytical compliance model for the spindle head is established with screw theory, and the analytical stiffness matrix of the platform is formulated. Based on the eigenscrew decomposition, the eigencompliance and corresponding eigenscrews are analyzed, and the platform's compliance properties are physically interpreted as the suspension of six screw springs. The distributions of the stiffness constants of the six screw springs throughout the workspace are predicted in a quick manner with a piece-by-piece calculation algorithm. The numerical simulation reveals a strong dependency of the platform's compliance on its configuration, in that they are axially symmetric due to structural features. At the last stage, the effects of some design variables, such as structural, configurational and dimensional parameters, on system rigidity characteristics are investigated with the purpose of providing useful information for the structural design and performance improvement of the PKM. Compared with previous efforts in compliance analysis of PKMs, the present methodology is more intuitive and universal, and thus can be easily applied to evaluate the overall rigidity performance of other PKMs with high efficiency.
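As a simplified numerical illustration of the eigencompliance idea, the sketch below takes a made-up 6x6 symmetric stiffness matrix, inverts it to a compliance matrix, and performs an ordinary eigendecomposition; the paper's eigenscrew decomposition additionally accounts for the screw metric, which is not reproduced here.

```python
# Simplified eigencompliance illustration on a placeholder stiffness matrix.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
K = A @ A.T + 6 * np.eye(6)          # placeholder SPD stiffness matrix (not the 3-RPS model)

C = np.linalg.inv(K)                 # compliance matrix
eigvals, eigvecs = np.linalg.eigh(C) # compliance eigenvalues and associated directions

for i, (c, s) in enumerate(zip(eigvals, eigvecs.T)):
    print(f"eigencompliance {i}: {c:.4e}, direction: {np.round(s, 3)}")
```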
Funding: Supported by the project "Component's digital transformation methods' fundamental research for micro- and nanosystems" (No. 0705-2020-0041).
Abstract: The features of carrier-based aircraft navigation systems during the approach and landing phases are investigated. A new adaptive Kalman filter with unknown state noise statistics is proposed to improve the accuracy of the INS/GNSS integrated navigation system. The adaptive filtering algorithm aims to estimate and adapt the unknown state noise covariance Q in high dynamic conditions, when the measurement noise covariance R is assumed to be known empirically in advance. The new adaptive Kalman filter, based on the innovation sequence and a pseudo-measurement vector approach, makes it more effective to estimate and adapt Q. The simulation results and semi-physical experiments show that the application of the proposed adaptive Kalman filter can guarantee a higher estimation accuracy of the state variables.
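A generic innovation-based adaptation loop of this kind is sketched below for a toy constant-velocity model; it uses covariance matching over a sliding window of innovations to update Q while R is held fixed, which approximates the spirit (though not the exact pseudo-measurement formulation) of the proposed filter.

```python
# Toy adaptive Kalman filter: Q adapted from the innovation sequence, R known.
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])      # constant-velocity model
H = np.array([[1.0, 0.0]])           # position measurement
R = np.array([[0.5]])                # known measurement noise covariance
Q = 1e-3 * np.eye(2)                 # initial (deliberately wrong) process noise

x = np.zeros(2)
P = np.eye(2)
innovations = []
rng = np.random.default_rng(0)
truth = np.array([0.0, 1.0])

for k in range(200):
    truth = F @ truth + rng.normal(0, [0.05, 0.2])        # true dynamics with noise
    z = H @ truth + rng.normal(0, np.sqrt(R[0, 0]))
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    v = z - H @ x                                         # innovation
    x = x + K @ v
    P = (np.eye(2) - K @ H) @ P
    # adapt Q by covariance matching over a sliding window of innovations
    innovations.append(np.outer(v, v))
    if len(innovations) >= 30:
        C_v = np.mean(innovations[-30:], axis=0)          # sample innovation covariance
        Q = K @ C_v @ K.T                                 # covariance-matching estimate

print("adapted Q:\n", Q)
```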
Funding: The National Natural Science Foundation of China (NSFC) (Grant No. 61671072).
Abstract: Encouraged by next-generation networks and autonomous vehicle systems, vehicular networks must employ advanced technologies to guarantee personal safety, reduce traffic accidents and ease traffic jams. By leveraging the computing ability at the network edge, multi-access edge computing (MEC) is a promising technique to tackle such challenges. Compared to traditional full offloading, partial offloading offers more flexibility from the perspective of both the application and the deployment of such systems. Hence, in this paper, we investigate the application of partial computing offloading in vehicular networks. In particular, by analyzing the structure of many emerging applications, e.g., AR and online games, we convert the application structure into a sequential multi-component model. Focusing on shortening the application execution delay, we extend the optimization problem from the single-vehicle computing offloading (SVCOP) scenario to multi-vehicle computing offloading (MVCOP) by taking multiple constraints into account. A deep reinforcement learning (DRL) based algorithm is proposed as a solution to this problem. Various performance evaluation results have shown that the proposed algorithm achieves superior performance compared to existing offloading mechanisms in reducing application execution delay.
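The sequential multi-component model can be illustrated with a toy delay calculation: each component runs locally or at the edge, and intermediate data crosses the wireless link whenever the execution side changes. The sketch below simply enumerates the decisions for one small application with assumed cycle counts and data sizes; the paper's DRL algorithm addresses the much larger multi-vehicle search space.

```python
# Toy sequential multi-component partial-offloading delay model.
from itertools import product

cycles = [2e8, 5e8, 3e8, 4e8]        # CPU cycles per component (assumed values)
data = [0.5e6, 1.0e6, 0.8e6, 0.3e6]  # bits passed on by each component
f_local, f_edge, rate = 1e9, 8e9, 10e6   # local CPU, edge CPU, link bit rate

def delay(decision):                 # decision[i] = 1 -> run component i at the edge
    t, prev = 0.0, 0
    for i, d in enumerate(decision):
        if d != prev:                # data crosses the wireless link at a side switch
            t += (data[i - 1] if i > 0 else data[0]) / rate
        t += cycles[i] / (f_edge if d else f_local)
        prev = d
    return t

best = min(product([0, 1], repeat=len(cycles)), key=delay)
print("best offloading decision:", best, "delay:", round(delay(best), 3), "s")
```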
Abstract: The financial industry has been strongly influenced by digitalization in the past few years, reflected by the emergence of "FinTech," which represents the marriage of "finance" and "information technology." FinTech provides opportunities for the creation of new services and business models and poses challenges to traditional financial service providers. Therefore, FinTech has become a subject of debate among practitioners, investors, and researchers and is highly visible in the popular media. In this study, we unveil the drivers motivating the FinTech phenomenon as perceived by the English and German popular press, including the subjects discussed in the context of FinTech. This study is the first to reflect the media perspective on the FinTech phenomenon in the research. In doing so, we extend the growing knowledge on FinTech and contribute to a common understanding in the financial and digital innovation literature. This study contributes to research in the areas of information systems, finance and interdisciplinary social sciences. Moreover, it brings value to practitioners (entrepreneurs, investors, regulators, etc.) who explore the field of FinTech.
Funding: This work was supported by an Overseas Research Students Award to Xiao-Bing Hu.
Abstract: This paper proposes a new method for model predictive control (MPC) of nonlinear systems to calculate the stability region and a feasible initial control profile/sequence, which are important to the implementation of MPC. Different from many existing methods, this paper distinguishes the stability region from the conservative terminal region. With global linearization, linear differential inclusion (LDI) and linear matrix inequality (LMI) techniques, a nonlinear system is transformed into a convex set of linear systems, and then the vertices of the set are used off-line to design the controller, to estimate the stability region, and also to determine a feasible initial control profile/sequence. The advantages of the proposed method are demonstrated by a simulation study.
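The LDI/LMI step can be illustrated with a small feasibility problem: given vertex matrices of the convex set of linear systems, search for a common quadratic Lyapunov matrix P. The sketch below (illustrative vertex matrices, not from the paper) uses CVXPY; level sets of x'Px contained in the constraint set then serve as stability-region estimates.

```python
# Common quadratic Lyapunov function for the vertices of an LDI (discrete time).
import numpy as np
import cvxpy as cp

A1 = np.array([[0.9, 0.2], [0.0, 0.8]])     # illustrative vertex systems
A2 = np.array([[0.8, 0.1], [-0.1, 0.7]])

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2)]
for A in (A1, A2):
    # x'Px must decrease along every vertex dynamics
    constraints.append(A.T @ P @ A - P << -eps * np.eye(2))

prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print("feasible:", prob.status, "\nP =\n", P.value)
# Level sets {x : x'Px <= c} that fit inside the constraint set give an
# estimate of the stability region used to initialise the MPC.
```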
Funding: Supported by Medical Research Council (MRC) grant MR/K004360/1 to SID, and a Marie Curie COFUND EU-UK Research Fellowship to SID.
Abstract: Neuroinformatics is a fascinating research field that applies computational models and analytical tools to high-dimensional experimental neuroscience data for a better understanding of how the brain functions or dysfunctions in brain diseases. Neuroinformaticians work at the intersection of neuroscience and informatics, supporting the integration of various sub-disciplines (behavioural neuroscience, genetics, cognitive psychology, etc.) working on brain research. Neuroinformaticians are the pathway of information exchange between informaticians and clinicians for a better understanding of the outcome of computational models and the clinical interpretation of the analysis. Machine learning is one of the most significant computational developments in the last decade, giving tools to neuroinformaticians and, finally, to radiologists and clinicians for an automatic and early diagnosis-prognosis of brain disease. The random forest (RF) algorithm has been successfully applied to high-dimensional neuroimaging data for feature reduction and has also been applied to classify the clinical label of a subject using single or multi-modal neuroimaging datasets. Our aim was to review the studies where RF was applied to correctly predict Alzheimer's disease (AD) and the conversion from mild cognitive impairment (MCI), and to assess its robustness to overfitting and outliers and its handling of non-linear data. Finally, we described our RF-based model that gave us the 1st position in an international challenge for automated prediction of MCI from MRI data.
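A generic version of the two RF roles discussed above, feature ranking and classification, is sketched below on synthetic data (not the challenge MRI features).

```python
# Random forest for feature reduction and clinical-label classification (toy data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))      # 200 subjects x 500 imaging features
y = (X[:, :5].sum(axis=1) + rng.standard_normal(200) > 0).astype(int)  # toy label

rf = RandomForestClassifier(n_estimators=500, max_features="sqrt", random_state=0)
print("CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())

rf.fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:10]
print("top-10 features by importance:", top)   # feature reduction step
```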
Abstract: In the past decade, online Peer-to-Peer (P2P) lending platforms have transformed the lending industry, which has historically been dominated by commercial banks. Information technology breakthroughs such as big data-based financial technologies (Fintech) have been identified as important disruptive driving forces for this paradigm shift. In this paper, we take an information economics perspective to investigate how big data affects the transformation of the lending industry. By identifying how signaling and search costs are reduced by big data analytics for credit risk management of P2P lending, we discuss how information asymmetry is reduced in the big data era. Rooted in the lending business, we propose a theory on the economics of big data and outline a number of research opportunities and challenging issues.
Funding: Supported by the Chinese Scholarship Council (CSC) under MOFCOM (No. 2017MOC010907). Any opinions, findings, and conclusions are those of the authors and do not necessarily reflect the views of the above agency.
Abstract: Mobile Edge Computing (MEC) has been considered a promising solution that can address capacity and performance challenges in legacy systems such as Mobile Cloud Computing (MCC). In particular, such challenges include intolerable delay, congestion in the core network, insufficient Quality of Experience (QoE), and a high cost of resource utility, such as energy and bandwidth. The aforementioned challenges originate from the limited resources of mobile devices, the multi-hop connection between end-users and the cloud, and high pressure from computation-intensive and delay-critical applications. Considering the limited resource setting at the MEC, improving the efficiency of task offloading in terms of both energy and delay in MEC applications is an important and urgent problem to be solved. In this paper, the key objective is to propose a task offloading scheme that minimizes the overall energy consumption while satisfying capacity and delay requirements. Thus, we propose a MEC-assisted energy-efficient task offloading scheme that leverages the cooperative MEC framework. To achieve energy efficiency, we propose a novel hybrid approach based on Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO) to solve the optimization problem. The proposed approach considers efficient resource allocation, such as sub-carriers, power, and bandwidth, for offloading to guarantee minimum energy consumption. The simulation results demonstrate that the proposed strategy is computationally efficient compared to benchmark methods. Moreover, it improves energy utilization, energy gain, response delay, and offloading utility.
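One possible way to hybridize PSO and GWO is sketched below on a toy continuous energy objective: particles keep PSO's personal-best and global-best velocity terms and add a pull toward the three best solutions, in the spirit of GWO's alpha/beta/delta leaders. The paper's actual hybridization, constraints, and energy model may differ.

```python
# Schematic hybrid PSO-GWO optimiser minimising a placeholder energy function.
import numpy as np

rng = np.random.default_rng(0)

def energy(x):                       # placeholder objective standing in for the energy model
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.abs(x))

dim, n, iters = 8, 30, 100
pos = rng.uniform(0, 1, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = np.apply_along_axis(energy, 1, pos)

for t in range(iters):
    order = np.argsort(pbest_val)
    alpha, beta, delta = pbest[order[:3]]          # GWO leaders from the best particles
    gbest = pbest[order[0]]
    w = 0.9 - 0.5 * t / iters                      # inertia weight decays over time
    r1, r2, r3 = rng.uniform(size=(3, n, dim))
    leaders = (alpha + beta + delta) / 3.0         # GWO-style pull toward the leaders
    vel = (w * vel + 1.5 * r1 * (pbest - pos)
           + 1.5 * r2 * (gbest - pos) + 1.0 * r3 * (leaders - pos))
    pos = np.clip(pos + vel, 0, 1)
    vals = np.apply_along_axis(energy, 1, pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]

print("best energy:", pbest_val.min(), "at", pbest[pbest_val.argmin()].round(3))
```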
Funding: Supported in part by the National Natural Science Foundation of China (61773209), the Six Talent Peaks Project in Jiangsu Province (XYDXX-033), the Postdoctoral Science Foundation of China (2014M551598), and the Natural Science Foundation of Jiangsu Province (BK20190021).
Abstract: This paper investigates the sliding mode control (SMC) problem for a class of discrete-time nonlinear networked Markovian jump systems (MJSs) in the presence of probabilistic denial-of-service (DoS) attacks. The communication network via which the data is propagated is unsafe, and the malicious adversary can attack the system during state feedback. By considering random denial-of-service attacks, a new sliding mode variable is designed, which takes into account the distribution information of the probabilistic attacks. Then, by resorting to Lyapunov theory and stochastic analysis methods, sufficient conditions are established for the existence of the desired sliding mode controller, guaranteeing both reachability of the designed sliding surface and stability of the resulting sliding motion. Finally, a simulation example is given to demonstrate the effectiveness of the proposed sliding mode control algorithm.
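The effect of probabilistic DoS attacks on a sliding-mode loop can be illustrated with a toy discrete-time simulation: with probability p the measurement is lost and the previous control input is held; otherwise an equivalent-control-plus-switching law drives the sliding variable toward zero. The single-mode system and gains below are made up and do not represent the paper's Markovian-jump design.

```python
# Toy discrete-time sliding mode loop under Bernoulli denial-of-service attacks.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.array([[2.0, 1.0]])           # sliding variable s = C x
p_attack = 0.3                       # DoS (packet loss) probability
rng = np.random.default_rng(0)

x = np.array([[1.0], [-0.5]])
u = np.zeros((1, 1))
for k in range(100):
    if rng.random() > p_attack:      # measurement delivered: recompute the SMC law
        s = C @ x
        # equivalent control plus a switching term (hand-tuned gains)
        u = -np.linalg.inv(C @ B) @ (C @ A @ x - 0.5 * s + 0.05 * np.sign(s))
    x = A @ x + B @ u                # under attack, the last control input is held

print("final state:", x.ravel(), "sliding variable:", (C @ x).ravel())
```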
Funding: Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.
Abstract: In medical imaging, computer vision researchers are faced with a variety of features for verifying the authenticity of classifiers for an accurate diagnosis. In response to the coronavirus 2019 (COVID-19) pandemic, new testing procedures, medical treatments, and vaccines are being developed rapidly. One potential diagnostic tool is the reverse-transcription polymerase chain reaction (RT-PCR). RT-PCR, typically a time-consuming process, was less sensitive to COVID-19 recognition in the disease's early stages. Here we introduce an optimized deep learning (DL) scheme to distinguish COVID-19-infected patients from normal patients according to computed tomography (CT) scans. In the proposed method, contrast enhancement is used to improve the quality of the original images. A pretrained DenseNet-201 DL model is then trained using transfer learning. Two fully connected layers and an average pool are used for feature extraction. The extracted deep features are then optimized with a Firefly algorithm to select the most optimal learning features. Fusing the selected features is important to improving the accuracy of the approach; however, it directly affects the computational cost of the technique. In the proposed method, a new parallel high index technique is used to fuse two optimal vectors; the outcome is then passed on to an extreme learning machine for final classification. Experiments were conducted on a collected database of patients using a 70:30 training:testing ratio. Our results indicated an average classification accuracy of 94.76% with the proposed approach. A comparison of the outcomes to several other DL models demonstrated the effectiveness of our DL method for classifying COVID-19 based on CT scans.
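The feature-extraction stage can be sketched as below, assuming a recent torchvision (the weights argument downloads ImageNet-pretrained parameters); a simple variance-based selection stands in for the Firefly algorithm, and the fusion and extreme-learning-machine stages are omitted.

```python
# Condensed sketch of deep-feature extraction with a pretrained DenseNet-201.
import torch
from torchvision import models

backbone = models.densenet201(weights="DEFAULT")   # ImageNet-pretrained backbone
backbone.eval()

x = torch.randn(4, 3, 224, 224)              # a batch of preprocessed CT slices
with torch.no_grad():
    fmap = backbone.features(x)              # (4, 1920, 7, 7) feature maps
    feats = torch.nn.functional.adaptive_avg_pool2d(fmap, 1).flatten(1)  # (4, 1920)

# Placeholder feature selection: keep the k highest-variance features.
# (The paper instead selects features with a Firefly algorithm.)
k = 256
scores = feats.var(dim=0)
selected = feats[:, scores.topk(k).indices]
print(selected.shape)                        # (4, 256) selected deep features
```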
Funding: Project (No. PRC 03-41/2003) supported by the Ministry of Construction of Cuba.
Abstract: In this paper, the inverse problem of reconstructing the reflectivity function of a medium is examined within a blind deconvolution framework. The ultrasound pulse is estimated using higher-order statistics, and a Wiener filter is used to obtain the ultrasonic reflectivity function through wavelet-based models. A new approach to the parameter estimation of the inverse filtering step is proposed in the nondestructive evaluation field, which is based on the theory of Fourier-Wavelet regularized deconvolution (ForWaRD). This new approach can be viewed as a solution to the open problem of adaptation of the ForWaRD framework to perform the convolution kernel estimation and deconvolution interdependently. The results indicate stable solutions for the estimated pulse and an improvement in the radio-frequency (RF) signal, taking into account its signal-to-noise ratio (SNR) and axial resolution. Simulations and experiments showed that the proposed approach can provide robust and optimal estimates of the reflectivity function.
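A bare-bones Wiener deconvolution on a synthetic radio-frequency trace is sketched below; the pulse is assumed known here, whereas the paper estimates it from higher-order statistics and adds wavelet-domain (ForWaRD) regularisation.

```python
# Wiener deconvolution of a synthetic RF trace to recover a sparse reflectivity.
import numpy as np

rng = np.random.default_rng(0)
n = 512
reflectivity = np.zeros(n)
reflectivity[[80, 200, 205, 400]] = [1.0, 0.6, -0.4, 0.8]   # sparse reflectors

t = np.arange(-32, 32)
pulse = np.exp(-(t / 6.0) ** 2) * np.cos(2 * np.pi * t / 8.0)  # assumed ultrasound pulse
rf = np.convolve(reflectivity, pulse, mode="same") + 0.02 * rng.standard_normal(n)

# Zero-pad and re-centre the pulse so its FFT matches the "same" convolution above.
H = np.fft.fft(np.roll(np.pad(pulse, (0, n - pulse.size)), -pulse.size // 2))
snr = 100.0                                   # assumed signal-to-noise power ratio
W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr) # Wiener inverse filter
estimate = np.real(np.fft.ifft(np.fft.fft(rf) * W))
print("largest |reflectivity| estimates at:", np.argsort(np.abs(estimate))[-4:])
```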
Abstract: In this paper we focus on the target-capturing problem for a swarm of agents modelled as double integrators in any finite space dimension. Each agent knows the relative position of the target and has only an estimation of its velocity and acceleration. Given that the estimation errors are bounded by some known values, it is possible to design a control law that ensures that agents enter a user-defined ellipsoidal ring around the moving target. Agents know the relative position of the other members whose distance is smaller than a common detection radius. Finally, in the case of no uncertainty about target data and homogeneous agents, we show how the swarm can reach a static configuration around the moving target. Some simulations are reported to show the effectiveness of the proposed strategy.
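A toy simulation of the setting is sketched below: double-integrator agents are driven toward a circular ring of assumed radius around a moving target with a PD-like law; the paper's actual control law, uncertainty bounds, and inter-agent terms are not reproduced.

```python
# Double-integrator agents converging to a ring around a moving target (toy model).
import numpy as np

rng = np.random.default_rng(0)
n, dt, r = 6, 0.02, 1.0
pos = rng.uniform(-5, 5, (n, 2))
vel = np.zeros((n, 2))
target = np.array([0.0, 0.0])
target_vel = np.array([0.6, 0.3])

for _ in range(2000):
    target = target + target_vel * dt
    rel = target - pos
    dist = np.linalg.norm(rel, axis=1, keepdims=True)
    desired = target - rel / dist * r                        # nearest point on the ring
    acc = 4.0 * (desired - pos) + 3.0 * (target_vel - vel)   # PD tracking of the ring point
    vel += acc * dt
    pos += vel * dt

# Distances should settle close to the ring radius r.
print("final distances to target:", np.linalg.norm(target - pos, axis=1).round(2))
```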
Abstract: The aim of this research is to develop an algorithm and application that can perform real-time monitoring of the safe operation of offshore platforms and subsea gas pipelines, as well as determine the need for ship inspection, using data obtained from the automatic identification system (AIS). The research also focuses on the integration of the shipping database, AIS data, and other sources to develop a prototype for a real-time monitoring system for offshore platforms and pipelines. A simple concept is used in the development of this prototype: an overlaying map that places the coordinates of the offshore platform and subsea gas pipeline alongside the ship's coordinates (longitude/latitude) as detected by AIS. Using such information, we can then build an early warning system (EWS), relayed through short message service (SMS), email, or other means, that is triggered when a ship enters the restricted and exclusion zones of platforms and pipelines. The ship inspection system is developed by combining several attributes. Decision analysis software is then employed to prioritize the vessel's four attributes, namely ship age, ship type, classification, and flag state. Results show that the EWS can increase the safety level of offshore platforms and pipelines, as well as the efficient use of patrol boats in monitoring the safety of the facilities. Meanwhile, ship inspection enables the port to prioritize the ships to be inspected in accordance with the priority ranking inspection score.
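The geofencing core of such an EWS can be sketched in a few lines: each AIS position report is compared against a platform's exclusion and restricted radii, and an alert message is produced when a vessel enters them. The coordinates, radii, and MMSI numbers below are made up.

```python
# Minimal AIS geofencing check behind the early warning system idea.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

PLATFORM = (-5.95, 106.80)        # hypothetical platform position (lat, lon)
EXCLUSION_KM = 0.5                # exclusion zone radius
RESTRICTED_KM = 2.0               # outer restricted zone radius

def check_ais_report(mmsi, lat, lon):
    d = haversine_km(lat, lon, *PLATFORM)
    if d <= EXCLUSION_KM:
        return f"ALERT: vessel {mmsi} inside exclusion zone ({d:.2f} km) - notify via SMS/email"
    if d <= RESTRICTED_KM:
        return f"WARNING: vessel {mmsi} inside restricted zone ({d:.2f} km)"
    return f"vessel {mmsi} clear ({d:.2f} km)"

print(check_ais_report("525123456", -5.953, 106.802))
print(check_ais_report("525987654", -5.90, 106.70))
```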
Funding: Partly supported by King's Global Engagement Partnership Fund [2020-2021 #PF2021_Mar_005].
Abstract: In recent years, the police intervention strategy of "hot spots policing" has been effective in combating crime. However, as cities are under the intense pressure of increasing crime and scarce police resources, police patrols are expected to be targeted more accurately at finer geographic units rather than ballpark "hot spot" areas. This study aims to develop an algorithm using geographic information to detect crime patterns at street level, the so-called "hot street", to further assist the Criminal Investigation Department (CID) in capturing crime changes and transitive moments efficiently. The algorithm applies the Kernel Density Estimation (KDE) technique to street networks, rather than traditional areal units, in one case-study borough in London; it then maps the detected crime "hot streets" by crime type. It was found that the algorithm could successfully generate "hot street" maps for Law Enforcement Agencies (LEAs), enabling more effective allocation of police patrolling, and that it bears enough resilience for the Strategic Crime Analysis (SCA) team's sustainable utilization, by either updating the inputs with the latest data or modifying the model parameters (i.e., the kernel function and the range of spillover). Moreover, this study explores the contextual characteristics of crime "hot streets" by applying various regression models, identifying a best-fitted Geographically Weighted Regression (GWR) model that encompasses eight significant contextual factors with varied effects on crime at different streets. Having discussed the impact of lockdown on crime rates, it was apparent that the land-use-driven mobility change during lockdown was a fundamental reason for changes in crime. Overall, these research findings provide evidence and practical suggestions for crime prevention to local governors and policy practitioners, through more optimal urban planning (e.g., Low Traffic Neighborhoods), proactive policing (e.g., in the listed top 10 "hot streets" of crime), publicizing of laws and regulations, and installation of security infrastructure (e.g., CCTV cameras and traffic signals).
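A simplified, planar version of the hot-street ranking is sketched below: a kernel density estimate is fitted to synthetic crime locations and evaluated at street-segment midpoints, which are then ranked. The study itself applies network-constrained KDE on the real street network, which this sketch does not capture.

```python
# Rank street segments by crime density using a planar KDE (synthetic data).
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
crimes = np.vstack([rng.normal([2, 3], 0.3, (120, 2)),     # a crime cluster
                    rng.uniform(0, 10, (80, 2))])          # background incidents
street_midpoints = rng.uniform(0, 10, (300, 2))            # one point per street segment
street_ids = np.array([f"street_{i:03d}" for i in range(300)])

kde = KernelDensity(kernel="gaussian", bandwidth=0.4).fit(crimes)
density = np.exp(kde.score_samples(street_midpoints))      # crime density per segment

top10 = np.argsort(density)[::-1][:10]
for sid, d in zip(street_ids[top10], density[top10]):
    print(sid, round(float(d), 4))                          # candidate "hot streets"
```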