Organizations often use sentiment analysis-based systems or even resort to simple manual analysis to try to extract useful meaning from their customers’general digital“chatter”.Driven by the need for a more accurat...Organizations often use sentiment analysis-based systems or even resort to simple manual analysis to try to extract useful meaning from their customers’general digital“chatter”.Driven by the need for a more accurate way to qualitatively extract valuable product and brand-oriented consumer-generated texts,this paper experimentally tests the ability of an NLP-based analytics approach to extract information from highly unstructured texts.The results show that natural language processing outperforms sentiment analysis for detecting issues from social media data.Surprisingly,the experiment shows that sentiment analysis is not only better than manual analysis of social media data for the goal of supporting organizational decision-making,but may also be disadvantageous for such efforts.展开更多
In the effort to enhance cardiovascular diagnostics,deep learning-based heart sound classification presents a promising solution.This research introduces a novel preprocessing method:iterative k-means clustering combi...In the effort to enhance cardiovascular diagnostics,deep learning-based heart sound classification presents a promising solution.This research introduces a novel preprocessing method:iterative k-means clustering combined with silhouette score analysis,aimed at downsampling.This approach ensures optimal cluster formation and improves data quality for deep learning models.The process involves applying k-means clustering to the dataset,calculating the average silhouette score for each cluster,and selecting the clusterwith the highest score.We evaluated this method using 10-fold cross-validation across various transfer learningmodels fromdifferent families and architectures.The evaluation was conducted on four datasets:a binary dataset,an augmented binary dataset,amulticlass dataset,and an augmentedmulticlass dataset.All datasets were derived from the Heart Wave heart sounds dataset,a novelmulticlass dataset introduced by our research group.To increase dataset sizes and improve model training,data augmentation was performed using heartbeat cycle segmentation.Our findings highlight the significant impact of the proposed preprocessing approach on the HeartWave datasets.Across all datasets,model performance improved notably with the application of our method.In augmented multiclass classification,the MobileNetV2 model showed an average weighted F1-score improvement of 27.10%.In binary classification,ResNet50 demonstrated an average accuracy improvement of 8.70%,reaching 92.40%compared to its baseline performance.These results underscore the effectiveness of clustering with silhouette score analysis as a preprocessing step,significantly enhancing model accuracy and robustness.They also emphasize the critical role of preprocessing in addressing class imbalance and advancing precision medicine in cardiovascular diagnostics.展开更多
Purpose–The precast concrete slab track(PST)has advantages of fewer maintenance frequencies,better smooth rides and structural stability,which has been widely applied in urban rail transit.Precise positioning of prec...Purpose–The precast concrete slab track(PST)has advantages of fewer maintenance frequencies,better smooth rides and structural stability,which has been widely applied in urban rail transit.Precise positioning of precast concrete slab(PCS)is vital for keeping the initial track regularity.However,the cast-in-place process of the self-compacting concrete(SCC)filling layer generally causes a large deformation of PCS due to the water-hammer effect of flowing SCC,even cracking of PCS.Currently,the buoyancy characteristic and influencing factors of PCS during the SCC casting process have not been thoroughly studied in urban rail transit.Design/methodology/approach–In this work,a Computational Fluid Dynamics(CFD)model is established to calculate the buoyancy of PCS caused by the flowing SCC.The main influencing factors,including the inlet speed and flowability of SCC,have been analyzed and discussed.A new structural optimization scheme has been proposed for PST to reduce the buoyancy caused by the flowing SCC.Findings–The simulation and field test results showed that the buoyancy and deformation of PCS decreased obviously after adopting the new scheme.Originality/value–The findings of this study can provide guidance for the control of the deformation of PCS during the SCC construction process.展开更多
Magnesium and magnesium alloys,serving as crucial lightweight structural materials and hydrogen storage elements,find extensive applications in space technology,aviation,automotive,and magnesium-based hydrogen industr...Magnesium and magnesium alloys,serving as crucial lightweight structural materials and hydrogen storage elements,find extensive applications in space technology,aviation,automotive,and magnesium-based hydrogen industries.The global production of primary magnesium has reached approximately 1.2 million tons per year,with anticipated diversification in future applications and significant market demand.Nevertheless,approximately 80%of the world’s primary magnesium is still manufactured through the Pidgeon process,grappling with formidable issues including high energy consumption,massive carbon emission,significant resource depletion,and environmental pollution.The implementation of the relative vacuum method shows potential in breaking through technological challenges in the Pidgeon process,facilitating clean,low-carbon continuous magnesium smelting.This paper begins by introducing the principles of the relative vacuum method.Subsequently,it elucidates various innovative process routes,including relative vacuum ferrosilicon reduction,aluminum thermal reduction co-production of spinel,and aluminum thermal reduction co-production of calcium aluminate.Finally,and thermodynamic foundations of the relative vacuum,a quantitative analysis of the material,energy flows,carbon emission,and production cost for several new processes is conducted,comparing and analyzing them against the Pidgeon process.The study findings reveal that,with identical raw materials,the relative vacuum silicon thermal reduction process significantly decreases raw material consumption,energy consumption,and carbon dioxide emissions by 15.86%,30.89%,and 26.27%,respectively,compared to the Pidgeon process.The relative vacuum process,using magnesite as the raw material and aluminum as the reducing agent,has the lowest magnesium-to-feed ratio,at only 3.385.Additionally,its energy consumption and carbon dioxide emissions are the lowest,at 1.817 tce/t Mg and 7.782 t CO_(2)/t Mg,respectively.The energy consumption and carbon emissions of the relative vacuum magnesium smelting process co-producing calcium aluminate(12CaO·7Al_(2)O_(3),3CaO·Al_(2)O_(3),and CaO·Al_(2)O_(3))are highly correlated with the consumption of dolomite in the raw materials.When the reduction temperature is around 1473.15 K,the critical volume fraction of magnesium vapor for different processes varies within the range of 5%–40%.Production cost analysis shows that the relative vacuum primary magnesium smelting process has significant economic benefits.This paper offers essential data support and theoretical guidance for achieving energy efficiency,carbon reduction in magnesium smelting,and the industrial adoption of innovative processes.展开更多
Data-driven process monitoring is an effective approach to assure safe operation of modern manufacturing and energy systems,such as thermal power plants being studied in this work.Industrial processes are inherently d...Data-driven process monitoring is an effective approach to assure safe operation of modern manufacturing and energy systems,such as thermal power plants being studied in this work.Industrial processes are inherently dynamic and need to be monitored using dynamic algorithms.Mainstream dynamic algorithms rely on concatenating current measurement with past data.This work proposes a new,alternative dynamic process monitoring algorithm,using dot product feature analysis(DPFA).DPFA computes the dot product of consecutive samples,thus naturally capturing the process dynamics through temporal correlation.At the same time,DPFA's online computational complexity is lower than not just existing dynamic algorithms,but also classical static algorithms(e.g.,principal component analysis and slow feature analysis).The detectability of the new algorithm is analyzed for three types of faults typically seen in process systems:sensor bias,process fault and gain change fault.Through experiments with a numerical example and real data from a thermal power plant,the DPFA algorithm is shown to be superior to the state-of-the-art methods,in terms of better monitoring performance(fault detection rate and false alarm rate)and lower computational complexity.展开更多
This paper conducted a more comprehensive review and comparative analysis of the two heavy to blizzard processes that occurred in the Beijing area during December 13-15,2023,and February 20-21,2024,in terms of compreh...This paper conducted a more comprehensive review and comparative analysis of the two heavy to blizzard processes that occurred in the Beijing area during December 13-15,2023,and February 20-21,2024,in terms of comprehensive weather situation diagnosis,forecasting,and decision-making services,and summarized the meteorological service support experience of such heavy snow weather processes.It was found that both blizzard processes were jointly influenced by the 700 hPa southwesterly warm and humid jet stream and the near-surface easterly backflow;the numerical forecast was relatively accurate in the overall description of the snowfall process,and the forecast bias of the position of the 700 hPa southwesterly warm and humid jet stream determined the bias of the snowfall magnitude forecast at a certain point;when a deviation was found between the actual snowfall and the forecast,the cause should be analyzed in a timely manner,and the warning and forecast conclusions should be updated.With the full cooperation of relevant departments,it can greatly make up for the deviation of the early forecast snowfall amount,and ensure the safety and efficiency of people's travel.展开更多
The purpose of this paper is to identify the processes with the highest contribution to potential environmental impacts in the life cycle of the masonry of concrete blocks by evaluating their main emissions contributi...The purpose of this paper is to identify the processes with the highest contribution to potential environmental impacts in the life cycle of the masonry of concrete blocks by evaluating their main emissions contributing to impact categories and identifying hotspots for environmental improvements.The research is based on the Life Cycle Assessment(LCA)study of non-load-bearing masonry of concrete blocks performed by the authors.The processes those have demonstrated higher contribution to environmental impacts were identified in the Life Cycle Impact Assessment(LCIA)phase and a detailed analysis was carried out on the main substances derived from these processes.The highest potential impacts in the life cycle of the concrete blocks masonry can be attributed mainly to emissions coming from the production of Portland cement,which explains the peak of impact potential on the blocks production stage,but also the significant impact potential in the use of the blocks for masonry construction,due to the use of cement mortar.The results of this LCA study are part of a major research on the comparative analysis of different typologies of non-load-bearing external walls,which aims to contribute to the creation of a life cycle database of major building systems,to be used by the environmental certification systems of buildings.展开更多
Taking into account the characteristics of non-Newtonian fluids and the influence of latent heat of wax crystallization,this study establishes physical and mathematical models for the synergy of tubular heating and me...Taking into account the characteristics of non-Newtonian fluids and the influence of latent heat of wax crystallization,this study establishes physical and mathematical models for the synergy of tubular heating and mechanical stirring during the waxy crude oil heating process.Numerical calculations are conducted using the sliding grid technique and FVM.The focus of this study is on the impact of stirring rate(τ),horizontal deflection angle(θ1),vertical deflection angle(θ2),and stirring diameter(D)on the heating effect of crude oil.Our results show that asτincreases from 200 rpm to 500 rpm and D increases from 400 mm to 600 mm,there is an improvement in the average crude oil temperature and temperature uniformity.Additionally,heating efficiency increases by 0.5%and 1%,while the volume of the low-temperature region decreases by 57.01 m^(3) and 36.87 m3,respectively.Asθ1 andθ2 increase from 0°to 12°,the average crude oil temperature,temperature uniformity,and heating efficiency decrease,while the volume of the low-temperature region remains basically the same.Grey correlation analysis is used to rank the importance of stirring parameters in the following order:τ>θ1>θ2>D.Subsequently,multiple regression analysis is used to quantitatively describe the relationship between different stirring parameters and heat transfer evaluation indices through equations.Finally,based on entropy generation minimization,the stirring parameters with optimal heat transfer performance are obtained when τ=350 rpm,θ1=θ2=0°,and D=500 mm.展开更多
The nonlinear Schrodinger equation(NLSE) is a key tool for modeling wave propagation in nonlinear and dispersive media. This study focuses on the complex cubic NLSE with δ-potential,explored through the Brownian proc...The nonlinear Schrodinger equation(NLSE) is a key tool for modeling wave propagation in nonlinear and dispersive media. This study focuses on the complex cubic NLSE with δ-potential,explored through the Brownian process. The investigation begins with the derivation of stochastic solitary wave solutions using the modified exp(-Ψ(ξ)) expansion method. To illustrate the noise effects, 3D and 2D visualizations are displayed for different non-negative values of noise parameter under suitable parameter values. Additionally, qualitative analysis of both perturbed and unperturbed dynamical systems is conducted using bifurcation and chaos theory. In bifurcation analysis, we analyze the detailed parameter analysis near fixed points of the unperturbed system. An external periodic force is applied to perturb the system, leading to an investigation of its chaotic behavior. Chaos detection tools are employed to predict the behavior of the perturbed dynamical system, with results validated through visual representations.Multistability analysis is conducted under varying initial conditions to identify multiple stable states in the perturbed dynamical system, contributing to chaotic behavior. Also, sensitivity analysis of the Hamiltonian system is performed for different initial conditions. The novelty of this work lies in the significance of the obtained results, which have not been previously explored for the considered equation. These findings offer noteworthy insights into the behavior of the complex cubic NLSE with δ-potential and its applications in fields such as nonlinear optics, quantum mechanics and Bose–Einstein condensates.展开更多
The fracture volume is gradually changed with the depletion of fracture pressure during the production process.However,there are few flowback models available so far that can estimate the fracture volume loss using pr...The fracture volume is gradually changed with the depletion of fracture pressure during the production process.However,there are few flowback models available so far that can estimate the fracture volume loss using pressure transient and rate transient data.The initial flowback involves producing back the fracturing fuid after hydraulic fracturing,while the second flowback involves producing back the preloading fluid injected into the parent wells before fracturing of child wells.The main objective of this research is to compare the initial and second flowback data to capture the changes in fracture volume after production and preload processes.Such a comparison is useful for evaluating well performance and optimizing frac-turing operations.We construct rate-normalized pressure(RNP)versus material balance time(MBT)diagnostic plots using both initial and second flowback data(FB;and FBs,respectively)of six multi-fractured horizontal wells completed in Niobrara and Codell formations in DJ Basin.In general,the slope of RNP plot during the FB,period is higher than that during the FB;period,indicating a potential loss of fracture volume from the FB;to the FB,period.We estimate the changes in effective fracture volume(Ver)by analyzing the changes in the RNP slope and total compressibility between these two flowback periods.Ver during FB,is in general 3%-45%lower than that during FB:.We also compare the drive mechanisms for the two flowback periods by calculating the compaction-drive index(CDI),hydrocarbon-drive index(HDI),and water-drive index(WDI).The dominant drive mechanism during both flowback periods is CDI,but its contribution is reduced by 16%in the FB,period.This drop is generally compensated by a relatively higher HDI during this period.The loss of effective fracture volume might be attributed to the pressure depletion in fractures,which occurs during the production period and can extend 800 days.展开更多
In order to analysis the oxygen distribution in the adsorption bed during the hydrogen purification process from oxygen-containing feed gas and the safety of device operation, this article established a non-isothermal...In order to analysis the oxygen distribution in the adsorption bed during the hydrogen purification process from oxygen-containing feed gas and the safety of device operation, this article established a non-isothermal model for the pressure swing adsorption (PSA) separation process of 4-component (H_(2)/O_(2)/N_(2)/CH_(4)), and adopted a composite adsorption bed of activated carbon and molecular sieve. In this article, the oxygen distribution in the adsorption bed under different feed gas oxygen contents, different adsorption pressures, and different product hydrogen purity was studied for both vacuuming process and purging process. The study shows that during the process from the end of adsorption to the end of providing purging, the peak value of oxygen concentration in the adsorption bed gradually increases, with the highest value exceeding 30 times the oxygen content of the feed gas. Moreover, the concentration multiplier of oxygen in the adsorption bed increases with the increase of the adsorption pressure, decreases with the increase of the oxygen content in the feed gas, and increases with the decrease of the hydrogen product purity. When the oxygen content in the feed gas reaches 0.3% (vol), the peak value of oxygen concentration in the adsorption bed exceeds 10% (vol), which will make the front part of the oxygen concentration peak fall in an explosion limit range. As the decrease of product hydrogen content, the oxygen concentration peak in the adsorption bed will gradually move forward to the adsorption bed outlet, and even penetrate through the adsorption bed. And during the process of the oxygen concentration peak moving forward, the oxygen will enter the pipeline at the outlet of the adsorption bed, which will make the pipeline space of high-speed gas flow into an explosion range, bringing great risk to the device. The preferred option for safe operation of PSA for hydrogen purification from oxygen-containing feed gas is to deoxygenate the feed gas. When deoxygenation is not available, a lower adsorption pressure and a higher product hydrogen purity (greater than or equal to 99.9% (vol)) can be used to avoid the gas in the adsorption bed outlet pipeline being in the explosion range.展开更多
As global climate change problems become increasingly serious,the world urgently needs to take practical measures to deal with this environmental issue.In this sense,China’s carbon peaking and carbon neutrality goals...As global climate change problems become increasingly serious,the world urgently needs to take practical measures to deal with this environmental issue.In this sense,China’s carbon peaking and carbon neutrality goals endowed an ingenious solution.Various industries in China have actively responded to this policy call,and various enterprises have started to carry out the work of carbon emission reduction,especially in water supply industry.In order to reduce carbon emission,we must first calculate carbon emissions and understand the level of carbon emission.At present,the carbon emissions accounting of water supply industry is mostly carried out on the partial work of some individual units within the enterprise,and there is no accounting case for the whole process of water supply work.This work innovatively proposes a method to calculate the carbon emissions generated in the whole water supply procedure.The carbon emission in the whole water supply procedure originates from the leakage of water supply network and the maintenance of water supply network,and all the carbon emissions involved in these two aspects are calculated.Moreover,the key points of carbon emission reduction are analyzed according to the accounting results,and a potential carbon emission reduction scheme is proposed.The research can provide a reference for the overall carbon emission accounting strategies and the construction of carbon emission reduction plans in the future.展开更多
The transition of the Chinese iron and steel industry to ultralow emissions has accelerated the development of denitrification technologies.Considering the existing dual carbon targets,carbon emissions must be conside...The transition of the Chinese iron and steel industry to ultralow emissions has accelerated the development of denitrification technologies.Considering the existing dual carbon targets,carbon emissions must be considered as a critical indicator when comparing denitrification systems.Consequently,this study provided a comprehensive cost-benefit model for denitrification in the steel industry,encompassing additional carbon emissions resulting from the implementation of denitrification systems.Activated-carbon adsorption and selective catalytic reduction(SCR)systems are two efficient techniques for controlling NOx emissions during sintering.Based on thismodel,a cost-benefit analysis of these two typical systems was conducted,and the results indicated that the unit flue-gas abatement costs of SCR and activated-carbon adsorption systems were 0.00275 and 0.0126 CNY/m^(3),and the unit flue-gas abatement benefits were 0.0072 and 0.0179 CNY/m^(3),respectively.Additionally,the effect of operational characteristics on operating costs,including duration and material prices,was analyzed.When treating the flue gas,the two systems released 0.0020 and 0.0060 kg/m^(3) of carbon dioxide,respectively.The primary sources of carbon emissions from the SCR and activated-carbon adsorption systems are the production of reducing agents and system operations,respectively.Furthermore,considering the features of the activated carbon adsorption system for simultaneous desulfurization,a SCR-wet flue gas desulfurization(WFGD)technology route was developed for comparison with the activated carbon adsorption system.展开更多
Coal is a versatile energy resource and was a driver of the industrial revolution that transformed the economies of Europe and North America and the trajectory of civilization.In this work,a technoeconomic analysis wa...Coal is a versatile energy resource and was a driver of the industrial revolution that transformed the economies of Europe and North America and the trajectory of civilization.In this work,a technoeconomic analysis was performed for a coal-to-carbonfiber manufacture process developed at the University of Kentucky’s Center for Applied Energy Research.According to this process,coal,with decant oil as the solvent,was converted to mesophase pitch via solvent extraction,and the mesophase pitch was subsequently converted to carbon fiber.The total cost to produce carbon fibers from coal and decant oil via the solvent extraction process was estimated to be$11.50/kg for 50,000-tow pitch carbon fiber with a production volume of 3750 MT/year.The estimated carbon fiber cost was significantly lower than the current commercially available PAN-based carbon fiber price($20–$30/kg).With decant oil recycling rates of 50%and 70%in the solvent extraction process,the manufacturing cost of carbon fiber was estimated to be$9.90/kg and$9.50/kg of carbon fiber,respectively.A cradle-to-gate energy assessment revealed that carbon fiber derived from coal exhibited an embodied energy of 510 MJ/kg,significantly lower than that of conventionally produced carbon fiber from PAN.This notable difference is primarily attributed to the substantially higher conversion rate of coal-based mesophase pitch fibers into carbon fiber,surpassing PAN fibers by 1.6 times.These findings indicate that using coal for carbon fiber production through solvent extraction methods could offer a more energy-efficient and cost-competitive alternative to the traditional PAN based approach.展开更多
Kernel-based slow feature analysis(SFA)methods have been successfully applied in the industrial process fault detection field.However,kernel-based SFA methods have high computational complexity as dealing with nonline...Kernel-based slow feature analysis(SFA)methods have been successfully applied in the industrial process fault detection field.However,kernel-based SFA methods have high computational complexity as dealing with nonlinearity,leading to delays in detecting time-varying data features.Additionally,the uncertain kernel function and kernel parameters limit the ability of the extracted features to express process characteristics,resulting in poor fault detection performance.To alleviate the above problems,a novel randomized auto-regressive dynamic slow feature analysis(RRDSFA)method is proposed to simultaneously monitor the operating point deviations and process dynamic faults,enabling real-time monitoring of data features in industrial processes.Firstly,the proposed Random Fourier mappingbased method achieves more effective nonlinear transformation,contrasting with the current kernelbased RDSFA algorithm that may lead to significant computational complexity.Secondly,a randomized RDSFA model is developed to extract nonlinear dynamic slow features.Furthermore,a Bayesian inference-based overall fault monitoring model including all RRDSFA sub-models is developed to overcome the randomness of random Fourier mapping.Finally,the superiority and effectiveness of the proposed monitoring method are demonstrated through a numerical case and a simulation of continuous stirred tank reactor.展开更多
In order to attain good quality transfer function estimates from magnetotelluric field data(i.e.,smooth behavior and small uncertainties across all frequencies),we compare time series data processing with and without ...In order to attain good quality transfer function estimates from magnetotelluric field data(i.e.,smooth behavior and small uncertainties across all frequencies),we compare time series data processing with and without a multitaper approach for spectral estimation.There are several common ways to increase the reliability of the Fourier spectral estimation from experimental(noisy)data;for example to subdivide the experimental time series into segments,taper these segments(using single taper),perform the Fourier transform of the individual segments,and average the resulting spectra.展开更多
Sentiment analysis, a crucial task in discerning emotional tones within the text, plays a pivotal role in understandingpublic opinion and user sentiment across diverse languages.While numerous scholars conduct sentime...Sentiment analysis, a crucial task in discerning emotional tones within the text, plays a pivotal role in understandingpublic opinion and user sentiment across diverse languages.While numerous scholars conduct sentiment analysisin widely spoken languages such as English, Chinese, Arabic, Roman Arabic, and more, we come to grapplingwith resource-poor languages like Urdu literature which becomes a challenge. Urdu is a uniquely crafted language,characterized by a script that amalgamates elements from diverse languages, including Arabic, Parsi, Pashtu,Turkish, Punjabi, Saraiki, and more. As Urdu literature, characterized by distinct character sets and linguisticfeatures, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis aformidable undertaking. The limited availability of resources has fueled increased interest among researchers,prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu languagesentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into fivelabels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments andemotions within the Urdu language, despite the absence of well-curated datasets. To tackle this challenge, theinitial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such asnewspapers, articles, and socialmedia comments. Subsequent to this data collection, a thorough process of cleaningand preprocessing is implemented to ensure the quality of the data. The study leverages two well-known deeplearningmodels, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for bothtraining and evaluating sentiment analysis performance. Additionally, the study explores hyperparameter tuning tooptimize the models’ efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assessthe effectiveness of the models. The research findings reveal that RNN surpasses CNN in Urdu sentiment analysis,gaining a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of RNN,solidifying its status as a compelling option for conducting sentiment analysis tasks in the Urdu language.展开更多
The recent pandemic crisis has highlighted the importance of the availability and management of health data to respond quickly and effectively to health emergencies, while respecting the fundamental rights of every in...The recent pandemic crisis has highlighted the importance of the availability and management of health data to respond quickly and effectively to health emergencies, while respecting the fundamental rights of every individual. In this context, it is essential to find a balance between the protection of privacy and the safeguarding of public health, using tools that guarantee transparency and consent to the processing of data by the population. This work, starting from a pilot investigation conducted in the Polyclinic of Bari as part of the Horizon Europe Seeds project entitled “Multidisciplinary analysis of technological tracing models of contagion: the protection of rights in the management of health data”, has the objective of promoting greater patient awareness regarding the processing of their health data and the protection of privacy. The methodology used the PHICAT (Personal Health Information Competence Assessment Tool) as a tool and, through the administration of a questionnaire, the aim was to evaluate the patients’ ability to express their consent to the release and processing of health data. The results that emerged were analyzed in relation to the 4 domains in which the process is divided which allows evaluating the patients’ ability to express a conscious choice and, also, in relation to the socio-demographic and clinical characteristics of the patients themselves. This study can contribute to understanding patients’ ability to give their consent and improve information regarding the management of health data by increasing confidence in granting the use of their data for research and clinical management.展开更多
Gravitational wave detection is one of the most cutting-edge research areas in modern physics, with its success relying on advanced data analysis and signal processing techniques. This study provides a comprehensive r...Gravitational wave detection is one of the most cutting-edge research areas in modern physics, with its success relying on advanced data analysis and signal processing techniques. This study provides a comprehensive review of data analysis methods and signal processing techniques in gravitational wave detection. The research begins by introducing the characteristics of gravitational wave signals and the challenges faced in their detection, such as extremely low signal-to-noise ratios and complex noise backgrounds. It then systematically analyzes the application of time-frequency analysis methods in extracting transient gravitational wave signals, including wavelet transforms and Hilbert-Huang transforms. The study focuses on discussing the crucial role of matched filtering techniques in improving signal detection sensitivity and explores strategies for template bank optimization. Additionally, the research evaluates the potential of machine learning algorithms, especially deep learning networks, in rapidly identifying and classifying gravitational wave events. The study also analyzes the application of Bayesian inference methods in parameter estimation and model selection, as well as their advantages in handling uncertainties. However, the research also points out the challenges faced by current technologies, such as dealing with non-Gaussian noise and improving computational efficiency. To address these issues, the study proposes a hybrid analysis framework combining physical models and data-driven methods. Finally, the research looks ahead to the potential applications of quantum computing in future gravitational wave data analysis. This study provides a comprehensive theoretical foundation for the optimization and innovation of gravitational wave data analysis methods, contributing to the advancement of gravitational wave astronomy.展开更多
Sun T,Chen R,Liu J,Zhou Y.Current progress and future perspectives in total‐body PET imaging,part I:data processing and analysis.iRADIOLOGY.2024;2(2):173-90.On page 178,Section 3.2,the text reads:“Muller et al.[49]u...Sun T,Chen R,Liu J,Zhou Y.Current progress and future perspectives in total‐body PET imaging,part I:data processing and analysis.iRADIOLOGY.2024;2(2):173-90.On page 178,Section 3.2,the text reads:“Muller et al.[49]used deep learning to denoise dynamic PET data from a Quadra scanner and investigated…”This should be corrected to:“Muller et al.[49]used deep learning to denoise dynamic PET data from a PennPET Explorer scanner and investigated…”We apologize for this error.展开更多
文摘Organizations often use sentiment analysis-based systems or even resort to simple manual analysis to try to extract useful meaning from their customers’general digital“chatter”.Driven by the need for a more accurate way to qualitatively extract valuable product and brand-oriented consumer-generated texts,this paper experimentally tests the ability of an NLP-based analytics approach to extract information from highly unstructured texts.The results show that natural language processing outperforms sentiment analysis for detecting issues from social media data.Surprisingly,the experiment shows that sentiment analysis is not only better than manual analysis of social media data for the goal of supporting organizational decision-making,but may also be disadvantageous for such efforts.
基金supported by the Deanship of Scientific Research(DSR),King Abdulaziz University,Jeddah,under grant No.IPP:533-611-2025DSR technical and financial support.
文摘In the effort to enhance cardiovascular diagnostics,deep learning-based heart sound classification presents a promising solution.This research introduces a novel preprocessing method:iterative k-means clustering combined with silhouette score analysis,aimed at downsampling.This approach ensures optimal cluster formation and improves data quality for deep learning models.The process involves applying k-means clustering to the dataset,calculating the average silhouette score for each cluster,and selecting the clusterwith the highest score.We evaluated this method using 10-fold cross-validation across various transfer learningmodels fromdifferent families and architectures.The evaluation was conducted on four datasets:a binary dataset,an augmented binary dataset,amulticlass dataset,and an augmentedmulticlass dataset.All datasets were derived from the Heart Wave heart sounds dataset,a novelmulticlass dataset introduced by our research group.To increase dataset sizes and improve model training,data augmentation was performed using heartbeat cycle segmentation.Our findings highlight the significant impact of the proposed preprocessing approach on the HeartWave datasets.Across all datasets,model performance improved notably with the application of our method.In augmented multiclass classification,the MobileNetV2 model showed an average weighted F1-score improvement of 27.10%.In binary classification,ResNet50 demonstrated an average accuracy improvement of 8.70%,reaching 92.40%compared to its baseline performance.These results underscore the effectiveness of clustering with silhouette score analysis as a preprocessing step,significantly enhancing model accuracy and robustness.They also emphasize the critical role of preprocessing in addressing class imbalance and advancing precision medicine in cardiovascular diagnostics.
文摘Purpose–The precast concrete slab track(PST)has advantages of fewer maintenance frequencies,better smooth rides and structural stability,which has been widely applied in urban rail transit.Precise positioning of precast concrete slab(PCS)is vital for keeping the initial track regularity.However,the cast-in-place process of the self-compacting concrete(SCC)filling layer generally causes a large deformation of PCS due to the water-hammer effect of flowing SCC,even cracking of PCS.Currently,the buoyancy characteristic and influencing factors of PCS during the SCC casting process have not been thoroughly studied in urban rail transit.Design/methodology/approach–In this work,a Computational Fluid Dynamics(CFD)model is established to calculate the buoyancy of PCS caused by the flowing SCC.The main influencing factors,including the inlet speed and flowability of SCC,have been analyzed and discussed.A new structural optimization scheme has been proposed for PST to reduce the buoyancy caused by the flowing SCC.Findings–The simulation and field test results showed that the buoyancy and deformation of PCS decreased obviously after adopting the new scheme.Originality/value–The findings of this study can provide guidance for the control of the deformation of PCS during the SCC construction process.
基金supported by the China Postdoctoral Science Foundation(No.2023T160088)the Youth Fund of the National Natural Science Foundation of China(No.52304324).
文摘Magnesium and magnesium alloys,serving as crucial lightweight structural materials and hydrogen storage elements,find extensive applications in space technology,aviation,automotive,and magnesium-based hydrogen industries.The global production of primary magnesium has reached approximately 1.2 million tons per year,with anticipated diversification in future applications and significant market demand.Nevertheless,approximately 80%of the world’s primary magnesium is still manufactured through the Pidgeon process,grappling with formidable issues including high energy consumption,massive carbon emission,significant resource depletion,and environmental pollution.The implementation of the relative vacuum method shows potential in breaking through technological challenges in the Pidgeon process,facilitating clean,low-carbon continuous magnesium smelting.This paper begins by introducing the principles of the relative vacuum method.Subsequently,it elucidates various innovative process routes,including relative vacuum ferrosilicon reduction,aluminum thermal reduction co-production of spinel,and aluminum thermal reduction co-production of calcium aluminate.Finally,and thermodynamic foundations of the relative vacuum,a quantitative analysis of the material,energy flows,carbon emission,and production cost for several new processes is conducted,comparing and analyzing them against the Pidgeon process.The study findings reveal that,with identical raw materials,the relative vacuum silicon thermal reduction process significantly decreases raw material consumption,energy consumption,and carbon dioxide emissions by 15.86%,30.89%,and 26.27%,respectively,compared to the Pidgeon process.The relative vacuum process,using magnesite as the raw material and aluminum as the reducing agent,has the lowest magnesium-to-feed ratio,at only 3.385.Additionally,its energy consumption and carbon dioxide emissions are the lowest,at 1.817 tce/t Mg and 7.782 t CO_(2)/t Mg,respectively.The energy consumption and carbon emissions of the relative vacuum magnesium smelting process co-producing calcium aluminate(12CaO·7Al_(2)O_(3),3CaO·Al_(2)O_(3),and CaO·Al_(2)O_(3))are highly correlated with the consumption of dolomite in the raw materials.When the reduction temperature is around 1473.15 K,the critical volume fraction of magnesium vapor for different processes varies within the range of 5%–40%.Production cost analysis shows that the relative vacuum primary magnesium smelting process has significant economic benefits.This paper offers essential data support and theoretical guidance for achieving energy efficiency,carbon reduction in magnesium smelting,and the industrial adoption of innovative processes.
基金supported in part by the National Science Fund for Distinguished Young Scholars of China(62225303)the National Natural Science Fundation of China(62303039,62433004)+2 种基金the China Postdoctoral Science Foundation(BX20230034,2023M730190)the Fundamental Research Funds for the Central Universities(buctrc202201,QNTD2023-01)the High Performance Computing Platform,College of Information Science and Technology,Beijing University of Chemical Technology
文摘Data-driven process monitoring is an effective approach to assure safe operation of modern manufacturing and energy systems,such as thermal power plants being studied in this work.Industrial processes are inherently dynamic and need to be monitored using dynamic algorithms.Mainstream dynamic algorithms rely on concatenating current measurement with past data.This work proposes a new,alternative dynamic process monitoring algorithm,using dot product feature analysis(DPFA).DPFA computes the dot product of consecutive samples,thus naturally capturing the process dynamics through temporal correlation.At the same time,DPFA's online computational complexity is lower than not just existing dynamic algorithms,but also classical static algorithms(e.g.,principal component analysis and slow feature analysis).The detectability of the new algorithm is analyzed for three types of faults typically seen in process systems:sensor bias,process fault and gain change fault.Through experiments with a numerical example and real data from a thermal power plant,the DPFA algorithm is shown to be superior to the state-of-the-art methods,in terms of better monitoring performance(fault detection rate and false alarm rate)and lower computational complexity.
文摘This paper conducted a more comprehensive review and comparative analysis of the two heavy to blizzard processes that occurred in the Beijing area during December 13-15,2023,and February 20-21,2024,in terms of comprehensive weather situation diagnosis,forecasting,and decision-making services,and summarized the meteorological service support experience of such heavy snow weather processes.It was found that both blizzard processes were jointly influenced by the 700 hPa southwesterly warm and humid jet stream and the near-surface easterly backflow;the numerical forecast was relatively accurate in the overall description of the snowfall process,and the forecast bias of the position of the 700 hPa southwesterly warm and humid jet stream determined the bias of the snowfall magnitude forecast at a certain point;when a deviation was found between the actual snowfall and the forecast,the cause should be analyzed in a timely manner,and the warning and forecast conclusions should be updated.With the full cooperation of relevant departments,it can greatly make up for the deviation of the early forecast snowfall amount,and ensure the safety and efficiency of people's travel.
文摘The purpose of this paper is to identify the processes with the highest contribution to potential environmental impacts in the life cycle of the masonry of concrete blocks by evaluating their main emissions contributing to impact categories and identifying hotspots for environmental improvements.The research is based on the Life Cycle Assessment(LCA)study of non-load-bearing masonry of concrete blocks performed by the authors.The processes those have demonstrated higher contribution to environmental impacts were identified in the Life Cycle Impact Assessment(LCIA)phase and a detailed analysis was carried out on the main substances derived from these processes.The highest potential impacts in the life cycle of the concrete blocks masonry can be attributed mainly to emissions coming from the production of Portland cement,which explains the peak of impact potential on the blocks production stage,but also the significant impact potential in the use of the blocks for masonry construction,due to the use of cement mortar.The results of this LCA study are part of a major research on the comparative analysis of different typologies of non-load-bearing external walls,which aims to contribute to the creation of a life cycle database of major building systems,to be used by the environmental certification systems of buildings.
基金supported by the National Natural Science Foundation of China(Grant no.52304065)China Postdoctoral Science Foundation(Grant no.2022MD723759).
文摘Taking into account the characteristics of non-Newtonian fluids and the influence of latent heat of wax crystallization,this study establishes physical and mathematical models for the synergy of tubular heating and mechanical stirring during the waxy crude oil heating process.Numerical calculations are conducted using the sliding grid technique and FVM.The focus of this study is on the impact of stirring rate(τ),horizontal deflection angle(θ1),vertical deflection angle(θ2),and stirring diameter(D)on the heating effect of crude oil.Our results show that asτincreases from 200 rpm to 500 rpm and D increases from 400 mm to 600 mm,there is an improvement in the average crude oil temperature and temperature uniformity.Additionally,heating efficiency increases by 0.5%and 1%,while the volume of the low-temperature region decreases by 57.01 m^(3) and 36.87 m3,respectively.Asθ1 andθ2 increase from 0°to 12°,the average crude oil temperature,temperature uniformity,and heating efficiency decrease,while the volume of the low-temperature region remains basically the same.Grey correlation analysis is used to rank the importance of stirring parameters in the following order:τ>θ1>θ2>D.Subsequently,multiple regression analysis is used to quantitatively describe the relationship between different stirring parameters and heat transfer evaluation indices through equations.Finally,based on entropy generation minimization,the stirring parameters with optimal heat transfer performance are obtained when τ=350 rpm,θ1=θ2=0°,and D=500 mm.
基金Supporting Project under Grant No.RSP2025R472,King Saud University,Riyadh,Saudi Arabia。
文摘The nonlinear Schrodinger equation(NLSE) is a key tool for modeling wave propagation in nonlinear and dispersive media. This study focuses on the complex cubic NLSE with δ-potential,explored through the Brownian process. The investigation begins with the derivation of stochastic solitary wave solutions using the modified exp(-Ψ(ξ)) expansion method. To illustrate the noise effects, 3D and 2D visualizations are displayed for different non-negative values of noise parameter under suitable parameter values. Additionally, qualitative analysis of both perturbed and unperturbed dynamical systems is conducted using bifurcation and chaos theory. In bifurcation analysis, we analyze the detailed parameter analysis near fixed points of the unperturbed system. An external periodic force is applied to perturb the system, leading to an investigation of its chaotic behavior. Chaos detection tools are employed to predict the behavior of the perturbed dynamical system, with results validated through visual representations.Multistability analysis is conducted under varying initial conditions to identify multiple stable states in the perturbed dynamical system, contributing to chaotic behavior. Also, sensitivity analysis of the Hamiltonian system is performed for different initial conditions. The novelty of this work lies in the significance of the obtained results, which have not been previously explored for the considered equation. These findings offer noteworthy insights into the behavior of the complex cubic NLSE with δ-potential and its applications in fields such as nonlinear optics, quantum mechanics and Bose–Einstein condensates.
文摘The fracture volume is gradually changed with the depletion of fracture pressure during the production process.However,there are few flowback models available so far that can estimate the fracture volume loss using pressure transient and rate transient data.The initial flowback involves producing back the fracturing fuid after hydraulic fracturing,while the second flowback involves producing back the preloading fluid injected into the parent wells before fracturing of child wells.The main objective of this research is to compare the initial and second flowback data to capture the changes in fracture volume after production and preload processes.Such a comparison is useful for evaluating well performance and optimizing frac-turing operations.We construct rate-normalized pressure(RNP)versus material balance time(MBT)diagnostic plots using both initial and second flowback data(FB;and FBs,respectively)of six multi-fractured horizontal wells completed in Niobrara and Codell formations in DJ Basin.In general,the slope of RNP plot during the FB,period is higher than that during the FB;period,indicating a potential loss of fracture volume from the FB;to the FB,period.We estimate the changes in effective fracture volume(Ver)by analyzing the changes in the RNP slope and total compressibility between these two flowback periods.Ver during FB,is in general 3%-45%lower than that during FB:.We also compare the drive mechanisms for the two flowback periods by calculating the compaction-drive index(CDI),hydrocarbon-drive index(HDI),and water-drive index(WDI).The dominant drive mechanism during both flowback periods is CDI,but its contribution is reduced by 16%in the FB,period.This drop is generally compensated by a relatively higher HDI during this period.The loss of effective fracture volume might be attributed to the pressure depletion in fractures,which occurs during the production period and can extend 800 days.
基金support provided by the Sichuan Province Science and Technology Achievement Transformation Project (2023ZHCG0063).
文摘In order to analysis the oxygen distribution in the adsorption bed during the hydrogen purification process from oxygen-containing feed gas and the safety of device operation, this article established a non-isothermal model for the pressure swing adsorption (PSA) separation process of 4-component (H_(2)/O_(2)/N_(2)/CH_(4)), and adopted a composite adsorption bed of activated carbon and molecular sieve. In this article, the oxygen distribution in the adsorption bed under different feed gas oxygen contents, different adsorption pressures, and different product hydrogen purity was studied for both vacuuming process and purging process. The study shows that during the process from the end of adsorption to the end of providing purging, the peak value of oxygen concentration in the adsorption bed gradually increases, with the highest value exceeding 30 times the oxygen content of the feed gas. Moreover, the concentration multiplier of oxygen in the adsorption bed increases with the increase of the adsorption pressure, decreases with the increase of the oxygen content in the feed gas, and increases with the decrease of the hydrogen product purity. When the oxygen content in the feed gas reaches 0.3% (vol), the peak value of oxygen concentration in the adsorption bed exceeds 10% (vol), which will make the front part of the oxygen concentration peak fall in an explosion limit range. As the decrease of product hydrogen content, the oxygen concentration peak in the adsorption bed will gradually move forward to the adsorption bed outlet, and even penetrate through the adsorption bed. And during the process of the oxygen concentration peak moving forward, the oxygen will enter the pipeline at the outlet of the adsorption bed, which will make the pipeline space of high-speed gas flow into an explosion range, bringing great risk to the device. The preferred option for safe operation of PSA for hydrogen purification from oxygen-containing feed gas is to deoxygenate the feed gas. When deoxygenation is not available, a lower adsorption pressure and a higher product hydrogen purity (greater than or equal to 99.9% (vol)) can be used to avoid the gas in the adsorption bed outlet pipeline being in the explosion range.
基金supported by the TianjinWater Group Co.,Ltd.,China(No.2022KY-02).
文摘As global climate change problems become increasingly serious,the world urgently needs to take practical measures to deal with this environmental issue.In this sense,China’s carbon peaking and carbon neutrality goals endowed an ingenious solution.Various industries in China have actively responded to this policy call,and various enterprises have started to carry out the work of carbon emission reduction,especially in water supply industry.In order to reduce carbon emission,we must first calculate carbon emissions and understand the level of carbon emission.At present,the carbon emissions accounting of water supply industry is mostly carried out on the partial work of some individual units within the enterprise,and there is no accounting case for the whole process of water supply work.This work innovatively proposes a method to calculate the carbon emissions generated in the whole water supply procedure.The carbon emission in the whole water supply procedure originates from the leakage of water supply network and the maintenance of water supply network,and all the carbon emissions involved in these two aspects are calculated.Moreover,the key points of carbon emission reduction are analyzed according to the accounting results,and a potential carbon emission reduction scheme is proposed.The research can provide a reference for the overall carbon emission accounting strategies and the construction of carbon emission reduction plans in the future.
基金supported by the National Key Research and Development Program of China(No.2022YFC3703403)Zhejiang Provincial“LeadWild Goose”Research and Development Project(No.2022C03073).
文摘The transition of the Chinese iron and steel industry to ultralow emissions has accelerated the development of denitrification technologies.Considering the existing dual carbon targets,carbon emissions must be considered as a critical indicator when comparing denitrification systems.Consequently,this study provided a comprehensive cost-benefit model for denitrification in the steel industry,encompassing additional carbon emissions resulting from the implementation of denitrification systems.Activated-carbon adsorption and selective catalytic reduction(SCR)systems are two efficient techniques for controlling NOx emissions during sintering.Based on thismodel,a cost-benefit analysis of these two typical systems was conducted,and the results indicated that the unit flue-gas abatement costs of SCR and activated-carbon adsorption systems were 0.00275 and 0.0126 CNY/m^(3),and the unit flue-gas abatement benefits were 0.0072 and 0.0179 CNY/m^(3),respectively.Additionally,the effect of operational characteristics on operating costs,including duration and material prices,was analyzed.When treating the flue gas,the two systems released 0.0020 and 0.0060 kg/m^(3) of carbon dioxide,respectively.The primary sources of carbon emissions from the SCR and activated-carbon adsorption systems are the production of reducing agents and system operations,respectively.Furthermore,considering the features of the activated carbon adsorption system for simultaneous desulfurization,a SCR-wet flue gas desulfurization(WFGD)technology route was developed for comparison with the activated carbon adsorption system.
基金sponsored by the US Department of Energy Fossil Energy and Carbon Management Program,project FEAA157 under contract DE-AC05-00OR22725 with UTBattelle,LLC.
文摘Coal is a versatile energy resource and was a driver of the industrial revolution that transformed the economies of Europe and North America and the trajectory of civilization.In this work,a technoeconomic analysis was performed for a coal-to-carbonfiber manufacture process developed at the University of Kentucky’s Center for Applied Energy Research.According to this process,coal,with decant oil as the solvent,was converted to mesophase pitch via solvent extraction,and the mesophase pitch was subsequently converted to carbon fiber.The total cost to produce carbon fibers from coal and decant oil via the solvent extraction process was estimated to be$11.50/kg for 50,000-tow pitch carbon fiber with a production volume of 3750 MT/year.The estimated carbon fiber cost was significantly lower than the current commercially available PAN-based carbon fiber price($20–$30/kg).With decant oil recycling rates of 50%and 70%in the solvent extraction process,the manufacturing cost of carbon fiber was estimated to be$9.90/kg and$9.50/kg of carbon fiber,respectively.A cradle-to-gate energy assessment revealed that carbon fiber derived from coal exhibited an embodied energy of 510 MJ/kg,significantly lower than that of conventionally produced carbon fiber from PAN.This notable difference is primarily attributed to the substantially higher conversion rate of coal-based mesophase pitch fibers into carbon fiber,surpassing PAN fibers by 1.6 times.These findings indicate that using coal for carbon fiber production through solvent extraction methods could offer a more energy-efficient and cost-competitive alternative to the traditional PAN based approach.
基金supported by the Program of National Natural Science Foundation of China(U23A20329,62163036)Youth Academic and Technical Leaders Reserve Talent Training project(202105AC160094)Industrial Innovation Talent Special Project of Xingdian Talent Support Program(XDYC-CYCX-2022-0010).
文摘Kernel-based slow feature analysis(SFA)methods have been successfully applied in the industrial process fault detection field.However,kernel-based SFA methods have high computational complexity as dealing with nonlinearity,leading to delays in detecting time-varying data features.Additionally,the uncertain kernel function and kernel parameters limit the ability of the extracted features to express process characteristics,resulting in poor fault detection performance.To alleviate the above problems,a novel randomized auto-regressive dynamic slow feature analysis(RRDSFA)method is proposed to simultaneously monitor the operating point deviations and process dynamic faults,enabling real-time monitoring of data features in industrial processes.Firstly,the proposed Random Fourier mappingbased method achieves more effective nonlinear transformation,contrasting with the current kernelbased RDSFA algorithm that may lead to significant computational complexity.Secondly,a randomized RDSFA model is developed to extract nonlinear dynamic slow features.Furthermore,a Bayesian inference-based overall fault monitoring model including all RRDSFA sub-models is developed to overcome the randomness of random Fourier mapping.Finally,the superiority and effectiveness of the proposed monitoring method are demonstrated through a numerical case and a simulation of continuous stirred tank reactor.
文摘In order to attain good quality transfer function estimates from magnetotelluric field data(i.e.,smooth behavior and small uncertainties across all frequencies),we compare time series data processing with and without a multitaper approach for spectral estimation.There are several common ways to increase the reliability of the Fourier spectral estimation from experimental(noisy)data;for example to subdivide the experimental time series into segments,taper these segments(using single taper),perform the Fourier transform of the individual segments,and average the resulting spectra.
文摘Sentiment analysis, a crucial task in discerning emotional tones within the text, plays a pivotal role in understandingpublic opinion and user sentiment across diverse languages.While numerous scholars conduct sentiment analysisin widely spoken languages such as English, Chinese, Arabic, Roman Arabic, and more, we come to grapplingwith resource-poor languages like Urdu literature which becomes a challenge. Urdu is a uniquely crafted language,characterized by a script that amalgamates elements from diverse languages, including Arabic, Parsi, Pashtu,Turkish, Punjabi, Saraiki, and more. As Urdu literature, characterized by distinct character sets and linguisticfeatures, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis aformidable undertaking. The limited availability of resources has fueled increased interest among researchers,prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu languagesentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into fivelabels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments andemotions within the Urdu language, despite the absence of well-curated datasets. To tackle this challenge, theinitial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such asnewspapers, articles, and socialmedia comments. Subsequent to this data collection, a thorough process of cleaningand preprocessing is implemented to ensure the quality of the data. The study leverages two well-known deeplearningmodels, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for bothtraining and evaluating sentiment analysis performance. Additionally, the study explores hyperparameter tuning tooptimize the models’ efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assessthe effectiveness of the models. The research findings reveal that RNN surpasses CNN in Urdu sentiment analysis,gaining a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of RNN,solidifying its status as a compelling option for conducting sentiment analysis tasks in the Urdu language.
文摘The recent pandemic crisis has highlighted the importance of the availability and management of health data to respond quickly and effectively to health emergencies, while respecting the fundamental rights of every individual. In this context, it is essential to find a balance between the protection of privacy and the safeguarding of public health, using tools that guarantee transparency and consent to the processing of data by the population. This work, starting from a pilot investigation conducted in the Polyclinic of Bari as part of the Horizon Europe Seeds project entitled “Multidisciplinary analysis of technological tracing models of contagion: the protection of rights in the management of health data”, has the objective of promoting greater patient awareness regarding the processing of their health data and the protection of privacy. The methodology used the PHICAT (Personal Health Information Competence Assessment Tool) as a tool and, through the administration of a questionnaire, the aim was to evaluate the patients’ ability to express their consent to the release and processing of health data. The results that emerged were analyzed in relation to the 4 domains in which the process is divided which allows evaluating the patients’ ability to express a conscious choice and, also, in relation to the socio-demographic and clinical characteristics of the patients themselves. This study can contribute to understanding patients’ ability to give their consent and improve information regarding the management of health data by increasing confidence in granting the use of their data for research and clinical management.
文摘Gravitational wave detection is one of the most cutting-edge research areas in modern physics, with its success relying on advanced data analysis and signal processing techniques. This study provides a comprehensive review of data analysis methods and signal processing techniques in gravitational wave detection. The research begins by introducing the characteristics of gravitational wave signals and the challenges faced in their detection, such as extremely low signal-to-noise ratios and complex noise backgrounds. It then systematically analyzes the application of time-frequency analysis methods in extracting transient gravitational wave signals, including wavelet transforms and Hilbert-Huang transforms. The study focuses on discussing the crucial role of matched filtering techniques in improving signal detection sensitivity and explores strategies for template bank optimization. Additionally, the research evaluates the potential of machine learning algorithms, especially deep learning networks, in rapidly identifying and classifying gravitational wave events. The study also analyzes the application of Bayesian inference methods in parameter estimation and model selection, as well as their advantages in handling uncertainties. However, the research also points out the challenges faced by current technologies, such as dealing with non-Gaussian noise and improving computational efficiency. To address these issues, the study proposes a hybrid analysis framework combining physical models and data-driven methods. Finally, the research looks ahead to the potential applications of quantum computing in future gravitational wave data analysis. This study provides a comprehensive theoretical foundation for the optimization and innovation of gravitational wave data analysis methods, contributing to the advancement of gravitational wave astronomy.
文摘Sun T,Chen R,Liu J,Zhou Y.Current progress and future perspectives in total‐body PET imaging,part I:data processing and analysis.iRADIOLOGY.2024;2(2):173-90.On page 178,Section 3.2,the text reads:“Muller et al.[49]used deep learning to denoise dynamic PET data from a Quadra scanner and investigated…”This should be corrected to:“Muller et al.[49]used deep learning to denoise dynamic PET data from a PennPET Explorer scanner and investigated…”We apologize for this error.