Since the emergence of Bitcoin, cryptocurrencies have grown significantly, not only in capitalization but also in number. Consequently, the cryptocurrency market can be a conducive arena for investors, as it offers many opportunities; however, it is difficult to understand. This study aims to describe, summarize, and segment the main trends of the entire cryptocurrency market in 2018 using data analysis tools. Accordingly, we propose a new clustering-based methodology that provides complementary views of the financial behavior of cryptocurrencies and looks for associations between the clustering results and other factors not involved in the clustering. In particular, the methodology applies three different partitional clustering algorithms, each using a different representation of cryptocurrencies: the yearly mean and standard deviation of the returns, the distribution of returns (a representation not previously applied to financial markets), and the time series of returns. Because each representation provides a different outlook on the market, we also examine the integration of the three clustering results to obtain a fine-grained analysis of the main market trends. Finally, we analyze the association of the clustering results with other descriptive features of cryptocurrencies, including age, technological attributes, and financial ratios derived from them. This enhances the profiling of the clusters with additional descriptive insights and reveals associations with other variables. The study thus describes the whole market through graphical information and a scalable methodology that can be reproduced by investors who want to understand the main trends in the market quickly, as well as by those looking for cryptocurrencies with different financial performance. In our analysis of the extended 2018–2019 period, we found that the market can typically be segmented into a few clusters (five or fewer) and that, even considering the intersections, the six most populated clusters account for 75% of the market. Regarding the associations between clusters and descriptive features, we found associations of some clusters with volume, market capitalization, and some financial ratios, which could be explored in future research.
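As a hedged sketch of the first representation, the snippet below clusters synthetic assets by the yearly mean and standard deviation of their returns with a minimal k-means. The data, the deterministic initialization, and the two-segment setup are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def features(returns):
    """Represent each asset by the (mean, standard deviation) of its returns."""
    return np.column_stack([returns.mean(axis=1), returns.std(axis=1)])

def kmeans(X, k, iters=50):
    """Minimal Lloyd's k-means with a deterministic spread-out initialization."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each point to its nearest center, then recompute centers
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
low_vol = rng.normal(0.0005, 0.01, size=(30, 250))   # calm synthetic assets
high_vol = rng.normal(0.0, 0.05, size=(30, 250))     # volatile synthetic assets
labels = kmeans(features(np.vstack([low_vol, high_vol])), k=2)
```

On this toy data the two volatility regimes separate cleanly in the (mean, std) plane, which is the intuition behind using summary statistics as one of the three views.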
Empirical mode decomposition (EMD) is proposed to identify linear structures under non-stationary excitation, and a non-white noise coefficient is introduced under the assumption that random signals consist of white noise and non-white noise components. The cross-correlation function of the response signal is decomposed into mode functions and a residue by the EMD method. A single-degree-of-freedom identification technique is applied to each mode function to obtain natural frequencies, damping ratios, and mode shapes. Identification results for a five-degree-of-freedom linear system demonstrate that the proposed method is effective in identifying the parameters of linear structures under non-stationary ambient excitation.
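Once EMD has isolated a mode, the per-mode identification step can be illustrated on a synthetic free-decay signal. The zero-crossing frequency estimate and logarithmic-decrement damping estimate below are a textbook sketch under assumed values (5 Hz, 2% damping), not the paper's exact procedure.

```python
import numpy as np

def identify_sdof(s, dt):
    """Estimate (natural frequency in Hz, damping ratio) from a free-decay
    response via zero-crossing counting and the logarithmic decrement."""
    cross = np.where(s[:-1] * s[1:] < 0)[0]            # sign-change indices
    # len(cross)-1 half-periods elapse between the first and last crossing
    fd = (len(cross) - 1) / (2.0 * (cross[-1] - cross[0]) * dt)
    peaks = [i for i in range(1, len(s) - 1)
             if s[i] > s[i - 1] and s[i] > s[i + 1] and s[i] > 0]
    delta = np.log(s[peaks[0]] / s[peaks[-1]]) / (len(peaks) - 1)
    zeta = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)
    return fd / np.sqrt(1 - zeta ** 2), zeta

# synthetic "mode function": 5 Hz natural frequency, 2% damping ratio
dt, fn, z = 0.001, 5.0, 0.02
t = np.arange(0.0, 4.0, dt)
wd = 2 * np.pi * fn * np.sqrt(1 - z ** 2)
x = np.exp(-z * 2 * np.pi * fn * t) * np.cos(wd * t)
fn_est, zeta_est = identify_sdof(x, dt)
```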
Considering that the profiles of bands at close wavelengths are quite similar and that curvelets are good at capturing profiles, a junk band recovery algorithm for hyperspectral data based on the curvelet transform is proposed. Both the noisy bands and the noise-free bands are transformed via the curvelet transform band by band. The high-frequency coefficients in junk bands are replaced with a linear interpolation of the high-frequency coefficients in noise-free bands, while the low-frequency coefficients remain unchanged to keep the main spectral characteristics from being distorted. The junk bands are then recovered by the inverse curvelet transform. The performance of this method is tested on a hyperspectral data cube obtained by the airborne visible/infrared imaging spectrometer (AVIRIS). The experimental results show that the proposed method is superior to the traditional denoising method BayesShrink and the state-of-the-art Curvelet Shrinkage in both root mean square error (RMSE) and peak signal-to-noise ratio (PSNR) of the recovered bands.
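The coefficient-substitution idea can be sketched in one dimension; here a plain FFT stands in for the curvelet transform, and the synthetic "bands" are assumed illustrative signals, so this is an analogue of the scheme rather than the published algorithm.

```python
import numpy as np

def recover_band(prev_band, junk_band, next_band, cutoff):
    """Keep the junk band's low-frequency coefficients; replace its
    high-frequency ones with the average of the two clean neighbours.
    A 1-D FFT stands in here for the curvelet transform."""
    fp, fj, fq = (np.fft.rfft(b) for b in (prev_band, junk_band, next_band))
    rec = fj.copy()
    rec[cutoff:] = 0.5 * (fp[cutoff:] + fq[cutoff:])
    return np.fft.irfft(rec, n=len(junk_band))

n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)
# shared smooth profile plus a band-dependent detail component
band = lambda w: np.sin(2 * np.pi * 3 * x) + 0.3 * np.sin(2 * np.pi * w * x)
rng = np.random.default_rng(0)
junk = band(20) + rng.normal(0, 0.5, n)              # heavily corrupted band
recovered = recover_band(band(19), junk, band(21), cutoff=8)
rmse_before = np.sqrt(np.mean((junk - band(20)) ** 2))
rmse_after = np.sqrt(np.mean((recovered - band(20)) ** 2))
```

Because neighbouring bands share most of their high-frequency structure, borrowing their coefficients removes noise while the retained low frequencies preserve the band's own spectral shape.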
In this paper, a new method is developed for dynamic layout adjustment and navigation in an enterprise Geographic Information System (GIS) based on object mark recognition. The extraction of object mark images is based on morphological structural patterns, which are described by morphological structural points, contour properties, and other geometrical data in a binary image of an enterprise geographic information map. Several pre-processing methods are introduced: contour smooth following, linearization, and extraction patterns of structural points. If any special object is selected for decision-making in a GIS map, all the information around it is obtained; that is, we investigate similar object enterprises around the selected region to analyse whether it is necessary to establish the object enterprise at that place. To navigate the GIS map further, we need to move from one region to another. Each time, a region is formed and displayed based on the user's focus. When a focus point of a map is selected, a dynamic layout and navigation diagram is constructed from the extracted object mark image. When the user changes the focus (i.e., clicks a node in navigation mode), a new sub-diagram is formed by dropping old nodes and adding new nodes. The prototype system provides effective interfaces that support GIS image navigation, detailed local image/map viewing, and enterprise information browsing.
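The drop-old/add-new focus change can be sketched as a set operation on a neighbourhood graph. The graph, node names, and one-hop neighbourhood rule below are all hypothetical simplifications of the paper's navigation diagram.

```python
# Focus-driven navigation sketch: the displayed sub-diagram is the focused
# node plus its neighbours; changing focus drops old nodes and adds new ones.
GRAPH = {
    "plant_A": ["plant_B", "depot_1"],
    "plant_B": ["plant_A", "plant_C"],
    "plant_C": ["plant_B"],
    "depot_1": ["plant_A"],
}

def sub_diagram(focus):
    """Nodes shown when `focus` is selected: the node and its neighbours."""
    return {focus, *GRAPH[focus]}

def change_focus(current, new_focus):
    """Return (nodes dropped, nodes added, new sub-diagram)."""
    new = sub_diagram(new_focus)
    return current - new, new - current, new

view = sub_diagram("plant_A")
dropped, added, view = change_focus(view, "plant_B")
```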
High Temperature Air Combustion (HTAC), based on regenerative theory, has been used in developed countries in recent years. It has many advantages, such as efficient recovery of waste heat, high-temperature preheating of combustion air, and low pollutant discharge, and the technology can be used in various furnaces in the mechanical, petroleum, and chemical industries. Rebuilding traditional radiant-tube combustion systems with HTAC technology has therefore become important. In the transformation process, the biggest difficulty encountered is the stability of burner combustion and of the control system. Because the exhaust gas heat is absorbed by the regenerator, the exhaust gas can be discharged at a very low temperature, realizing maximum waste heat recovery. At the same time, HTAC improves temperature uniformity and heating intensity. The thermal efficiency of the device can reach more than 80%, and compared with traditional air preheating, 21.55% of the energy can be saved. Revamping a traditional radiant-tube combustion system is technically feasible, but many problems arise because the rebuild is performed on an old system; this article discusses the main problems encountered on site during the rebuild. To optimize temperature control and keep the exhaust gas temperature low, a digital combustion control system is necessary. The control loop consists of a big loop and a small loop: the big loop controls the load distribution of all burners in each heating zone, and the small loop controls the burning time of each burner in a heating zone. A performance comparison of traditional and regenerative radiant-tube heaters shows that regenerative radiant-tube heaters have many advantages in fuel consumption. Based on the experience of replacing traditional radiant-tube heaters with regenerative ones, propositions are given concerning the combustion control system, pilot burner, flame detection, and trouble prevention for the rebuild of CAPL and CGL lines. It is recommended to use regenerative combustion technology in new annealing lines. Although the investment is about one third more than for a traditional combustion system, the energy-saving effect is obvious and operating costs decrease. Revamping can be carried out step by step according to the different heating zones; although this takes a long time, it is safer and affects production less. Regenerative combustion burner revamping has been successful. However, revamping work on different furnaces, particularly on continuous annealing furnaces with high requirements for temperature control, needs further exploration and research.
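The big-loop/small-loop split described above can be sketched as two cascaded functions: the big loop apportions each zone's heat demand across its burners, and the small loop converts a burner's share into a firing time within a switching cycle. The even split, the duty-cycle interpretation, and all numbers are illustrative assumptions, not plant data.

```python
# Two-level control sketch for a digital combustion system.

def big_loop(zone_demand_kw, burners_per_zone):
    """Distribute each zone's heat demand evenly over its burners."""
    return {zone: demand / burners_per_zone[zone]
            for zone, demand in zone_demand_kw.items()}

def small_loop(burner_load_kw, rated_kw, cycle_s=60.0):
    """On/off firing time within a switching cycle, proportional to load."""
    return cycle_s * min(burner_load_kw / rated_kw, 1.0)

zone_demand = {"preheat": 800.0, "soak": 300.0}       # kW per zone (assumed)
per_burner = big_loop(zone_demand, {"preheat": 4, "soak": 2})
firing_time = {z: small_loop(kw, rated_kw=250.0) for z, kw in per_burner.items()}
```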
Energy in its varied forms and applications has become the main driver of today's modern society. However, recent changes in power demand and climatic conditions, together with decarbonization policy, have prompted a rethink of the current energy generation and distribution system. This has led to the exploration of other energy sources, of which renewable energy (such as thermal, solar, and wind energy) is fast becoming an integral part of most energy systems. However, this innovative and promising energy source is highly unreliable in maintaining a constant peak power that matches demand. Energy storage systems have thus been highlighted as a solution for managing such imbalances and maintaining the stability of supply. Energy storage technologies absorb and store energy, and release it on demand. They include gravitational potential energy (pumped hydroelectric), chemical energy (batteries), kinetic energy (flywheels or compressed air), and energy stored in electrical (capacitors) and magnetic fields. This paper provides a detailed and comprehensive overview of some state-of-the-art energy storage technologies, their evolution, classification, and comparison, along with various areas of application. Also highlighted is a plethora of power electronic interface technologies that play a significant role in enabling optimum performance and utilization of energy storage systems in different areas of application.
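As a worked example of the gravitational-potential option listed above, the sketch below sizes a pumped-hydro reservoir from E = ρVghη. The volume, head, and round-trip efficiency figures are illustrative assumptions, not values from the paper.

```python
# Recoverable energy of a pumped-hydro reservoir: E = rho * V * g * h * eta
def pumped_hydro_energy_mwh(volume_m3, head_m, efficiency=0.8):
    rho, g = 1000.0, 9.81            # water density (kg/m^3), gravity (m/s^2)
    joules = rho * volume_m3 * g * head_m * efficiency
    return joules / 3.6e9            # J -> MWh

# 2,000,000 m^3 upper reservoir with a 300 m head (assumed figures)
energy_mwh = pumped_hydro_energy_mwh(2_000_000, 300)
```

Under these assumptions the reservoir stores roughly 1.3 GWh, which is why pumped hydro dominates bulk (multi-hour) storage while capacitors and flywheels serve short-duration roles.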
This paper presents a comparative analysis of parameters such as enthalpy, moderator temperature, moderator density, flow velocity, pressure, and the fuel temperature profile at the fuel pin cell level of a PWR. The pitch-to-fuel-pin-radius ratio is varied from 2.3 to 4. The methods and implementation strategy are such that the coupled neutronic and thermal-hydraulic analysis is executed in a fully one-dimensional (1D) manner. The thermal hydraulics are based on the moderator/coolant mass and enthalpy equations, together with a one-group diffusion equation for the fuel pin. Modelling of the fuel pin cell and subchannel is executed in two steps. First, the governing equations are derived assuming that all the parameters appearing in them are temperature independent. The fuel pin centerline temperature and radially averaged temperature equations are derived from Fourier's law of thermal conduction. Finally, the diffusion coefficient, fission cross-section, and absorption cross-section are evaluated with respect to the fuel pin temperature. The outcome will be helpful for further neutronic and thermal analyses of PWRs. The thermal-hydraulic parameters vary by at most 30% from their lowest values.
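The centerline and radially averaged temperatures mentioned above follow from Fourier's law for a cylinder with uniform volumetric heating: T(r) = Ts + q'''(R² − r²)/(4k), so the centerline rise is q'''R²/(4k) and the area-averaged rise is half of that, q'''R²/(8k). The snippet below evaluates both with illustrative PWR-like numbers that are assumptions, not the paper's data.

```python
# Steady 1-D radial conduction in a cylindrical fuel pin with uniform
# volumetric heat source q''' (W/m^3):
#   T_center = T_s + q''' R^2 / (4 k)
#   T_avg    = T_s + q''' R^2 / (8 k)   (radial average)

def fuel_pin_temperatures(q_vol, radius, k_fuel, t_surface):
    """Return (centerline, radially averaged) fuel temperature in K."""
    dT = q_vol * radius ** 2 / (4.0 * k_fuel)
    return t_surface + dT, t_surface + dT / 2.0

# assumed values: q''' = 3e8 W/m^3, R = 4.1 mm, k = 3 W/(m K), T_s = 673 K
t_center, t_avg = fuel_pin_temperatures(3e8, 4.1e-3, 3.0, 673.0)
```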
The world’s energy industry is experiencing a significant transformation due to increased energy consumption, the rise in renewable energy usage, and the demand for sustainability. This review paper explores the pote...The world’s energy industry is experiencing a significant transformation due to increased energy consumption, the rise in renewable energy usage, and the demand for sustainability. This review paper explores the potential for transformation offered by Artificial Intelligence (AI) in improving energy infrastructure, specifically looking at how it can be used in managing smart grids, predicting maintenance needs, and integrating renewable energy sources. Machine learning (ML) and deep learning (DL) are crucial AI technologies that have become necessary for enhancing grid stability, reducing operational costs, and improving energy efficiency. AI-powered predictive maintenance has proven to lower unexpected downtime by 40%, while AI-based demand forecasting has reached prediction accuracy of 90%, allowing utilities to efficiently manage supply and demand. In addition, AI helps tackle the issues of fluctuating renewable energy by playing a key role in enhancing energy storage and distribution in nations like Denmark and the US. Moreover, cryptographic frameworks such as Elliptic Curve Cryptography (ECC) and Post-Quantum Cryptography (PQC) offer robust security measures to protect AI-driven energy systems. ECC provides lightweight, efficient encryption ideal for IoT-enabled grids, while PQC frameworks, like the SIKE algorithm, ensure long-term resilience against quantum computing threats, safeguarding critical infrastructure. Nevertheless, obstacles like limited data access, cybersecurity weaknesses, and financial limitations continue to hinder widespread AI implementation, especially in less developed areas. 
This review emphasizes the significance of adopting essential strategies such as smart grid development, public-private collaborations, strong regulatory frameworks, and standardized data-sharing protocols. It is essential to have strong implementation and monitoring systems, improved cybersecurity measures, and ongoing investment in AI research in order to fully harness AI’s ability to revolutionize energy systems. By tackling these obstacles, AI has the potential to significantly impact the development of a more enduring, productive, and flexible worldwide energy system, hastening the shift towards a renewable-focused energy landscape.
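The demand-forecasting use case discussed in this review can be illustrated with a deliberately simple stand-in for the ML forecasters: a least-squares fit of a trend plus a daily cycle, scored by mean absolute percentage error. The synthetic load series and model form are assumptions for illustration only.

```python
import numpy as np

def fit_forecast(load, horizon, period=24):
    """Fit intercept + trend + one daily harmonic by least squares,
    then extrapolate `horizon` steps ahead."""
    t = np.arange(len(load))
    basis = lambda tt: np.column_stack([np.ones_like(tt), tt,
                                        np.sin(2 * np.pi * tt / period),
                                        np.cos(2 * np.pi * tt / period)])
    coef, *_ = np.linalg.lstsq(basis(t), load, rcond=None)
    return basis(np.arange(len(load), len(load) + horizon)) @ coef

rng = np.random.default_rng(0)
t = np.arange(24 * 14)                                   # two weeks, hourly
true = 100 + 0.05 * t + 20 * np.sin(2 * np.pi * t / 24)  # synthetic demand
load = true + rng.normal(0, 2, t.size)
pred = fit_forecast(load[:-24], horizon=24)              # forecast last day
mape = np.mean(np.abs(pred - true[-24:]) / true[-24:]) * 100
```

Even this baseline reaches a low error on well-behaved synthetic load, which is a useful sanity check before crediting a more complex ML model with its reported accuracy.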
This paper reports the use of 25 polyclonal or monoclonal antibodies with the ABC immunohistochemical technique on 253 cell smears obtained by fine-needle aspiration. The results were as follows. 1. The immunohistochemical diagnoses comprised 136 metastatic cancers (K12+ EMA+ CEA+ LCA-), 92 lymphomas (LCA+ K12- EMA- CEA-), 4 mesenchymal tumors (Vimentin+), 3 melanomas (S-100+ NSE+), 15 reactive proliferations (κ+ λ+ CD4+ CD8+), and 3 unspecified. 2. The origins of 70 metastatic cancers were classified as 36 lung (HLC3-AB+), 4 gastrointestinal tract (MG7+), 8 thyroid (TGB+), 1 prostate (PSA+), 3 liver (AFP+), and 14 unknown. 3. The immunologic phenotypes of 87 lymphomas were classified as 66 B-cell, 4 T-cell, 3 histiocytic, 7 Hodgkin's disease, and 7 unclear. These results suggest that the immunohistochemical method may serve as a new means of diagnosing and differentiating epithelial and non-epithelial tumors, detecting the primary focus of metastatic cancer, differentiating between reactive proliferation and lymphoma, and specifying the immunologic phenotype of lymphoma in fine-needle aspiration cell smears.
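The marker panels in result 1 amount to a small decision table; the sketch below encodes them as a first-match rule lookup. This is a simplification for illustration — real immunophenotyping panels are larger and results are graded — and the marker set is copied from the abstract only to show the structure of the rules.

```python
# Simplified marker-panel rules from the abstract ('+' positive, '-' negative).
RULES = {
    "metastatic cancer": {"K12": "+", "EMA": "+", "CEA": "+", "LCA": "-"},
    "lymphoma":          {"LCA": "+", "K12": "-", "EMA": "-", "CEA": "-"},
    "mesenchymal tumor": {"Vimentin": "+"},
    "melanoma":          {"S-100": "+", "NSE": "+"},
}

def classify(smear):
    """Return the first diagnosis whose marker pattern matches the smear."""
    for diagnosis, pattern in RULES.items():
        if all(smear.get(m) == v for m, v in pattern.items()):
            return diagnosis
    return "unspecified"

diagnosis = classify({"K12": "+", "EMA": "+", "CEA": "+", "LCA": "-"})
# -> "metastatic cancer"
```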
Foraminifera are shell-bearing microorganisms commonly found in marine deposits on the seabed. They are important indicators in many analyses, being used in climate change research, marine environmental monitoring, and evolutionary studies, and they are also frequently used in the oil and gas industry. Although some research has focused on automating the classification of foraminifera images, few studies have addressed the uncertainty in these classifications. While foraminifera classification is not a safety-critical task, estimating uncertainty is crucial to avoid misclassifications that could overlook rare and ecologically significant species, which are informative indicators of the environment in which they lived. Uncertainty estimation in deep learning has gained significant attention, and many methods have been developed; however, evaluating the performance of these methods in practical settings remains a challenge. To create a benchmark for uncertainty estimation in the classification of foraminifera, we administered a multiple-choice questionnaire containing classification tasks to four senior geologists. By analyzing their responses, we generated human-derived uncertainty estimates for a test set of 260 images of foraminifera and sediment grains. These uncertainty estimates served as a baseline for comparison when training neural networks for classification. We then trained multiple deep neural networks, using a range of uncertainty quantification methods, to classify the images and state the uncertainty of the classifications. The results of the deep learning uncertainty quantification methods were analyzed and compared with the human benchmark, to see how the methods performed individually and how they aligned with the human estimates. Our results show that human-level performance can be achieved with deep learning and that test-time data augmentation and ensembling can help improve both uncertainty estimation and classification performance. They also show that human uncertainty estimates are helpful indicators for detecting classification errors and that deep learning-based uncertainty estimates can improve calibration and classification accuracy.
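Test-time augmentation plus ensembling reduces to averaging class probabilities over perturbed inputs and models, then reading uncertainty off the averaged distribution (here, predictive entropy). The toy "models" and the jitter augmentation below are stand-ins for trained networks and image augmentations, not the paper's setup.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy (in nats) of a categorical distribution."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def tta_ensemble_predict(models, augment, x, n_aug=8, seed=0):
    """Average class probabilities over ensemble members and test-time
    augmentations; return (predicted class, uncertainty)."""
    rng = np.random.default_rng(seed)
    probs = np.mean([m(augment(x, rng)) for m in models for _ in range(n_aug)],
                    axis=0)
    return int(np.argmax(probs)), predictive_entropy(probs)

def make_model(w):
    """Toy 'network': a linear map followed by softmax."""
    def model(x):
        z = w @ x
        e = np.exp(z - z.max())
        return e / e.sum()
    return model

models = [make_model(np.array([[2.0, 0.0], [0.0, 2.0]])),
          make_model(np.array([[1.5, 0.1], [0.1, 1.5]]))]
augment = lambda x, rng: x + rng.normal(0, 0.05, size=x.shape)  # jitter
pred, unc = tta_ensemble_predict(models, augment, np.array([1.0, 0.0]))
```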
Objective: To study the association between serum neuron-specific enolase (NSE) and the extent of brain damage and the outcome after acute traumatic brain injury (TBI). Methods: The release patterns of serum NSE in 78 patients after acute TBI were analyzed using an enzyme-linked immunosorbent assay. NSE levels were compared with the Glasgow coma scale, the category of brain injury, and the outcome 6 months after injury. Results: NSE values differed among patients with minor (12.96 μg/L±2.39 μg/L), moderate (23.44 μg/L±5.33 μg/L), and severe brain injury (42.68 μg/L±4.57 μg/L). After severe TBI, the concentration of NSE was 13.38 μg/L±4.01 μg/L in patients with epidural hematomas, 24.03 μg/L±2.85 μg/L in the brain contusion without surgical intervention group, 55.20 μg/L±6.35 μg/L in the brain contusion with surgical intervention group, and 83.85 μg/L±15.82 μg/L in the diffuse brain swelling group. There were close correlations between NSE values and the Glasgow coma scale (r=-0.608, P<0.01) and the extent of brain injury (r=0.75, P<0.01). Patients with a poor outcome had significantly higher initial and peak NSE values than those with a good outcome (66.40 μg/L±9.46 μg/L and 94.24 μg/L±13.75 μg/L vs 32.16 μg/L±4.21 μg/L and 34.08 μg/L±4.40 μg/L, respectively; P<0.01). Initial NSE values were negatively related to the outcome (r=-0.501, P<0.01). Most patients with poor outcomes had persisting or secondarily elevated NSE values. Conclusions: Serum NSE is a valuable neurobiochemical marker for assessing the severity of brain injury and predicting outcome.
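The correlations reported above (e.g. r = −0.608 between NSE and the Glasgow coma scale) are Pearson coefficients, which can be computed directly from paired measurements. The data below are synthetic illustrations with the same qualitative inverse relationship, not the study's patient data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# synthetic pairs: lower Glasgow coma scale, higher serum NSE (assumed)
gcs = np.array([14, 13, 12, 10, 9, 7, 6, 5, 4, 3])
nse = np.array([12, 15, 20, 24, 30, 40, 48, 55, 70, 85])
r = pearson_r(nse, gcs)
```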
Inclusions of non-bound amino acids, particularly methionine, lysine, and threonine, together with the "ideal protein" concept, have allowed nutritionists in recent decades to formulate broiler diets with reduced crude protein (CP) and increased nutrient density of notionally "essential" amino acids and energy content. However, chicken-meat production is projected to double between now and 2050, providing incentives to reduce dietary soybean meal inclusions further by tangibly reducing dietary CP and utilising a larger array of non-bound amino acids. Whilst relatively conservative decreases in dietary CP, in the order of 20 to 30 g/kg, do not negatively impact broiler performance, further decreases typically compromise broiler performance, with associated increases in carcass lipid deposition. These increases in carcass lipid deposition suggest changes in dietary energy balance, the mechanisms of which are still not fully understood but which discourage the acceptance of diets with reduced CP. Nevertheless, the groundwork has been laid to investigate both amino acid and non-amino acid limitations and to propose facilitative strategies for the adoption of tangible dietary CP reductions; consequently, these aspects are considered in detail in this review. Unsurprisingly, investigations into reduced dietary CP are epitomised by variability in broiler performance, due to the wide range of dietary specifications used and the many variables that should, or could, be considered in the formulation of experimental diets. Thus, a holistic approach encompassing the many factors limiting the adoption of tangibly reduced CP diets must be considered if such diets are to succeed in maintaining broiler performance without increasing carcass lipid deposition.
Background: The burden of human immunodeficiency virus (HIV) infection in people who use drugs (PWUD) is significant. We aimed to screen for HIV infection among PWUD and describe their retention in HIV care. We also screened for hepatitis C virus (HCV) infection among HIV-seropositive PWUD and described their linkage to care. Methods: We conducted a prospective study in 529 PWUD who visited the "Cañada Real Galiana" (Madrid, Spain). The study period was from June 1, 2017, to May 31, 2018. HIV diagnosis was performed with a rapid antibody screening test at the point of care (POC), and HCV diagnosis with immunoassay and PCR tests on dried blood spots (DBS) in a central laboratory. Positive PWUD were referred to the hospital. We used the Chi-square or Fisher's exact test, as appropriate, to compare rates between groups. Results: Thirty-five (6.6%) participants were positive for HIV antibodies, but 34 reported previous HIV diagnoses, and 27 (76%) had received prior antiretroviral therapy. Among participants with a positive HIV antibody test, we also found a higher prevalence of homelessness (P<0.001) and injection drug use (P<0.001), and more decades of drug use (P=0.002). All participants received HIV test results at the POC. Of the 35 HIV-positive participants, 28 (80%) were retained in HIV medical care at the end of the HIV screening study (2018), and only 22 (62.9%) at the end of 2020. Moreover, 12/35 (34.3%) were positive on the HCV RNA test. Of the latter, 10/12 (83.3%) were contacted to deliver the HCV test results (delivery time of 19 days), 5/12 (41.7%) had an appointment and attended the hospital and started HCV therapy, and only 4/12 (33.3%) cleared HCV. Conclusions: We found almost no newly HIV-infected PWUD, but their cascade of HIV care was low and remains a challenge in this at-risk population. The high frequency of active hepatitis C in HIV-infected PWUD reflects the need for HCV screening and for reinforcing the link to care.
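The rate comparisons in the Methods can be illustrated with a self-contained two-sided Fisher's exact test, which sums the probabilities of all 2×2 tables no more likely than the observed one. The implementation below enumerates the hypergeometric distribution directly; the example table reuses the retention counts from the abstract (28/35 in 2018 vs 22/35 in 2020) purely as an illustration, not as a reported test from the paper.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact p-value for the table [[a, b], [c, d]]:
    sum hypergeometric probabilities <= that of the observed table."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    def p_table(x):
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# retention in HIV care: 28/35 (2018) vs 22/35 (2020), illustrative use
p = fisher_exact_2x2(28, 7, 22, 13)
```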
Funding (cryptocurrency market clustering study): EIT Digital (Grant No. 825215) and European Cooperation in Science and Technology (COST Action 19130).
Funding (EMD modal identification study): National Natural Science Foundation (No. 19972016), which partly supported this work.
Funding (hyperspectral junk band recovery study): National Natural Science Foundation of China (Project 10871231).
Funding (enterprise GIS study): Australian Research Council SPIRT grant (C00107573).
Abstract: High Temperature Air Combustion (HTAC), based on the regenerative heat-recovery principle, has been adopted in developed countries in recent years. It offers many advantages, such as efficient recovery of waste heat, high-temperature preheating of combustion air, and low pollutant discharge, and it can be applied to various furnaces in the mechanical, petroleum, and chemical industries. Retrofitting traditional radiant-tube combustion systems with HTAC technology has therefore become important. The biggest difficulty encountered in the retrofit process is ensuring the stability of burner combustion and of the control system. Because the exhaust gas heat is absorbed by the regenerator, the exhaust can be discharged at a very low temperature, realizing maximum waste heat recovery. At the same time, HTAC improves temperature uniformity and heating intensity: the thermal efficiency of the device can exceed 80%, and, compared with traditional air preheating, 21.55% of the energy can be saved. Revamping a traditional radiant-tube combustion system is technically feasible, but because the work is performed on an old system, many problems arise; this article discusses the main problems encountered on site during the retrofit. To optimize temperature control and keep the exhaust gas temperature low, a digital combustion control system is necessary. The control scheme consists of a big loop and a small loop: the big loop controls the load distribution among the burners in each heating zone, and the small loop controls the burning time of each burner in its zone. Comparing the performance of a traditional radiant-tube heater with a regenerative radiant-tube heater shows that the regenerative type has clear advantages in fuel consumption. Based on the experience of replacing traditional radiant-tube heaters with regenerative ones, recommendations are given on the combustion control system, pilot burner, flame detection, and fault prevention for the retrofit of CAPL and CGL furnaces. The use of regenerative combustion technology is recommended for new annealing lines. Although the investment is about one third higher than that of a traditional combustion system, the energy-saving effect is obvious and operating costs decrease. Revamping can also be carried out step by step, heating zone by heating zone; although this takes longer, it is safer and disturbs production less. Regenerative combustion burner revamping has proven successful; however, retrofits of other furnaces, particularly continuous annealing furnaces with strict temperature-control requirements, require further exploration and research.
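The big-loop/small-loop control scheme described above can be sketched as follows. This is only an illustrative reading of the two-level structure: the proportional load-sharing rule, the cycle length, and the minimum burn time are all assumptions for demonstration, not values or logic from the retrofit described here.

```python
# Sketch of a two-level ("big loop"/"small loop") combustion control
# scheme. All function names, control laws, and numbers are
# illustrative assumptions.

def big_loop(zone_temps, zone_setpoints):
    """Big loop: distribute firing load among heating zones in
    proportion to each zone's temperature deficit (simple P-control)."""
    deficits = [max(sp - t, 0.0) for t, sp in zip(zone_temps, zone_setpoints)]
    total = sum(deficits)
    if total == 0.0:
        return [0.0] * len(deficits)       # all zones at setpoint
    return [d / total for d in deficits]   # load share per zone

def small_loop(load_share, cycle_s=60.0, min_burn_s=5.0):
    """Small loop: convert a zone's load share into per-cycle burner
    firing time, enforcing a minimum burn time for flame stability."""
    burn = load_share * cycle_s
    return 0.0 if burn < min_burn_s else burn

# Three zones below an 850 degC setpoint: the coldest zone gets the
# largest share of the firing time in each switching cycle.
shares = big_loop([780.0, 810.0, 830.0], [850.0, 850.0, 850.0])
times = [small_loop(s) for s in shares]
```

The key design point this sketch illustrates is the separation of concerns the abstract describes: the big loop only decides *how much* each zone should fire, while the small loop turns that share into actual burner timing within each regenerative switching cycle.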
Abstract: Energy, in its varied forms and applications, has become the main driver of modern society. However, recent changes in power demand and climate policy (decarbonization) have prompted a rethink of the current energy generation and distribution system. This has led to the exploration of other energy sources, of which renewable energy (such as thermal, solar, and wind energy) is fast becoming an integral part of most energy systems. However, these innovative and promising energy sources are highly unreliable at maintaining a constant peak power that matches demand. Energy storage systems have thus been highlighted as a solution for managing such imbalances and maintaining stability of supply. Energy storage technologies absorb and store energy and release it on demand. They include gravitational potential energy (pumped hydroelectric), chemical energy (batteries), kinetic energy (flywheels or compressed air), and energy stored in electric (capacitors) and magnetic fields. This paper provides a detailed and comprehensive overview of state-of-the-art energy storage technologies, their evolution, classification, and comparison, along with their various areas of application. Also highlighted is the plethora of power-electronic interface technologies that play a significant role in enabling optimum performance and utilization of energy storage systems across different application areas.
Abstract: This paper presents a comparative analysis of different parameters, such as enthalpy, moderator temperature, moderator density, flow velocity, pressure, and the fuel temperature profile, at the fuel pin cell level of a PWR. The pitch-to-fuel-pin-radius ratio is varied from 2.3 to 4. The methods and implementation strategy are such that the coupled neutronic and thermal-hydraulic analysis is executed in a fully one-dimensional (1D) manner. The thermal hydraulics are based on moderator/coolant mass and enthalpy equations, together with a one-group diffusion equation for the fuel pin. Modelling of the fuel pin cell and subchannel is executed in two steps. First, the governing equations are derived assuming that all parameters appearing in the equations are temperature independent. The fuel pin centerline temperature and radially averaged temperature equations are derived from Fourier's law of heat conduction. Finally, the diffusion coefficient, fission cross-section, and absorption cross-section are evaluated with respect to the fuel pin temperature. The outcome will be helpful for further neutronic and thermal analysis of PWRs. The thermal-hydraulic parameters vary by at most 30 percent from their lowest values.
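The Fourier-law fuel-pin temperature relations mentioned above can be illustrated numerically. For steady conduction in a cylindrical pin with uniform volumetric heat generation q''', the centerline-to-surface rise is q'''R²/(4k) and the radially averaged rise is q'''R²/(8k). The power density, pellet radius, and fuel conductivity below are assumed representative values, not data from this study.

```python
# Illustrative evaluation of the fuel-pin temperature relations from
# Fourier's law of heat conduction. Input values are assumptions.

def centerline_minus_surface(q_vol, radius, k_fuel):
    """Centerline-to-surface temperature rise for a cylindrical fuel
    pin with uniform volumetric heat generation: dT = q''' R^2 / (4 k)."""
    return q_vol * radius**2 / (4.0 * k_fuel)

def radially_averaged_rise(q_vol, radius, k_fuel):
    """Radially averaged rise above the surface for the parabolic
    profile: <T> - T_s = q''' R^2 / (8 k)."""
    return q_vol * radius**2 / (8.0 * k_fuel)

q = 3.0e8      # W/m^3, assumed volumetric heat generation
R = 4.1e-3     # m, assumed pellet radius
k = 3.0        # W/(m K), assumed UO2 thermal conductivity

dT_center = centerline_minus_surface(q, R, k)
dT_avg = radially_averaged_rise(q, R, k)
```

A useful sanity check on any implementation of these two equations is that the radially averaged rise is exactly half the centerline rise, a direct consequence of the parabolic temperature profile.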
Abstract: The world's energy industry is experiencing a significant transformation due to increased energy consumption, the rise in renewable energy usage, and the demand for sustainability. This review paper explores the transformative potential of Artificial Intelligence (AI) in improving energy infrastructure, specifically looking at how it can be used in managing smart grids, predicting maintenance needs, and integrating renewable energy sources. Machine learning (ML) and deep learning (DL) are crucial AI technologies that have become necessary for enhancing grid stability, reducing operational costs, and improving energy efficiency. AI-powered predictive maintenance has been shown to lower unexpected downtime by 40%, while AI-based demand forecasting has reached a prediction accuracy of 90%, allowing utilities to manage supply and demand efficiently. In addition, AI helps tackle the issues of fluctuating renewable energy by playing a key role in enhancing energy storage and distribution in nations such as Denmark and the US. Moreover, cryptographic frameworks such as Elliptic Curve Cryptography (ECC) and Post-Quantum Cryptography (PQC) offer robust security measures to protect AI-driven energy systems. ECC provides lightweight, efficient encryption ideal for IoT-enabled grids, while PQC schemes, such as the SIKE algorithm, are intended to ensure long-term resilience against quantum computing threats, safeguarding critical infrastructure. Nevertheless, obstacles such as limited data access, cybersecurity weaknesses, and financial limitations continue to hinder widespread AI implementation, especially in less developed regions. This review emphasizes the significance of adopting essential strategies such as smart grid development, public-private collaborations, strong regulatory frameworks, and standardized data-sharing protocols.
It is essential to have strong implementation and monitoring systems, improved cybersecurity measures, and ongoing investment in AI research in order to fully harness AI’s ability to revolutionize energy systems. By tackling these obstacles, AI has the potential to significantly impact the development of a more enduring, productive, and flexible worldwide energy system, hastening the shift towards a renewable-focused energy landscape.
Abstract: This paper reports on 25 kinds of polyclonal or monoclonal antibodies applied with the ABC immunohistochemical technique to 253 cell smears obtained by fine-needle aspiration. The results were as follows. 1. The immunohistochemical diagnoses were classified into 136 metastatic cancers (K12+ EMA+ CEA+ LCA-), 92 lymphomas (LCA+ K12- EMA- CEA-), 4 mesenchymal tumors (Vimentin+), 3 melanomas (S-100+ NSE+), 15 reactive proliferations (k+λ4+ CD+ CD8+), and 3 unspecified. 2. The origins of 70 metastatic cancers were classified into 36 lung (HLC3-AB+), 4 gastrointestinal tract (MG7+), 8 thyroid (TGB+), 1 prostate (PSA+), 3 liver (AFP+), and 14 unknown. 3. The immunologic phenotypes of 87 lymphomas were classified into 66 cases of B-cell, 4 T-cell, 3 histiocytic, 7 Hodgkin's disease, and 7 unclear. These results suggest that the immunohistochemical method may be used as a new approach for diagnosing and differentiating epithelial and non-epithelial tumors, detecting the primary focus of metastatic cancer, differentiating between reactive proliferation and lymphoma, and specifying the immunologic phenotype of lymphoma in fine-needle aspiration cell smears.
Funding: Funded by the Norwegian Research Council (IKTPLUSS-IKT og digital innovasjon, project no. 332901).
Abstract: Foraminifera are shell-bearing microorganisms that are commonly found in marine deposits on the seabed. They are important indicators in many analyses: they are used in climate change research, in monitoring marine environments, and in evolutionary studies, and they are also frequently used in the oil and gas industry. Although some research has focused on automating the classification of foraminifera images, little has addressed the uncertainty in these classifications. Although foraminifera classification is not a safety-critical task, estimating uncertainty is crucial to avoid misclassifications that could overlook rare and ecologically significant species that are informative indicators of the environment in which they lived. Uncertainty estimation in deep learning has gained significant attention, and many methods have been developed. However, evaluating the performance of these methods in practical settings remains a challenge. To create a benchmark for uncertainty estimation in the classification of foraminifera, we administered a multiple-choice questionnaire containing classification tasks to four senior geologists. By analyzing their responses, we generated human-derived uncertainty estimates for a test set of 260 images of foraminifera and sediment grains. These uncertainty estimates served as a baseline for comparison when training neural networks for classification. We then trained multiple deep neural networks, using a range of uncertainty quantification methods, to classify the images and to state the uncertainty of the classifications. The results of the deep learning uncertainty quantification methods were then analyzed and compared with the human benchmark, to see how the methods performed individually and how well they aligned with the humans. Our results show that human-level performance can be achieved with deep learning, and that test-time data augmentation and ensembling can help improve both uncertainty estimation and classification performance. Our results also show that human uncertainty estimates are helpful indicators for detecting classification errors and that deep learning-based uncertainty estimates can improve calibration and classification accuracy.
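As a minimal sketch of the ensembling idea discussed above, class probabilities from several ensemble members (or, equivalently, from test-time augmentations of one image) can be averaged, with the predictive entropy of the mean distribution serving as an uncertainty score. The probability vectors here are synthetic, not outputs of the networks in this study.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a class-probability vector (higher = more uncertain)."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

# Softmax outputs of 3 hypothetical ensemble members for one image
# over 3 classes (rows sum to 1).
member_probs = np.array([
    [0.90, 0.05, 0.05],
    [0.70, 0.20, 0.10],
    [0.85, 0.10, 0.05],
])

mean_probs = member_probs.mean(axis=0)   # ensemble prediction
prediction = int(mean_probs.argmax())    # predicted class index
uncertainty = predictive_entropy(mean_probs)
```

Images whose entropy score exceeds some threshold can then be flagged for expert review, which is one simple way uncertainty estimates help detect likely classification errors.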
Abstract: Objective: To study the association between serum neuron-specific enolase (NSE) and the extent of brain damage and the outcome after acute traumatic brain injury (TBI). Methods: The release patterns of serum NSE in 78 patients after acute TBI were analyzed using an enzyme-linked immunosorbent assay (ELISA). The NSE levels were compared with the Glasgow Coma Scale score, the category of brain injury, and the outcome 6 months after injury. Results: NSE values differed between patients with minor (12.96 μg/L±2.39 μg/L), moderate (23.44 μg/L±5.33 μg/L), and severe brain injury (42.68 μg/L±4.57 μg/L). After severe TBI, the concentration of NSE was 13.38 μg/L±4.01 μg/L in patients with epidural hematomas, 24.03 μg/L±2.85 μg/L in the brain contusion without surgical intervention group, 55.20 μg/L±6.35 μg/L in the brain contusion with surgical intervention group, and 83.85 μg/L±15.82 μg/L in the diffuse brain swelling group. There were close correlations between NSE values and the Glasgow Coma Scale score (r=-0.608, P<0.01) and the extent of brain injury (r=0.75, P<0.01). Patients with a poor outcome had significantly higher initial and peak NSE values than those with a good outcome (66.40 μg/L±9.46 μg/L and 94.24 μg/L±13.75 μg/L vs 32.16 μg/L±4.21 μg/L and 34.08 μg/L±4.40 μg/L, respectively, P<0.01). Initial NSE values were negatively related to outcome (r=-0.501, P<0.01). Most patients with poor outcomes had persisting or secondarily elevated NSE values. Conclusions: Serum NSE is a valuable neurobiochemical marker for assessing the severity of brain injury and predicting outcome.
Abstract: Inclusions of non-bound amino acids, particularly methionine, lysine, and threonine, together with the "ideal protein" concept, have allowed nutritionists in recent decades to formulate broiler diets with reduced crude protein (CP) and increased nutrient density of notionally "essential" amino acids and energy content. However, chicken-meat production has been projected to double between now and 2050, providing incentives to reduce dietary soybean meal inclusions further by tangibly reducing dietary CP and utilising a larger array of non-bound amino acids. Whilst relatively conservative decreases in dietary CP, in the order of 20 to 30 g/kg, do not negatively impact broiler performance, further decreases in CP typically compromise broiler performance, with associated increases in carcass lipid deposition. Increases in carcass lipid deposition suggest changes occur in dietary energy balance, the mechanisms of which are still not fully understood but which discourage the acceptance of diets with reduced CP. Nevertheless, the groundwork has been laid to investigate both amino acid and non-amino acid limitations and to propose facilitative strategies for the adoption of tangible dietary CP reductions; consequently, these aspects are considered in detail in this review. Unsurprisingly, investigations into reduced dietary CP are epitomised by variability in broiler performance, owing to the wide range of dietary specifications used and the many variables that should, or could, be considered in the formulation of experimental diets. Thus, a holistic approach encompassing the many factors limiting the adoption of tangibly reduced-CP diets must be considered if such diets are to succeed in maintaining broiler performance without increasing carcass lipid deposition.
Funding: This work was funded by a research grant from Merck Sharp & Dohme (Grant Number MISP IIS #54846) and by the Instituto de Salud Carlos III (ISCIII Grant Numbers PI20CIII/00004 and RD16CIII/0002/0002 to SR).
Abstract: Background: The burden of human immunodeficiency virus (HIV) infection in people who use drugs (PWUD) is significant. We aimed to screen for HIV infection among PWUD and describe their retention in HIV care. We also screened for hepatitis C virus (HCV) infection among HIV-seropositive PWUD and described their linkage to care. Methods: We conducted a prospective study in 529 PWUD who visited the "Cañada Real Galiana" (Madrid, Spain). The study period was from June 1, 2017, to May 31, 2018. HIV diagnosis was performed with a rapid antibody screening test at the point of care (POC), and HCV diagnosis with immunoassay and PCR tests on dried blood spots (DBS) in a central laboratory. Positive PWUD were referred to the hospital. We used the Chi-square or Fisher's exact tests, as appropriate, to compare rates between groups. Results: Thirty-five (6.6%) participants tested positive for HIV antibodies, but 34 reported a previous HIV diagnosis, and 27 (76%) had received prior antiretroviral therapy. Among participants with a positive HIV antibody test, we also found a higher prevalence of homelessness (P<0.001) and injection drug use (PWID) (P<0.001), and more decades of drug use (P=0.002). All participants received their HIV test results at the POC. Of the 35 HIV-positive participants, 28 (80%) were retained in HIV medical care at the end of the HIV screening study (2018), and only 22 (62.9%) at the end of 2020. Moreover, 12/35 (34.3%) were positive on the HCV RNA test. Of the latter, 10/12 (83.3%) were contacted to deliver the HCV test results (delivery time of 19 days), 5/12 (41.7%) had an appointment and were attended at the hospital and started HCV therapy, and only 4/12 (33.3%) cleared HCV. Conclusions: We found almost no newly HIV-infected PWUD, but their cascade of HIV care was poor and remains a challenge in this at-risk population. The high frequency of active hepatitis C in HIV-infected PWUD reflects the need for HCV screening and for reinforcing the link to care.
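The 2x2 group comparisons reported above (for example, homelessness versus HIV serostatus) rest on the Pearson chi-square statistic and, when counts are small, on the hypergeometric probabilities underlying Fisher's exact test. A minimal sketch with made-up counts (not data from this study):

```python
from math import comb

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]:
    n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def fisher_table_prob(a, b, c, d):
    """Hypergeometric probability of observing exactly this 2x2 table
    given its margins (the building block of Fisher's exact test)."""
    n = a + b + c + d
    return comb(a + b, a) * comb(c + d, c) / comb(n, a + c)

# Hypothetical counts: rows = HIV antibody positive/negative,
# columns = homeless yes/no (illustrative only).
stat = chi_square_2x2(20, 15, 100, 394)
p_table = fisher_table_prob(20, 15, 100, 394)
```

With 1 degree of freedom, a chi-square statistic above 3.84 corresponds to P < 0.05; Fisher's exact P-value is obtained by summing `fisher_table_prob` over all tables with the same margins that are at least as extreme as the observed one.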