Journal Articles
14 articles found
Model Identification and Control of Electromagnetic Actuation in Continuous Casting Process With Improved Quality
1
Authors: Isabela Birs, Cristina Muresan, Dana Copot, Clara Ionescu. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, Issue 1, pp. 203-215 (13 pages)
This paper presents an original theoretical framework for modeling steel material properties in a continuous casting line process. Specific properties arising from non-Newtonian dynamics are used to indicate the natural convergence of distributed parameter systems to fractional order transfer function models. Data-driven identification from a real continuous casting line is used to identify a model of the electromagnetic actuator that controls the flow velocity of liquid steel. To ensure product specifications, a fractional order controller is designed and validated on the system. A projection of the closed-loop performance onto quality assessment at the end of the production line is also given.
Keywords: electromagnetic actuator, fractional order control, fractional order system model, non-Newtonian material
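To make the notion of a fractional order model concrete, here is a minimal numerical sketch (not code from the paper; the function name and test signal are illustrative) of the Grünwald-Letnikov approximation that underlies most discrete-time fractional order transfer function implementations:

```python
import numpy as np

def gl_fractional_derivative(x, alpha, dt):
    """Grunwald-Letnikov approximation of the order-alpha derivative
    of a sampled signal x with sampling interval dt."""
    n = len(x)
    # Binomial weights w_k = (-1)^k * C(alpha, k), built recursively
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    # D^alpha x(t_j) ~ dt^(-alpha) * sum_k w_k * x(t_{j-k})
    return np.array([np.dot(w[: j + 1], x[j::-1]) for j in range(n)]) / dt**alpha

t = np.linspace(0, 1, 201)
print(gl_fractional_derivative(t, 0.5, t[1] - t[0])[-5:])  # half-derivative of f(t) = t
```

For f(t) = t the half-derivative is 2*sqrt(t/pi), roughly 1.128 at t = 1, which the printed tail values approach.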
Automated Exudates Detection in Retinal Fundus Image Using Morphological Operator and Entropy Maximization Thresholding
2
Authors: G. H. Kom, B. C. Wouantsa Tindo, J. R. Mboupda Pone, A. B. Tiedeu. Journal of Biomedical Science and Engineering, 2019, Issue 3, pp. 212-224 (13 pages)
Blindness, considered a degrading and disabling disease, is the final stage that occurs when a certain threshold of visual acuity is crossed. It results from vision deficiencies, pathologic states caused by many ocular diseases. Among them, diabetic retinopathy is nowadays a chronic disease that affects most diabetic patients. Early detection through automatic screening programs considerably reduces the expansion of the disease, and exudates are one of its earliest signs. This paper presents an automated method for exudates detection in digital retinal fundus images. The first step consists of image enhancement, based on histogram expansion and median filtering. The difference between the filtered image and its inverse reduces noise and removes the background while preserving features and patterns related to the exudates. The second step removes blood vessels using morphological operators. In the last step, we compute the result image with an algorithm based on Entropy Maximization Thresholding to obtain two segmented regions (optic disc and exudates) which were highlighted in the second step. Finally, according to size criteria, we eliminate the other regions to obtain the regions of interest related to exudates. Evaluations were done with the DIARETDB1 retinal fundus image database. DIARETDB1 gathers high-quality medical images which have been verified by experts. It consists of 89 colour fundus images, of which 84 contain at least mild non-proliferative signs of diabetic retinopathy. This tool provides a unified framework for benchmarking methods, but also points out clear deficiencies in current method-development practice. Compared to other recent methods available in the literature, the proposed algorithm accomplished better results in terms of sensitivity (94.27%) and specificity (97.63%).
Keywords: diabetic retinopathy, retinal fundus image, exudates, entropy maximization thresholding
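Since the pipeline hinges on Entropy Maximization Thresholding, a compact sketch of Kapur's entropy-based threshold selection may help. This is the textbook formulation, not the authors' exact implementation, and the stand-in image is random:

```python
import numpy as np

def kapur_threshold(gray):
    """Entropy-maximizing (Kapur) threshold for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0 = p[:t][p[:t] > 0] / p0                # background class distribution
        q1 = p[t:][p[t:] > 0] / p1                # foreground class distribution
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()  # total entropy
        if h > best_h:
            best_t, best_h = t, h
    return best_t

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
mask = img >= kapur_threshold(img)  # bright regions, e.g. optic disc and exudates
```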
Federated reinforcement learning for sustainable and cost-efficient energy management
3
Authors: J. Sievers, P. Henrich, M. Beichter, R. Mikut, V. Hagenmeyer, T. Blank, F. Simon. Energy and AI, 2025, Issue 3, pp. 88-104 (17 pages)
Integrating renewable energy sources into the electricity grid introduces volatility and complexity, requiring advanced energy management systems. By optimizing the charging and discharging behavior of a building's battery system, reinforcement learning effectively provides flexibility, managing volatile energy demand, dynamic pricing, and photovoltaic output to maximize rewards. However, the effectiveness of reinforcement learning is often hindered by limited access to training data due to privacy concerns, unstable training processes, and challenges in generalizing to different household conditions. In this study, we propose a novel federated framework for reinforcement learning in energy management systems. By enabling local model training on private data and aggregating only model parameters on a global server, this approach not only preserves privacy but also improves model generalization and robustness under varying household conditions, while decreasing electricity costs and emissions per building. For a comprehensive benchmark, we compare standard reinforcement learning with our federated approach and include mixed integer programming and rule-based systems. Among the reinforcement learning methods, deep deterministic policy gradient performed best on the Ausgrid dataset, with federated learning reducing costs by 5.01% and emissions by 4.60%. Federated learning also improved zero-shot performance for unseen buildings, reducing costs by 5.11% and emissions by 5.55%. Thus, our findings highlight the potential of federated reinforcement learning to enhance energy management systems by balancing privacy, sustainability, and efficiency.
Keywords: reinforcement learning, federated learning, energy management, smart grid
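The aggregation step described here ("aggregating only model parameters on a global server") is, in its canonical form, FedAvg. A minimal sketch, assuming numpy parameter dicts and hypothetical client sizes (the paper's actual aggregation may differ):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate per-client parameter dicts into a global model (FedAvg):
    a weighted average, with weights proportional to local dataset size."""
    total = sum(client_sizes)
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in client_weights[0]
    }

# Three hypothetical household agents, each with one local policy layer.
clients = [{"actor.w": np.random.randn(4, 2)} for _ in range(3)]
global_model = fedavg(clients, client_sizes=[120, 80, 200])
print(global_model["actor.w"].shape)
```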
Individual Software Expertise Formalization and Assessment from Project Management Tool Databases
4
Authors: Traian-Radu Plosca, Alexandru-Mihai Pescaru, Bianca-Valeria Rus, Daniel-Ioan Curiac. Computers, Materials & Continua, 2026, Issue 1, pp. 389-411 (23 pages)
Objective expertise evaluation of individuals, as a prerequisite stage for team formation, has been a long-term desideratum in large software development companies. With the rapid advancements in machine learning methods, and based on reliable existing data stored in project management tools' datasets, automating this evaluation process becomes a natural step forward. In this context, our approach focuses on quantifying software developer expertise by using metadata from task-tracking systems. For this, we mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge in the software industry. Afterward, we automatically classify the zones of expertise associated with each task a developer has worked on, using Bidirectional Encoder Representations from Transformers (BERT)-like transformers to handle the unique characteristics of project tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results.
Keywords: expertise formalization, transformer-based models, natural language processing, augmented data, project management tool, skill classification
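As an illustration of mapping task descriptions onto zones of expertise, the sketch below uses an off-the-shelf zero-shot classifier. The paper instead fine-tunes BERT-like models on project-tool metadata, so the model name, example task, and candidate labels here are placeholders:

```python
from transformers import pipeline

# Zero-shot stand-in for a fine-tuned BERT-like expertise-zone classifier.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

task = "Fix race condition in the Kafka consumer group rebalancing logic"
zones = ["Java backend", "frontend", "databases", "distributed messaging", "DevOps"]
result = classifier(task, candidate_labels=zones)
print(result["labels"][0], round(result["scores"][0], 3))  # top zone and its score
```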
Vessels Segmentation in Angiograms Using Convolutional Neural Network: A Deep Learning Based Approach
5
Authors: Sanjiban Sekhar Roy, Ching-Hsien Hsu, Akash Samaran, Ranjan Goyal, Arindam Pande, Valentina E. Balas. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 7, pp. 241-255 (15 pages)
Coronary artery disease (CAD) has become a significant cause of heart attack, especially among those 40 years old or younger. There is a need to develop new technologies and methods to deal with this disease. Many researchers have proposed image processing-based solutions for CAD diagnosis, but achieving highly accurate results for angiogram segmentation is still a challenge. Several different types of angiograms are adopted for CAD diagnosis. This paper proposes an approach for image segmentation using Convolutional Neural Networks (CNN) for diagnosing coronary artery disease. We collected 2D X-ray images from the hospital and applied the proposed model to them. Image augmentation was performed, as it is the most significant task required to increase the dataset's size. The images were also enhanced using noise removal techniques before being fed to the CNN model for segmentation, to achieve high accuracy. Different settings of the network architecture achieved different accuracies, the highest being 97.61%. Compared with other models, the proposed method proves superior, achieving state-of-the-art results.
Keywords: angiogram, convolutional neural network, coronary artery disease, diagnosis of CAD, image segmentation
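For readers unfamiliar with segmentation CNNs, a toy encoder-decoder in PyTorch shows the input/output contract (per-pixel vessel probabilities). The paper's actual architecture, augmentation, and training are not specified in the abstract, so everything below is illustrative:

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder producing a per-pixel vessel probability map."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # halve spatial resolution
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                     # one logit per pixel
        )
    def forward(self, x):
        return torch.sigmoid(self.dec(self.enc(x)))

x = torch.randn(1, 1, 128, 128)   # one grayscale angiogram frame (stand-in)
mask = TinySegNet()(x)            # shape (1, 1, 128, 128), values in (0, 1)
```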
FPGA-Based Real-Time Simulation of Renewable Energy Source Power Converters
6
Authors: Tamás Kökényesi, István Varjasi. Journal of Energy and Power Engineering, 2013, Issue 1, pp. 168-177 (10 pages)
Developing the control of modern power converters is a very expensive and time-consuming task, and time to market can become unacceptably long. FPGA-based real-time simulation of a power stage with analog measured signals can significantly reduce the cost and time of testing a product. This approach is known as HIL (hardware-in-the-loop) testing. A general power converter consists of two main parts: a power level (main circuit) and a digital controller unit, which is usually realized with some kind of DSP. Testing the controller hardware and software is quite problematic: live tests with a completely assembled converter can be dangerous and expensive. A low-power model of the main circuit can be built under laboratory conditions, but its parameters (e.g., time constants and relative losses) will differ from those of the original system. The solution is HIL simulation of the main circuit. With this method, the simulator can be completely transparent to the controller unit, unlike other computer-based simulation methods. The subject of this paper is the development of such a real-time simulator using an FPGA. The modeled circuit is a three-phase inverter, which is widely used in power converters for renewable energy sources.
Keywords: HIL, real-time simulation, FPGA, three-phase inverter, solar converters
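The core of an FPGA-based HIL simulator is a fixed-step state update executed every clock tick. Below is a minimal Python sketch of one such update for a three-phase RL load, with illustrative parameter values; the paper's model runs in FPGA logic, not Python:

```python
import numpy as np

# Illustrative plant parameters: resistance, inductance, step size, DC-link voltage.
R, L, DT, VDC = 0.5, 2e-3, 1e-6, 400.0   # ohm, henry, seconds, volts

def step(i_abc, gates):
    """Forward-Euler update of phase currents for one inverter switching state."""
    v_leg = VDC * np.asarray(gates, dtype=float)  # pole voltages from gate signals
    v_phase = v_leg - v_leg.mean()                # remove common mode (isolated neutral)
    return i_abc + DT * (v_phase - R * i_abc) / L

i = np.zeros(3)
for _ in range(1000):            # 1 ms of simulated time at a fixed 1 us step
    i = step(i, gates=(1, 0, 0))
print(i)
```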
Decision-focused fine-tuning of time series foundation models for dispatchable feeder optimization
7
Authors: Maximilian Beichter, Nils Friederich, Janik Pinter, Dorina Werling, Kaleb Phipps, Sebastian Beichter, Oliver Neumann, Ralf Mikut, Veit Hagenmeyer, Benedikt Heidrich. Energy and AI, 2025, Issue 3, pp. 466-479 (14 pages)
Time series foundation models provide a universal solution for generating forecasts to support optimization problems in energy systems. Those foundation models are typically trained in a prediction-focused manner to maximize forecast quality. In contrast, decision-focused learning directly improves the resulting value of the forecast in downstream optimization rather than merely maximizing forecasting quality. The practical integration of forecast values into forecasting models is challenging, particularly when addressing complex applications with diverse instances, such as buildings. This becomes even more complicated when instances possess specific characteristics that require instance-specific, tailored predictions to increase the forecast value. To tackle this challenge, we use decision-focused fine-tuning within time series foundation models to offer a scalable and efficient solution for decision-focused learning applied to the dispatchable feeder optimization problem. To obtain more robust predictions for scarce building data, we use Moirai as a state-of-the-art foundation model, which offers robust and generalized results with few-shot parameter-efficient fine-tuning. Comparing the decision-focused fine-tuned Moirai with a state-of-the-art classical prediction-focused fine-tuned Moirai, we observe an improvement of 9.45% in Average Daily Total Costs.
Keywords: deep learning, decision-focused learning, optimization, dispatchable feeder optimization, time series foundation models, parameter-efficient fine-tuning
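The idea of decision-focused learning, training the forecaster on the cost its forecast induces downstream rather than on forecast error, can be shown in a toy differentiable form. Everything below (network, prices, dispatch rule, penalty) is a hypothetical stand-in, not the paper's dispatchable feeder formulation or Moirai:

```python
import torch

torch.manual_seed(0)
net = torch.nn.Linear(24, 24)              # maps features to a 24-hour load forecast
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
price = torch.rand(24)                     # hypothetical day-ahead prices

for _ in range(200):
    feats = torch.randn(24)
    true_load = feats.detach() * 0.5 + 1.0     # synthetic ground truth
    forecast = net(feats)
    purchase = torch.relu(forecast)            # naive dispatch derived from forecast
    shortfall = torch.relu(true_load - purchase)
    # Loss is the downstream cost of the decision, not the forecast error:
    cost = (price * purchase).sum() + 10.0 * shortfall.sum()
    opt.zero_grad()
    cost.backward()
    opt.step()
```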
Understanding electricity prices beyond the merit order principle using explainable AI (Cited: 1)
8
Authors: Julius Trebbien, Leonardo Rydin Gorjão, Aaron Praktiknjo, Benjamin Schäfer, Dirk Witthaut. Energy and AI, 2023, Issue 3, pp. 149-159 (11 pages)
Electricity prices in liberalized markets are determined by the supply and demand for electric power, which are in turn driven by various external influences that vary strongly in time. In perfect competition, the merit order principle describes how dispatchable power plants enter the market in the order of their marginal costs to meet the residual load, i.e. the difference between load and renewable generation. Various market models are based on this principle when attempting to predict electricity prices, yet the principle is fraught with assumptions and simplifications and is thus limited in accurately predicting prices. In this article, we present an explainable machine learning model for electricity prices on the German day-ahead market which forgoes the aforementioned assumptions of the merit order principle. Our model is designed for an ex-post analysis of prices and builds on various external features. Using SHapley Additive exPlanation (SHAP) values, we disentangle the roles of the different features and quantify their importance from empirical data, thereby circumventing the limitations inherent to the merit order principle. We show that load, wind and solar generation are the central external features driving prices, as expected, wherein wind generation affects prices more than solar generation. Fuel prices also strongly affect prices, and do so in a nontrivial manner. Moreover, large generation ramps are correlated with high prices due to the limited flexibility of nuclear and lignite plants. Overall, we offer a model that describes the influence of the main drivers of electricity prices in Germany, taking us a step beyond the limited merit order principle in explaining the drivers of electricity prices and their relation to each other.
Keywords: electricity prices, merit order principle, explainable artificial intelligence, machine learning, fuel prices, energy market
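A minimal sketch of the SHAP workflow on synthetic stand-ins for the paper's features (load, wind, solar, fuel price); the paper's actual model and data are not reproduced here:

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic ex-post setting: four drivers with known (made-up) effects on price.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # columns: load, wind, solar, gas price
y = 50 + 30 * X[:, 0] - 12 * X[:, 1] - 6 * X[:, 2] + 20 * X[:, 3] + rng.normal(size=500)

model = GradientBoostingRegressor().fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)
# Mean absolute SHAP value per feature = global importance ranking.
print(np.abs(shap_values).mean(axis=0))
```

On this synthetic data the ranking recovers the planted effect sizes (load strongest, then fuel price, wind, solar), mirroring the kind of conclusion the paper draws from empirical data.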
Data analytics in the electricity sector – A quantitative and qualitative literature review
9
Authors: Frederik vom Scheidt, Hana Medinova, Nicole Ludwig, Bent Richter, Philipp Staudt, Christof Weinhardt. Energy and AI, 2020, Issue 1, pp. 145-167 (23 pages)
The rapid transformation of the electricity sector increases both the opportunities and the need for Data Analytics. In recent years, various new methods and fields of application have been emerging. As research is growing and becoming more diverse and specialized, it is essential to integrate and structure the fragmented body of scientific work. We therefore conduct a systematic review of studies concerned with developing and applying Data Analytics methods in the context of the electricity value chain. First, we provide a quantitative high-level overview of the status quo of Data Analytics research, and show historical literature growth, leading countries in the field and the most intensive international collaborations. Then, we qualitatively review over 200 high-impact studies to present an in-depth analysis of the most prominent applications of Data Analytics in each of the electricity sector's areas: generation, trading, transmission, distribution, and consumption. For each area, we review the state-of-the-art Data Analytics applications and methods. In addition, we discuss used data sets, feature selection methods, benchmark methods, evaluation metrics, and model complexity and run time. Summarizing the findings from the different areas, we identify best practices and what researchers in one area can learn from other areas. Finally, we highlight potential for future research.
Keywords: data analytics, electricity, machine learning, generation, price, transmission, distribution, consumption
Uncertainty-aware particle segmentation for electron microscopy at varied length scales
10
Authors: Luca Rettenberger, Nathan J. Szymanski, Yan Zeng, Jan Schuetzke, Shilong Wang, Gerbrand Ceder, Markus Reischl. npj Computational Materials (CSCD), 2024, Issue 1, pp. 1961-1969 (9 pages)
Electron microscopy is indispensable for examining the morphology and composition of solid materials at the sub-micron scale. To study the powder samples that are widely used in materials development, scanning electron microscopes (SEMs) are increasingly used at the laboratory scale to generate large datasets with hundreds of images. Parsing these images to identify distinct particles and determine their morphology requires careful analysis, and automating this process remains challenging. In this work, we enhance the Mask R-CNN architecture to develop a method for automated segmentation of particles in SEM images. We address several challenges inherent to measurements, such as image blur and particle agglomeration. Moreover, our method accounts for prediction uncertainty when such issues prevent accurate segmentation of a particle. Recognizing that disparate length scales are often present in large datasets, we use this framework to create two models that are separately trained to handle images obtained at low or high magnification. Testing these models on a variety of inorganic samples, our approach to particle segmentation surpasses an established automated segmentation method and yields results comparable to the predictions of three domain experts while requiring a fraction of the time. These findings highlight the potential of deep learning in advancing autonomous workflows for materials characterization.
Keywords: particle, scales, hundreds
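One simple way to make segmentation uncertainty-aware is to gate detections on their confidence scores. The sketch below applies this idea with an off-the-shelf torchvision Mask R-CNN on a random stand-in image; the paper's enhanced architecture and calibrated uncertainty estimates go well beyond this:

```python
import torch
import torchvision

# Pretrained Mask R-CNN; detections below the threshold are flagged as
# uncertain instead of being trusted for downstream morphology analysis.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 512, 512)   # stand-in for one SEM frame, values in [0, 1]
with torch.no_grad():
    out = model([image])[0]       # dict with 'boxes', 'scores', 'masks', ...

confident = out["scores"] >= 0.7  # illustrative confidence gate
print(f"{int(confident.sum())} confident / {len(out['scores'])} total detections")
```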
Hybrid multi-chip assembly of optical communication engines by in situ 3D nanolithography (Cited: 12)
11
Authors: Matthias Blaicher, Muhammad Rodlin Billah, Juned Kemal, Tobias Hoose, Pablo Marin-Palomo, Andreas Hofmann, Yasar Kutuvantavida, Clemens Kieninger, Philipp-Immanuel Dietrich, Matthias Lauermann, Stefan Wolf, Ute Troppenz, Martin Moehrle, Florian Merget, Sebastian Skacel, Jeremy Witzens, Sebastian Randel, Wolfgang Freude, Christian Koos. Light: Science & Applications (SCIE, EI, CAS, CSCD), 2020, Issue 1, pp. 1340-1350 (11 pages)
Three-dimensional (3D) nano-printing of freeform optical waveguides, also referred to as photonic wire bonding, allows for efficient coupling between photonic chips and can greatly simplify optical system assembly. As a key advantage, the shape and the trajectory of photonic wire bonds can be adapted to the mode-field profiles and the positions of the chips, thereby offering an attractive alternative to conventional optical assembly techniques that rely on technically complex and costly high-precision alignment. However, while the fundamental advantages of the photonic wire bonding concept have been shown in proof-of-concept experiments, it has so far been unclear whether the technique can also be leveraged for practically relevant use cases with stringent reproducibility and reliability requirements. In this paper, we demonstrate optical communication engines that rely on photonic wire bonding for connecting arrays of silicon photonic modulators to InP lasers and single-mode fibres. In a first experiment, we show an eight-channel transmitter offering an aggregate line rate of 448 Gbit/s by low-complexity intensity modulation. A second experiment is dedicated to a four-channel coherent transmitter operating at a net data rate of 732.7 Gbit/s, a record for coherent silicon photonic transmitters with co-packaged lasers. Using dedicated test chips, we further demonstrate automated mass production of photonic wire bonds with insertion losses of (0.7 ± 0.15) dB, and we show their resilience in environmental-stability tests and at high optical power. These results might form the basis for simplified assembly of advanced photonic multi-chip systems that combine the distinct advantages of different integration platforms.
Keywords: communication, lithography, bonding
Biophotonic sensors with integrated Si_(3)N_(4)-organic hybrid (SiNOH) lasers for point-of-care diagnostics (Cited: 9)
12
Authors: Daria Kohler, Gregor Schindler, Lothar Hahn, Johannes Milvich, Andreas Hofmann, Kerstin Lange, Wolfgang Freude, Christian Koos. Light: Science & Applications (SCIE, EI, CAS, CSCD), 2021, Issue 4, pp. 648-659 (12 pages)
Early and efficient disease diagnosis with low-cost point-of-care devices is gaining importance for personalized medicine and public health protection. Within this context, waveguide (WG)-based optical biosensors on the silicon nitride (Si_(3)N_(4)) platform represent a particularly promising option, offering highly sensitive detection of indicative biomarkers in multiplexed sensor arrays operated by light in the visible-wavelength range. However, while passive Si_(3)N_(4)-based photonic circuits lend themselves to highly scalable mass production, the integration of low-cost light sources remains a challenge. In this paper, we demonstrate optical biosensors that combine Si_(3)N_(4) sensor circuits with hybrid on-chip organic lasers. These Si_(3)N_(4)-organic hybrid (SiNOH) lasers rely on a dye-doped cladding material that is deposited on top of a passive WG and optically pumped by an external light source. Fabrication of the devices is simple: the underlying Si_(3)N_(4) WGs are structured in a single lithography step, and the organic gain medium is subsequently applied by dispensing, spin-coating, or ink-jet printing processes. Highly parallel read-out of the optical sensor signals is accomplished with a simple camera. In our proof-of-concept experiment, we demonstrate the viability of the approach by detecting different concentrations of fibrinogen in phosphate-buffered saline solutions with a sensor-length-related sensitivity of S/L = 0.16 rad nM^(-1) mm^(-1). To our knowledge, this is the first demonstration of an integrated optical circuit driven by a co-integrated low-cost organic light source. We expect that the versatility of the device concept, the simple operation principle, and the compatibility with cost-efficient mass production will make the concept a highly attractive option for applications in biophotonics and point-of-care diagnostics.
Keywords: passive, waveguide, pumped
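For intuition about the reported sensitivity, a short worked example with illustrative numbers (the concentration and sensor length below are chosen arbitrarily, not taken from the paper): the phase shift accumulated over a sensor of length L at analyte concentration c is

```latex
\Delta\varphi = \left(\frac{S}{L}\right) c\, L
  = 0.16\,\frac{\mathrm{rad}}{\mathrm{nM\cdot mm}} \times 10\,\mathrm{nM} \times 2\,\mathrm{mm}
  = 3.2\,\mathrm{rad},
```

i.e. a 10 nM fibrinogen concentration read out over a 2 mm sensor would produce a phase shift of about 3.2 rad, comfortably within camera-based interferometric read-out.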
Test-driven verification/validation of model transformations
13
Authors: László Lengyel, Hassan Charaf. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2015, Issue 2, pp. 85-97 (13 pages)
Why is it important to verify/validate model transformations? The motivation is to improve the quality of the transformations, and therefore the quality of the generated software artifacts. Verified/validated model transformations make it possible to ensure certain properties of the generated software artifacts. In this way, verification/validation methods can guarantee that requirements stated by the actual domain hold for the generated/modified/optimized software products. For example, a verified/validated model transformation can ensure the preservation of certain properties during the model-to-model transformation. This paper emphasizes the necessity of methods that make model transformations verified/validated, discusses the different scenarios of model transformation verification and validation, and introduces the principles of a novel test-driven method for verifying/validating model transformations. We provide a solution that makes it possible to automatically generate test input models for model transformations. Furthermore, we collect and discuss the open issues in the field of verification/validation of model transformations.
Keywords: graph rewriting based model transformations, verification/validation, test-driven verification
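To give a flavor of automatic test-input-model generation, here is a sketch that treats models as randomly typed graphs. The node types and the generator are hypothetical; the paper's approach is driven by the metamodel and the transformation under test rather than pure randomness:

```python
import random
import networkx as nx

def random_test_model(n_nodes=8, p_edge=0.3,
                      node_types=("Class", "Attribute", "Reference")):
    """Generate one random typed-graph input model for transformation testing."""
    g = nx.gnp_random_graph(n_nodes, p_edge, directed=True)
    for v in g.nodes:
        g.nodes[v]["type"] = random.choice(node_types)  # assign a metamodel type
    return g

suite = [random_test_model() for _ in range(20)]  # a small generated test suite
print(sum(g.number_of_edges() for g in suite), "edges across the suite")
```

Each generated model would then be fed through the transformation, and the outputs checked against the properties the transformation is supposed to preserve.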
Validating neural networks for spectroscopic classification on a universal synthetic dataset
14
Authors: Jan Schuetzke, Nathan J. Szymanski, Markus Reischl. npj Computational Materials (SCIE, EI, CSCD), 2023, Issue 1, pp. 1325-1336 (12 pages)
To aid the development of machine learning models for automated spectroscopic data classification, we created a universal synthetic dataset for the validation of their performance. The dataset mimics the characteristic appearance of experimental measurements from techniques such as X-ray diffraction, nuclear magnetic resonance, and Raman spectroscopy, among others. We applied eight neural network architectures to classify the artificial spectra, evaluating their ability to handle common experimental artifacts. While all models achieved over 98% accuracy on the synthetic dataset, misclassifications occurred when spectra had overlapping peaks or intensities. We found that non-linear activation functions, specifically ReLU in the fully-connected layers, were crucial for distinguishing between these classes, while adding more sophisticated components, such as residual blocks or normalization layers, provided no performance benefit. Based on these findings, we summarize key design principles for neural networks in spectroscopic data classification and publicly share all scripts used in this study.
Keywords: spectroscopic, adding, classify
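A minimal sketch of how such synthetic spectra can be generated; the peak positions, widths, and noise level are illustrative, not the paper's exact parameters:

```python
import numpy as np

def synthetic_spectrum(peak_positions, peak_heights, width=2.0, noise=0.01, n=1000):
    """One synthetic 1D spectrum: Gaussian peaks plus white noise, mimicking
    the appearance of XRD / NMR / Raman measurements."""
    x = np.linspace(0, 100, n)
    y = sum(h * np.exp(-0.5 * ((x - p) / width) ** 2)
            for p, h in zip(peak_positions, peak_heights))
    return y + np.random.normal(0, noise, n)

# Two classes that differ only by a small peak shift: the overlap case the
# paper identifies as the main source of misclassification.
class_a = synthetic_spectrum([20, 45, 70], [1.0, 0.6, 0.3])
class_b = synthetic_spectrum([21, 45, 70], [1.0, 0.6, 0.3])
```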