Funding: Supported by Research Foundation Flanders (FWO) (1S04719N, 12X6819N); partially supported by a grant of the Ministry of Research, Innovation and Digitization, CNCS-UEFISCDI, project number PN-III-P1-1.1-PD-2021-0204, within PNCDI III.
Abstract: This paper presents an original theoretical framework to model steel material properties in a continuous casting line process. Specific properties arising from non-Newtonian dynamics are herein used to indicate the natural convergence of distributed parameter systems to fractional-order transfer function models. Data-driven identification from a real continuous casting line is used to identify a model of the electromagnetic actuator that controls the flow velocity of the liquid steel. To ensure product specifications, a fractional-order controller is designed and validated on the system. A projection of the closed-loop performance onto the quality assessment at the end of the production line is also given.
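As a small illustration of the model class mentioned above, the sketch below evaluates the frequency response of a hypothetical fractional-order transfer function G(s) = K/(tau*s^alpha + 1); the gain, time constant, and order are assumed values, not parameters identified from the casting line.

```python
# Hedged sketch: frequency response of an illustrative fractional-order
# transfer function G(s) = K / (tau * s**alpha + 1). Parameter values are
# hypothetical and not taken from the paper.
import numpy as np

def fractional_fo_response(omega, K=1.0, tau=5.0, alpha=0.7):
    """Evaluate G(j*omega) = K / (tau * (j*omega)**alpha + 1) element-wise."""
    s = 1j * omega
    return K / (tau * s**alpha + 1.0)

omega = np.logspace(-3, 2, 500)          # rad/s
G = fractional_fo_response(omega)
magnitude_db = 20 * np.log10(np.abs(G))  # Bode magnitude
phase_deg = np.degrees(np.angle(G))      # phase tends towards -alpha*90 degrees
print(magnitude_db[-1], phase_deg[-1])
```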
Abstract: Blindness, considered a degrading and disabling condition, is the final stage reached when a certain threshold of visual acuity is crossed. It results from vision deficiencies, pathologic states caused by many ocular diseases. Among them, diabetic retinopathy is nowadays a chronic disease that affects most diabetic patients. Early detection through automatic screening programs considerably reduces the progression of the disease. Exudates are one of its earliest signs. This paper presents an automated method for exudate detection in digital retinal fundus images. The first step consists of image enhancement, based on histogram expansion and median filtering. The difference between the filtered image and its inverse reduces noise and removes the background while preserving features and patterns related to the exudates. The second step is blood vessel removal using morphological operators. In the last step, we compute the result image with an algorithm based on Entropy Maximization Thresholding to obtain two segmented regions (optic disc and exudates), which were highlighted in the second step. Finally, according to size criteria, we eliminate the other regions to obtain the regions of interest related to exudates. Evaluations were performed on the DIARETDB1 retinal fundus image database. DIARETDB1 gathers high-quality medical images that have been verified by experts. It consists of 89 colour fundus images, of which 84 contain at least mild non-proliferative signs of diabetic retinopathy. This database provides a unified framework for benchmarking methods, but also points out clear deficiencies in current practice in method development. Compared with other recent methods available in the literature, the proposed algorithm achieves better results in terms of sensitivity (94.27%) and specificity (97.63%).
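A minimal sketch of the entropy-maximization thresholding step, assuming the classic Kapur formulation; this is one plausible reading of the method, not the authors' exact implementation.

```python
# Hedged sketch of entropy-maximization (Kapur) thresholding on a grayscale image.
import numpy as np

def kapur_threshold(gray):
    """Return the gray level (0-255) that maximizes the sum of the entropies
    of the two classes split by the threshold; `gray` is a 2D uint8 array."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1          # class-conditional distributions
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

image = (np.random.rand(128, 128) * 255).astype(np.uint8)   # stand-in fundus channel
print(kapur_threshold(image))
```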
Funding: Supported by the KIT Publication Fund of the Karlsruhe Institute of Technology, Germany; funded by the German Research Foundation (DFG), Germany, as part of the Research Training Group 2153: "Energy Status Data - Informatics Methods for its Collection, Analysis, and Exploitation"; supported by the Helmholtz Association in the Program Energy System Design.
Abstract: Integrating renewable energy sources into the electricity grid introduces volatility and complexity, requiring advanced energy management systems. By optimizing the charging and discharging behavior of a building's battery system, reinforcement learning effectively provides flexibility, managing volatile energy demand, dynamic pricing, and photovoltaic output to maximize rewards. However, the effectiveness of reinforcement learning is often hindered by limited access to training data due to privacy concerns, unstable training processes, and challenges in generalizing to different household conditions. In this study, we propose a novel federated framework for reinforcement learning in energy management systems. By enabling local model training on private data and aggregating only model parameters on a global server, this approach not only preserves privacy but also improves model generalization and robustness under varying household conditions, while decreasing electricity costs and emissions per building. For a comprehensive benchmark, we compare standard reinforcement learning with our federated approach and include mixed integer programming and rule-based systems. Among the reinforcement learning methods, deep deterministic policy gradient performed best on the Ausgrid dataset, with federated learning reducing costs by 5.01% and emissions by 4.60%. Federated learning also improved zero-shot performance for unseen buildings, reducing costs by 5.11% and emissions by 5.55%. Thus, our findings highlight the potential of federated reinforcement learning to enhance energy management systems by balancing privacy, sustainability, and efficiency.
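A minimal sketch of the aggregation step described above, assuming plain FedAvg-style weighted parameter averaging on the global server; the client parameter sets and sample counts below are illustrative only.

```python
# Hedged sketch: the server averages only model parameters, never raw household data.
import numpy as np

def federated_average(client_params, client_sizes):
    """Weighted average of per-client parameter dicts {name: ndarray}."""
    total = float(sum(client_sizes))
    keys = client_params[0].keys()
    return {
        k: sum(p[k] * (n / total) for p, n in zip(client_params, client_sizes))
        for k in keys
    }

# Toy "actor" weights from two buildings with different amounts of local data.
client_a = {"w": np.array([0.2, 0.4]), "b": np.array([0.1])}
client_b = {"w": np.array([0.6, 0.0]), "b": np.array([0.3])}
global_params = federated_average([client_a, client_b], client_sizes=[800, 200])
print(global_params["w"], global_params["b"])
```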
Funding: Supported by the project "Romanian Hub for Artificial Intelligence - HRIA", Smart Growth, Digitization and Financial Instruments Program, 2021-2027, MySMIS No. 334906.
Abstract: Objective expertise evaluation of individuals, as a prerequisite stage for team formation, has been a long-term desideratum in large software development companies. With the rapid advancements in machine learning methods, and based on reliable existing data stored in the datasets of project management tools, automating this evaluation process becomes a natural step forward. In this context, our approach focuses on quantifying software developer expertise by using metadata from task-tracking systems. For this, we mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge of the software industry. Afterward, we automatically classify the zones of expertise associated with each task a developer has worked on, using Bidirectional Encoder Representations from Transformers (BERT)-like transformers to handle the unique characteristics of project tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results.
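The paper fine-tunes BERT-like models on project-tool data; as a lightweight stand-in, the sketch below uses an off-the-shelf zero-shot classifier from the Hugging Face transformers library to assign a task description to hypothetical expertise zones. The model name and labels are assumptions for illustration, not the authors' setup.

```python
# Hedged stand-in for the expertise-zone classification step: a zero-shot
# classifier instead of the paper's fine-tuned BERT-like model.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

task_description = "Fix the memory leak in the Kafka consumer and add integration tests."
zones = ["backend", "frontend", "data engineering", "devops", "testing"]  # hypothetical zones

result = classifier(task_description, candidate_labels=zones, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```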
Abstract: Coronary artery disease (CAD) has become a significant cause of heart attack, especially among those 40 years old or younger. There is a need to develop new technologies and methods to deal with this disease. Many researchers have proposed image processing-based solutions for CAD diagnosis, but achieving highly accurate results for angiogram segmentation is still a challenge. Several different types of angiograms are adopted for CAD diagnosis. This paper proposes an approach for image segmentation using Convolutional Neural Networks (CNN) for diagnosing coronary artery disease, aiming at state-of-the-art results. We collected 2D X-ray images from the hospital and applied the proposed model to them. Image augmentation was performed in this research, as it is an essential step for increasing the size of the dataset. The images were also enhanced using noise removal techniques before being fed to the CNN model for segmentation, in order to achieve high accuracy. Different settings of the network architecture achieved different accuracies, the highest being 97.61%. Compared with other models, the proposed method achieves superior, state-of-the-art results.
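A minimal sketch of the preprocessing steps mentioned above (augmentation and noise removal), assuming simple geometric transforms and a median filter; it is a generic illustration, not the authors' pipeline.

```python
# Hedged sketch: flip/rotation augmentation and median-filter denoising
# for 2D grayscale angiogram arrays before segmentation.
import numpy as np
from scipy.ndimage import median_filter

def augment(image):
    """Return a few simple geometric variants of a 2D image array."""
    return [
        image,
        np.fliplr(image),        # horizontal flip
        np.flipud(image),        # vertical flip
        np.rot90(image, k=1),    # 90-degree rotation
    ]

def denoise(image, size=3):
    """Median filtering to suppress salt-and-pepper style noise."""
    return median_filter(image, size=size)

angiogram = np.random.rand(256, 256)     # stand-in for a loaded X-ray angiogram
prepared = [denoise(v) for v in augment(angiogram)]
print(len(prepared), prepared[0].shape)
```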
Abstract: Developing the control of modern power converters is a very expensive and time-consuming task, and time to market can become unacceptably long. FPGA-based real-time simulation of a power stage with analog measured signals can significantly reduce the cost and time of testing a product. This new approach is known as HIL (hardware-in-the-loop) testing. A general power converter consists of two main parts: a power stage (main circuit) and a digital controller unit, which is usually realized with some kind of DSP. Testing the controller HW and SW is quite problematic: live tests with a completely assembled converter can be dangerous and expensive. A low-power model of the main circuit can be built under laboratory conditions, but its parameters (e.g. time constants and relative losses) will differ from those of the original system. The solution is HIL simulation of the main circuit. With this method the simulator can be completely transparent to the controller unit, unlike other computer-based simulation methods. The subject of this paper is the development of such a real-time simulator using an FPGA. The modeled circuit is a three-phase inverter, which is widely used in power converters for renewable energy sources.
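A minimal sketch of the kind of model such a simulator evaluates at every time step: a two-level three-phase inverter feeding an R-L load, integrated with forward Euler. All parameter values are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: fixed-step simulation of a three-phase inverter with an R-L load,
# the kind of per-step update an FPGA-based HIL simulator would compute.
import numpy as np

R, L, Vdc, dt = 1.0, 5e-3, 400.0, 1e-6   # ohm, henry, volt, second (assumed values)

def phase_voltages(switch_states):
    """Phase-to-neutral voltages of an ideal two-level inverter, s_a, s_b, s_c in {0, 1}."""
    s = np.asarray(switch_states, dtype=float)
    return Vdc * (s - s.mean())           # remove common mode for a floating neutral

def step(i_abc, switch_states):
    """One forward-Euler integration step of di/dt = (v - R*i) / L."""
    v_abc = phase_voltages(switch_states)
    return i_abc + dt * (v_abc - R * i_abc) / L

i = np.zeros(3)
for _ in range(1000):                     # crude open-loop test with a fixed switching pattern
    i = step(i, (1, 0, 0))
print(i)
```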
Funding: Funded by the Helmholtz Association's Initiative and Networking Fund through Helmholtz AI, by the Helmholtz Association under the Program "Energy System Design", and by the German Research Foundation (DFG) as part of the Research Training Group 2153 "Energy Status Data: Informatics Methods for its Collection, Analysis and Exploitation"; supported by the Helmholtz Association Initiative and Networking Fund on the HAICORE@KIT partition; supported by the KIT Publication Fund of the Karlsruhe Institute of Technology.
Abstract: Time series foundation models provide a universal solution for generating forecasts to support optimization problems in energy systems. These foundation models are typically trained in a prediction-focused manner to maximize forecast quality. In contrast, decision-focused learning directly improves the resulting value of the forecast in downstream optimization rather than merely maximizing forecasting quality. The practical integration of forecast value into forecasting models is challenging, particularly when addressing complex applications with diverse instances, such as buildings. This becomes even more complicated when instances possess specific characteristics that require instance-specific, tailored predictions to increase the forecast value. To tackle this challenge, we use decision-focused fine-tuning within time series foundation models to offer a scalable and efficient solution for decision-focused learning applied to the dispatchable feeder optimization problem. To obtain more robust predictions for scarce building data, we use Moirai as a state-of-the-art foundation model, which offers robust and generalized results with few-shot, parameter-efficient fine-tuning. Comparing the decision-focused fine-tuned Moirai with a state-of-the-art, prediction-focused fine-tuned Moirai, we observe an improvement of 9.45% in Average Daily Total Costs.
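The paper's decision-focused fine-tuning couples Moirai forecasts to the dispatchable feeder optimization; as a much-simplified illustration of the general idea, the sketch below trains a toy forecaster directly on a differentiable downstream cost (a hypothetical purchase-plus-imbalance penalty) instead of a pure prediction loss. All names and numbers are assumptions, not the paper's setup.

```python
# Hedged sketch of a decision-focused training objective: the loss is the cost
# incurred when a simple, differentiable dispatch rule acts on the forecast.
import torch

def downstream_cost(forecast, actual, price=0.3, penalty=1.0):
    """Buy energy equal to the forecast; pay a penalty on the imbalance with actual load."""
    purchase_cost = price * forecast
    imbalance_cost = penalty * (actual - forecast).abs()
    return (purchase_cost + imbalance_cost).mean()

model = torch.nn.Linear(24, 24)                 # stand-in for a fine-tuned forecaster
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
history = torch.randn(64, 24)                   # dummy input features
actual = torch.rand(64, 24)                     # dummy next-day load
for _ in range(100):
    opt.zero_grad()
    loss = downstream_cost(model(history), actual)
    loss.backward()
    opt.step()
print(float(loss))
```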
Abstract: Electricity prices in liberalized markets are determined by the supply and demand for electric power, which are in turn driven by various external influences that vary strongly in time. In perfect competition, the merit order principle describes how dispatchable power plants enter the market in the order of their marginal costs to meet the residual load, i.e. the difference between load and renewable generation. Various market models are based on this principle when attempting to predict electricity prices, yet the principle is fraught with assumptions and simplifications and is thus limited in accurately predicting prices. In this article, we present an explainable machine learning model for electricity prices on the German day-ahead market which forgoes the aforementioned assumptions of the merit order principle. Our model is designed for an ex-post analysis of prices and builds on various external features. Using SHapley Additive exPlanation (SHAP) values, we disentangle the roles of the different features and quantify their importance from empirical data, thereby circumventing the limitations inherent to the merit order principle. We show that load, wind and solar generation are the central external features driving prices, as expected, with wind generation affecting prices more than solar generation. Similarly, fuel prices also strongly affect prices, and do so in a nontrivial manner. Moreover, large generation ramps are correlated with high prices due to the limited flexibility of nuclear and lignite plants. Overall, we offer a model that describes the influence of the main drivers of electricity prices in Germany, taking us a step beyond the limited merit order principle in explaining the drivers of electricity prices and their relation to each other.
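A minimal sketch of the explanation workflow described above, using a gradient-boosted regressor on synthetic stand-in features; the feature names and data are illustrative, not the German day-ahead dataset or the paper's model.

```python
# Hedged sketch: fit a tree-based price model on tabular drivers and attribute
# its predictions with SHAP values.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # columns: load, wind, solar, gas_price (hypothetical)
y = 40 + 15*X[:, 0] - 8*X[:, 1] - 4*X[:, 2] + 6*X[:, 3] + rng.normal(size=500)

model = GradientBoostingRegressor().fit(X, y)
explainer = shap.TreeExplainer(model)          # explainer for tree ensembles
shap_values = explainer.shap_values(X)         # per-sample, per-feature contributions
print(np.abs(shap_values).mean(axis=0))        # mean |SHAP| as a feature-importance summary
```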
Abstract: The rapid transformation of the electricity sector increases both the opportunities and the need for Data Analytics. In recent years, various new methods and fields of application have been emerging. As research is growing and becoming more diverse and specialized, it is essential to integrate and structure the fragmented body of scientific work. We therefore conduct a systematic review of studies concerned with developing and applying Data Analytics methods in the context of the electricity value chain. First, we provide a quantitative high-level overview of the status quo of Data Analytics research, showing historical literature growth, the leading countries in the field, and the most intensive international collaborations. Then, we qualitatively review over 200 high-impact studies to present an in-depth analysis of the most prominent applications of Data Analytics in each of the electricity sector's areas: generation, trading, transmission, distribution, and consumption. For each area, we review the state-of-the-art Data Analytics applications and methods. In addition, we discuss the data sets used, feature selection methods, benchmark methods, evaluation metrics, and model complexity and run time. Summarizing the findings from the different areas, we identify best practices and what researchers in one area can learn from other areas. Finally, we highlight potential for future research.
Funding: Financed by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under contract no. DE-AC02-05-CH11231 (D2S2 programme, KCD2S2); funded by the Ministry of Science, Research and the Arts Baden-Württemberg and by the Federal Ministry of Education and Research.
Abstract: Electron microscopy is indispensable for examining the morphology and composition of solid materials at the sub-micron scale. To study the powder samples that are widely used in materials development, scanning electron microscopes (SEMs) are increasingly used at the laboratory scale to generate large datasets with hundreds of images. Parsing these images to identify distinct particles and determine their morphology requires careful analysis, and automating this process remains challenging. In this work, we enhance the Mask R-CNN architecture to develop a method for automated segmentation of particles in SEM images. We address several challenges inherent to such measurements, such as image blur and particle agglomeration. Moreover, our method accounts for prediction uncertainty when such issues prevent accurate segmentation of a particle. Recognizing that disparate length scales are often present in large datasets, we use this framework to create two models that are separately trained to handle images obtained at low or high magnification. Testing these models on a variety of inorganic samples, our approach to particle segmentation surpasses an established automated segmentation method and yields results comparable to the predictions of three domain experts while requiring a fraction of the time. These findings highlight the potential of deep learning in advancing autonomous workflows for materials characterization.
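As a starting point resembling the approach described above, the sketch below runs an off-the-shelf torchvision Mask R-CNN on a placeholder image tensor; the paper's architectural enhancements and uncertainty handling are not reproduced here, and the confidence threshold is an assumed value.

```python
# Hedged sketch: instance segmentation with a stock torchvision Mask R-CNN.
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 512, 512)               # stand-in for a normalized SEM image tensor
with torch.no_grad():
    prediction = model([image])[0]            # dict with 'boxes', 'labels', 'scores', 'masks'

keep = prediction["scores"] > 0.5             # simple confidence filter (assumed threshold)
masks = prediction["masks"][keep] > 0.5       # binary instance masks, shape (N, 1, H, W)
print(masks.shape)
```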
Funding: Supported by the Bundesministerium für Bildung und Forschung (BMBF) projects PHOIBOS (Grant 13N1257) and SPIDER (Grant 01DR18014A); by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy via the Excellence Cluster 3D Matter Made to Order (EXC-2082/1-390761711); by the Helmholtz International Research School for Teratronics (HIRST); by the European Research Council (ERC Consolidator Grant 'TeraSHAPE', #773248); by the H2020 Photonic Packaging Pilot Line PIXAPP (#731954); by the EU-FP7 project BigPipes; by the Alfried Krupp von Bohlen und Halbach Foundation; by the Karlsruhe Nano-Micro Facility (KNMF); and by the Deutsche Forschungsgemeinschaft (DFG) through CRC #1173 ('WavePhenomena').
Abstract: Three-dimensional (3D) nano-printing of freeform optical waveguides, also referred to as photonic wire bonding, allows for efficient coupling between photonic chips and can greatly simplify optical system assembly. As a key advantage, the shape and trajectory of photonic wire bonds can be adapted to the mode-field profiles and positions of the chips, thereby offering an attractive alternative to conventional optical assembly techniques that rely on technically complex and costly high-precision alignment. However, while the fundamental advantages of the photonic wire bonding concept have been shown in proof-of-concept experiments, it has so far been unclear whether the technique can also be leveraged for practically relevant use cases with stringent reproducibility and reliability requirements. In this paper, we demonstrate optical communication engines that rely on photonic wire bonding for connecting arrays of silicon photonic modulators to InP lasers and single-mode fibres. In a first experiment, we show an eight-channel transmitter offering an aggregate line rate of 448 Gbit/s using low-complexity intensity modulation. A second experiment is dedicated to a four-channel coherent transmitter operating at a net data rate of 732.7 Gbit/s, a record for coherent silicon photonic transmitters with co-packaged lasers. Using dedicated test chips, we further demonstrate automated mass production of photonic wire bonds with insertion losses of (0.7 ± 0.15) dB, and we show their resilience in environmental-stability tests and at high optical power. These results might form the basis for simplified assembly of advanced photonic multi-chip systems that combine the distinct advantages of different integration platforms.
Funding: The Alfried Krupp von Bohlen und Halbach Foundation; the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy via the Excellence Cluster 3D Matter Made to Order (EXC-2082/1-390761711); the European Research Council (ERC Consolidator Grant TeraSHAPE, #773248); and the Karlsruhe School of Optics and Photonics (KSOP).
Abstract: Early and efficient disease diagnosis with low-cost point-of-care devices is gaining importance for personalized medicine and public health protection. Within this context, waveguide (WG)-based optical biosensors on the silicon nitride (Si3N4) platform represent a particularly promising option, offering highly sensitive detection of indicative biomarkers in multiplexed sensor arrays operated by light in the visible-wavelength range. However, while passive Si3N4-based photonic circuits lend themselves to highly scalable mass production, the integration of low-cost light sources remains a challenge. In this paper, we demonstrate optical biosensors that combine Si3N4 sensor circuits with hybrid on-chip organic lasers. These Si3N4-organic hybrid (SiNOH) lasers rely on a dye-doped cladding material that is deposited on top of a passive WG and optically pumped by an external light source. Fabrication of the devices is simple: the underlying Si3N4 WGs are structured in a single lithography step, and the organic gain medium is subsequently applied by dispensing, spin-coating, or ink-jet printing processes. A highly parallel read-out of the optical sensor signals is accomplished with a simple camera. In our proof-of-concept experiment, we demonstrate the viability of the approach by detecting different concentrations of fibrinogen in phosphate-buffered saline solutions, with a sensor-length-related sensitivity of S/L = 0.16 rad nM^-1 mm^-1. To our knowledge, this is the first demonstration of an integrated optical circuit driven by a co-integrated low-cost organic light source. We expect that the versatility of the device concept, the simple operation principle, and the compatibility with cost-efficient mass production will make the concept a highly attractive option for applications in biophotonics and point-of-care diagnostics.
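As a quick plausibility check of what that figure implies, the snippet below converts the reported length-related sensitivity into an expected phase shift for an assumed sensor length and analyte concentration; both values are hypothetical and chosen only for illustration.

```python
# Expected sensor phase shift from the reported sensitivity S/L = 0.16 rad nM^-1 mm^-1.
# Sensor length and fibrinogen concentration below are assumed example values.
sensitivity_per_length = 0.16   # rad per nM per mm (from the abstract)
sensor_length_mm = 5.0          # hypothetical waveguide sensor length
concentration_nM = 10.0         # hypothetical fibrinogen concentration

phase_shift_rad = sensitivity_per_length * sensor_length_mm * concentration_nM
print(f"Expected phase shift: {phase_shift_rad:.1f} rad")   # 8.0 rad for these values
```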
Funding: Project partially supported by the European Union and the European Social Fund (No. TAMOP-4.2.2.C-11/1/KONV-2012-0013).
Abstract: Why is it important to verify/validate model transformations? The motivation is to improve the quality of the transformations, and therefore the quality of the generated software artifacts. Verified/validated model transformations make it possible to ensure certain properties of the generated software artifacts. In this way, verification/validation methods can guarantee different requirements stated by the actual domain against the generated/modified/optimized software products. For example, a verified/validated model transformation can ensure the preservation of certain properties during the model-to-model transformation. This paper emphasizes the necessity of methods that make model transformations verified/validated, discusses the different scenarios of model transformation verification and validation, and introduces the principles of a novel test-driven method for verifying/validating model transformations. We provide a solution that makes it possible to automatically generate test input models for model transformations. Furthermore, we collect and discuss the current open issues in the field of verification/validation of model transformations.
Funding: N.J.S. was supported in part by the National Science Foundation Graduate Research Fellowship under grant #1752814. We also thank Gerbrand Ceder for the helpful discussion and invitation to UC Berkeley.
Abstract: To aid the development of machine learning models for automated spectroscopic data classification, we created a universal synthetic dataset for validating their performance. The dataset mimics the characteristic appearance of experimental measurements from techniques such as X-ray diffraction, nuclear magnetic resonance, and Raman spectroscopy, among others. We applied eight neural network architectures to classify the artificial spectra, evaluating their ability to handle common experimental artifacts. While all models achieved over 98% accuracy on the synthetic dataset, misclassifications occurred when spectra had overlapping peaks or intensities. We found that non-linear activation functions, specifically ReLU in the fully-connected layers, were crucial for distinguishing between these classes, while adding more sophisticated components, such as residual blocks or normalization layers, provided no performance benefit. Based on these findings, we summarize key design principles for neural networks in spectroscopic data classification and publicly share all scripts used in this study.
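A minimal sketch of how such artificial spectra can be generated, assuming Gaussian peaks on a sloping baseline with additive noise; the peak statistics are arbitrary and not those of the published dataset.

```python
# Hedged sketch: a toy synthetic "spectrum" built from random Gaussian peaks,
# a baseline, and measurement noise, mimicking the kind of data described above.
import numpy as np

def synthetic_spectrum(n_points=1024, n_peaks=5, noise=0.02, rng=None):
    rng = rng or np.random.default_rng()
    x = np.linspace(0.0, 1.0, n_points)
    y = 0.1 * x * rng.uniform(-1, 1)                       # sloping baseline artifact
    for _ in range(n_peaks):
        center = rng.uniform(0.05, 0.95)
        width = rng.uniform(0.002, 0.02)
        height = rng.uniform(0.2, 1.0)
        y += height * np.exp(-0.5 * ((x - center) / width) ** 2)
    return x, y + rng.normal(scale=noise, size=n_points)   # additive measurement noise

x, y = synthetic_spectrum()
print(x.shape, y[:5].round(3))
```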