Introduction Deep learning(DL),as one of the most transformative technologies in artificial intelligence(AI),is undergoing a pivotal transition from laboratory research to industrial deployment.Advancing at an unprece...Introduction Deep learning(DL),as one of the most transformative technologies in artificial intelligence(AI),is undergoing a pivotal transition from laboratory research to industrial deployment.Advancing at an unprecedented pace,DL is transcending theoretical and application boundaries to penetrate emerging realworld scenarios such as industrial automation,urban management,and health monitoring,thereby driving a new wave of intelligent transformation.In August 2023,Goldman Sachs estimated that global AI investment will reach US$200 billion by 2025[1].However,the increasing complexity and dynamic nature of application scenarios expose critical challenges in traditional deep learning,including data heterogeneity,insufficient model generalization,computational resource constraints,and privacy-security trade-offs.The next generation of deep learning methodologies needs to achieve breakthroughs in multimodal fusion,lightweight design,interpretability enhancement,and cross-disciplinary collaborative optimization,in order to develop more efficient,robust,and practically valuable intelligent systems.展开更多
Fixtures are a critical element in machining operations as they are the interface between the part and the machine.These components are responsible for the precise part location on the machine table and for the proper...Fixtures are a critical element in machining operations as they are the interface between the part and the machine.These components are responsible for the precise part location on the machine table and for the proper dynamic stability maintenance during the manufacturing operations.Although these two features are deeply related,they are usually studied separately.On the one hand,diverse adaptable solutions have been developed for the clamping of different variable geometries.Parallelly,the stability of the part has been long studied to reduce the forced vibration and the chatter effects,especially on thin parts machining operations typically performed in the aeronautic field,such as the skin panels milling.The present work proposes a commitment between both features by the presentation of an innovative vacuum fixture based on the use of a vulcanized rubber layer.This solution presents high flexibility as it can be adapted to different geometries while providing a proper damping capacity due to the viscoelastic and elastoplastic behaviour of these compounds.Moreover,the sealing properties of these elastomers provide the perfect combination to transform a rubber layer into a flexible vacuum table.Therefore,in order to validate the suitability of this fixture,a test bench is manufactured and tested under uniaxial compression loads and under real finish milling conditions over AA2024 part samples.Finally,a roughness model is proposed and analysed in order to characterize the part vibration sources.展开更多
This paper provides stability analysis results for discretised time delay control(TDC)as implemented in a sampled data system with the standard form of zero-order hold.We first substantiate stability issues in discret...This paper provides stability analysis results for discretised time delay control(TDC)as implemented in a sampled data system with the standard form of zero-order hold.We first substantiate stability issues in discrete-time TDC using an example and propose sufficient stability criteria in the sense of Lyapunov.Important parameters significantly affecting the overall system stability are the sampling period,the desired trajectory and the selection of the reference model dynamics.展开更多
In a network environment composed of different types of computing centers that can be divided into different layers(clod,edge layer,and others),the interconnection between them offers the possibility of peer-to-peer t...In a network environment composed of different types of computing centers that can be divided into different layers(clod,edge layer,and others),the interconnection between them offers the possibility of peer-to-peer task offloading.For many resource-constrained devices,the computation of many types of tasks is not feasible because they cannot support such computations as they do not have enough available memory and processing capacity.In this scenario,it is worth considering transferring these tasks to resource-rich platforms,such as Edge Data Centers or remote cloud servers.For different reasons,it is more exciting and appropriate to download various tasks to specific download destinations depending on the properties and state of the environment and the nature of the functions.At the same time,establishing an optimal offloading policy,which ensures that all tasks are executed within the required latency and avoids excessive workload on specific computing centers is not easy.This study presents two alternatives to solve the offloading decision paradigm by introducing two well-known algorithms,Graph Neural Networks(GNN)and Deep Q-Network(DQN).It applies the alternatives on a well-known Edge Computing simulator called PureEdgeSimand compares them with the two defaultmethods,Trade-Off and Round Robin.Experiments showed that variants offer a slight improvement in task success rate and workload distribution.In terms of energy efficiency,they provided similar results.Finally,the success rates of different computing centers are tested,and the lack of capacity of remote cloud servers to respond to applications in real-time is demonstrated.These novel ways of finding a download strategy in a local networking environment are unique as they emulate the state and structure of the environment innovatively,considering the quality of its connections and constant updates.The download score defined in this research is a crucial feature for determining the quality of a download path in the GNN training process and has not previously been proposed.Simultaneously,the suitability of Reinforcement Learning(RL)techniques is demonstrated due to the dynamism of the network environment,considering all the key factors that affect the decision to offload a given task,including the actual state of all devices.展开更多
The progressive automation of transport will imply a new paradigm in mobility, which will profoundly affect people, logistics of goods, as well as other sectors dependent on transport. It is precise within this automa...The progressive automation of transport will imply a new paradigm in mobility, which will profoundly affect people, logistics of goods, as well as other sectors dependent on transport. It is precise within this automation where the development of new driving technologies is going to cause a great impact on the mobility of the near future, and that will have an effect on the economic, natural and social environment. It is therefore a primary issue at the global level, as it is reflected in the work programs of the European Commission in relation to the road transport [1] [2]. Thus, the size impact is caused by the following novelties and advantages: 1) Safety: Accidents reduction caused by human error;2) Efficiency increase in transportation, both in energy consumption and time;3) Comfort for users and professionals who will increase their operational availability to execute other more valuable tasks, both for them and enterprises;4) Social Inclusion: enabling mobility easily for everybody during more time;5) Accessibility, to get to city centers and other difficult reach places. It should be noted that the economic impact projected for automated driving for the years to come ranges up to €71 bn in 2030, when estimated global market for automated vehicles is 44 million vehicles, as is reflected in document Automated Driving Roadmap by ERTRAC [3], European Road Transport Research Advisory Council (http://www.ertrac.org/uploads/documentsearch/id38/ERTRAC_Automated-Driving-2015.pdf). As background that already anticipates these im-provements, the Advance Driver Assistance System (ADAs) have already showed the safety increase in the last ten years, but always maintain a leading role for the driver. Related to the efficiency increase, automated driving offers great opportunities for those companies where mobility is a key factor in operating costs, and affects the whole value chain. The project opportunity is consistent with ERTRAC vision, especially in applications focused on the urban environment [4], where it is expected a deployment of the technology of high level automation in an immediate future. This is possible by the potential to incorporate smart infrastructure to improve guidance and positioning, as well as lower speed, which eases its progressive deployment. The objective of AutoMOST is developing technologies for the automation of vehicles in urban transport and industrial applications, to increase significantly the efficiency, safety and environmental sustainability. More specifically, AutoMOST will allow the implementation of shared control systems (Dual-Mode) [5] for future automated vehicles that allow the services operate more efficiently and flexibly, in a context of intelligent and connected infrastructures.展开更多
Nowadays it is known that the thermomechanical schedules applied during hot rolling of flat products provide the steel with improved mechanical properties.In this work an optimisation tool,OptiLam (OptiLam v.1),based ...Nowadays it is known that the thermomechanical schedules applied during hot rolling of flat products provide the steel with improved mechanical properties.In this work an optimisation tool,OptiLam (OptiLam v.1),based on a predictive software and capable of generating optimised rolling schedules to obtain the desired mechanical properties in the final product is described.OptiLam includes some well-known metallurgical models which predict microstructural evolution during hot rolling and the transformation austenite/ferrite during the cooling.Furthermore,an optimisation algorithm,which is based on the gradient method,has been added,in order to design thermomechanical sequences when a specific final grain size is desired.OptiLam has been used to optimise rolling parameters,such as strain and temperature.Here,some of the results of the software validation performed by means of hot torsion tests are presented,showing also the functionality of the tool.Finally,the application of classical optimisation models,based on the gradient method,to hot rolling operations,is also discussed.展开更多
In the framework of the Santiago of Compostela Cathedral program, a multidisciplinary investigation of the porch of the glory was carried out between 2009 and 2011 to identify the main environmental risks and to devel...In the framework of the Santiago of Compostela Cathedral program, a multidisciplinary investigation of the porch of the glory was carried out between 2009 and 2011 to identify the main environmental risks and to develop a preventive conservation planto be integrated in the general management strategy of the Cathedral. The study included historic and archivist research, structural studies, mineralogical analyses, biological sampling, cleaning tests and microclimatic monitoring. The main weathering factors and the related damage processes were identified. Results have shown that the main responsible for the observed damage was the infiltration of rainwater through the roof, due to cracks in the structure of the Cathedral. Other environmental factors having a remarkable impact on the state of conservation of the polychrome and its substrate were the solar radiation, the thermo-hygrometric cycles, the particle deposition and the biological growth. Solutions were suggested to improve the environmental conditions, thus reducing further damage.展开更多
When data privacy is imposed as a necessity,Federated learning(FL)emerges as a relevant artificial intelligence field for developing machine learning(ML)models in a distributed and decentralized environment.FL allows ...When data privacy is imposed as a necessity,Federated learning(FL)emerges as a relevant artificial intelligence field for developing machine learning(ML)models in a distributed and decentralized environment.FL allows ML models to be trained on local devices without any need for centralized data transfer,thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties.This paradigm has gained momentum in the last few years,spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources.By virtue of FL,models can be learned from all such distributed data sources while preserving data privacy.The aim of this paper is to provide a practical tutorial on FL,including a short methodology and a systematic analysis of existing software frameworks.Furthermore,our tutorial provides exemplary cases of study from three complementary perspectives:i)Foundations of FL,describing the main components of FL,from key elements to FL categories;ii)Implementation guidelines and exemplary cases of study,by systematically examining the functionalities provided by existing software frameworks for FL deployment,devising a methodology to design a FL scenario,and providing exemplary cases of study with source code for different ML approaches;and iii)Trends,shortly reviewing a non-exhaustive list of research directions that are under active investigation in the current FL landscape.The ultimate purpose of this work is to establish itself as a referential work for researchers,developers,and data scientists willing to explore the capabilities of FL in practical applications.展开更多
Privacy-Preserving Computation(PPC)comprises the techniques,schemes and protocols which ensure privacy and confidentiality in the context of secure computation and data analysis.Most of the current PPC techniques rely...Privacy-Preserving Computation(PPC)comprises the techniques,schemes and protocols which ensure privacy and confidentiality in the context of secure computation and data analysis.Most of the current PPC techniques rely on the complexity of cryptographic operations,which are expected to be efficiently solved by quantum computers soon.This review explores how PPC can be built on top of quantum computing itself to alleviate these future threats.We analyze quantum proposals for Secure Multi-party Computation,Oblivious Transfer and Homomorphic Encryption from the last decade focusing on their maturity and the challenges they currently face.Our findings show a strong focus on purely theoretical works,but a rise on the experimental consideration of these techniques in the last 5 years.The applicability of these techniques to actual use cases is an underexplored aspect which could lead to the practical assessment of these techniques.展开更多
In this study,we introduced an innovative crop-conditional semantic segmentation architecture that seamlessly incorporates contextual metadata(crop information).This is achieved by merging the contextual information a...In this study,we introduced an innovative crop-conditional semantic segmentation architecture that seamlessly incorporates contextual metadata(crop information).This is achieved by merging the contextual information at a late layer stage,allowing the method to be integrated with any semantic segmentation architecture,including novel ones.To evaluate the effectiveness of this approach,we curated a challenging dataset of over 100,000 images captured in real-field conditions using mobile phones.This dataset includes various disease stages across 21 diseases and seven crops(wheat,barley,corn,rice,rape-seed,vinegrape,and cucumber),with the added complexity of multiple diseases coexisting in a single image.We demonstrate that incorporating contextual multi-crop information significantly enhances the performance of semantic segmentation models for plant disease detection.By leveraging crop-specific metadata,our approach achieves higher accuracy and better generalization across diverse crops(F1=0.68,r=0.75)compared to traditional methods(F1=0.24,r=0.68).Additionally,the adoption of a semi-supervised approach based on pseudo-labeling of single diseased plants,offers significant advantages for plant disease segmentation and quantification(F1=0.73,r=0.95).This method enhances the model's performance by leveraging both labeled and unlabeled data,reducing the dependency on extensive manual annotations,which are often time-consuming and costly.The deployment of this algorithm holds the potential to revolutionize the digitization of crop protection product testing,ensuring heightened repeatability while minimizing human subjectivity.By addressing the challenges of semantic segmentation and disease quantification,we contribute to more effective and precise phenotyping,ultimately supporting better crop management and protection strategies.展开更多
The use of image based and,recently,deep learning-based systems have provided good results in several applications.Greenhouse trials are key part in the process of developing and testing new herbicides and analyze the...The use of image based and,recently,deep learning-based systems have provided good results in several applications.Greenhouse trials are key part in the process of developing and testing new herbicides and analyze the response of the species to different products and doses in a controlled way.The assessment of the damage in the plant is daily done in all trials by visual evaluation by experts.This entails time consuming process and lack of repeatability.Greenhouse trials require new digital tools to reduce time consuming process and to endow the experts with more objective and repetitive methods for establishing the damage in the plants.To this end,a novel method is proposed composed by an initial segmentation of the plant species followed by a multibranch convolutional neural network to estimate the damage level.In this way,we overcome the need for costly and unaffordable pixelwise manual segmentation for damage symptoms and we make use of global damage estimation values provided by the experts.The algorithm has been deployed under real greenhouse trials conditions in a pilot study located in BASF in Germany and tested over four species(GLXMA,TRZAW,ECHCG,AMARE).The results show mean average error(MAE)values ranging from 5.20 for AMARE and 8.07 for ECHCG for the estimation of PDCU value,with correlation values(R^(2))higher than 0.85 in all situations,and up to 0.92 in AMARE.These results surpass the inter-rater variability of human experts demonstrating that the proposed automated method is appropriate for automatically assessing greenhouse damage trials.展开更多
Diamond like carbon(DLC)coatings typically present good self-lubricating tribological properties that could be of interest in sliding dielectric contacts in multiple electrical applications.In this work electro-tribol...Diamond like carbon(DLC)coatings typically present good self-lubricating tribological properties that could be of interest in sliding dielectric contacts in multiple electrical applications.In this work electro-tribological studies have been performed on several DLC coatings against aluminum in different humidity conditions,in which the coefficients of friction(CoFs)and electrical contact resistance(ECR)were continuously monitored.Results show that CoF and ECR data can be linked to the properties of the coatings(thickness,finishing,microstructure,residual stresses,and wettability)and the degradation modes of their tribological and electrical properties.Therefore,electro-tribological data can provide valuable information about the performance of dielectric coatings,the reasons behind it,and assist in the development of the coatings.ECR also shows potential for on-line monitoring of coated parts in operation.展开更多
Bone infections following open bone fracture or implant surgery remain a challenge in the orthopedics field.In order to avoid high doses of systemic drug administration,optimized local antibiotic release from scaffold...Bone infections following open bone fracture or implant surgery remain a challenge in the orthopedics field.In order to avoid high doses of systemic drug administration,optimized local antibiotic release from scaffolds is required.3D additive manufactured(AM)scaffolds made with biodegradable polymers are ideal to support bone healing in non-union scenarios and can be given antimicrobial properties by the incorporation of antibiotics.In this study,ciprofloxacin and gentamicin intercalated in the interlamellar spaces of magnesium aluminum layered double hydroxides(MgAl)andα-zirconium phosphates(ZrP),respectively,are dispersed within a thermoplastic polymer by melt compounding and subsequently processed via high temperature melt extrusion AM(~190◦C)into 3D scaffolds.The inorganic fillers enable a sustained antibiotics release through the polymer matrix,controlled by antibiotics counterions exchange or pH conditions.Importantly,both antibiotics retain their functionality after the manufacturing process at high temperatures,as verified by their activity against both Gram+and Gram-bacterial strains.Moreover,scaffolds loaded with filler-antibiotic do not impair human mesenchymal stromal cells osteogenic differentiation,allowing matrix mineralization and the expression of relevant osteogenic markers.Overall,these results suggest the possibility of fabricating dual functionality 3D scaffolds via high temperature melt extrusion for bone regeneration and infection prevention.展开更多
As the global population continues to grow,the enormous stress on our environment and resources is becoming impossible to ignore.A focus on producing and consuming as cheaply as possible has created an economy in whic...As the global population continues to grow,the enormous stress on our environment and resources is becoming impossible to ignore.A focus on producing and consuming as cheaply as possible has created an economy in which objects are briefly used and then discarded as waste,featuring a linear lifecycle that creates an enormous amount of waste.The alternative to the linear economy“take-make-waste”is called the“circular economy”.Under this paradigm,materials are recycled to build new products or components that are designed and built to promote their reuse and refurbishment.This assures the continuous(re-)exploitation of existing resources,reducing the extraction of new raw materials.However,customers often reject these reused or refurbished products under the suspicion that they do not meet the same usability,safety,or performance levels of new products.In this sense,trustworthy records of historical details of refurbished products could increase consumers’confidence in products and components of the“circular economy”,prioritizing trustworthiness,reliability,and transparency.This work presents a new certification tool based on blockchain technology to guarantee trusted,accurate,transparent,and traceable lifecycle information of products and their components and to generate trustworthy certificates to probe refurbished product historical details.This tool aims to enhance refurbished product visibility by creating the basis for making the circular economy a reality in any domain.展开更多
Performing accurate and automated semantic segmentation of vegetation is a first algorithmic step towards more complex models that can extract accurate biological information on crop health,weed presence and phenologi...Performing accurate and automated semantic segmentation of vegetation is a first algorithmic step towards more complex models that can extract accurate biological information on crop health,weed presence and phenological state,among others.Traditionally,models based on normalized difference vegetation index(NDVI),near infrared channel(NIR)or RGB have been a good indicator of vegetation presence.However,these methods are not suitable for accurately segmenting vegetation showing damage,which precludes their use for downstream phenotyping algorithms.In this paper,we propose a comprehensive method for robust vegetation segmentation in RGB images that can cope with damaged vegetation.The method consists of a first regression convolutional neural network to estimate a virtual NIR channel from an RGB image.Second,we compute two newly proposed vegetation indices from this estimated virtual NIR:the infrared-dark channel subtraction(IDCS)and infrared-dark channel ratio(IDCR)indices.Finally,both the RGB image and the estimated indices are fed into a semantic segmentation deep convolutional neural network to train a model to segment vegetation regardless of damage or condition.The model was tested on 84 plots containing thirteen vegetation species showing different degrees of damage and acquired over 28 days.The results show that the best segmentation is obtained when the input image is augmented with the proposed virtual NIR channel(F1=0:94)and with the proposed IDCR and IDCS vegetation indices(F1=0:95)derived from the estimated NIR channel,while the use of only the image or RGB indices lead to inferior performance(RGB(F1=0:90)NIR(F1=0:82)or NDVI(F1=0:89)channel).The proposed method provides an end-to-end land cover map segmentation method directly from simple RGB images and has been successfully validated in real field conditions.展开更多
Estimation of damage in plants is a key issue for crop protection.Currently,experts in the field manually assess the plots.This is a time-consuming task that can be automated thanks to the latest technology in compute...Estimation of damage in plants is a key issue for crop protection.Currently,experts in the field manually assess the plots.This is a time-consuming task that can be automated thanks to the latest technology in computer vision(CV).The use of image-based systems and recently deep learning-based systems have provided good results in several agricultural applications.These image-based applications outperform expert evaluation in controlled environments,and now they are being progressively included in non-controlled field applications.A novel solution based on deep learning techniques in combination with image processingmethods is proposed to tackle the estimate of plant damage in the field.The proposed solution is a two-stage algorithm.In a first stage,the single plants in the plots are detected by an object detection YOLO based model.Then a regression model is applied to estimate the damage of each individual plant.The solution has been developed and validated in oilseed rape plants to estimate the damage caused by flea beetle.The crop detection model achieves a mean precision average of 91%with a mAP@0.50 of 0.99 and a mAP@0.95 of 0.91 for oilseed rape specifically.The regression model to estimate up to 60%of damage degree in single plants achieves a MAE of 7.11,and R2 of 0.46 in comparison with manual evaluations done plant by plant by experts.Models are deployed in a docker,and with a REST API communication protocol they can be inferred directly for images acquired in the field from a mobile device.展开更多
基金supported in part by Guangdong Basic and Applied Basic Research Foundation under Grant 2024A1515012485in part by Shenzhen Fundamental Research Program under Grant JCYJ20220810112354002+4 种基金in part by Shenzhen Science and Technology Program under Grant KJZD20230923114111021in part by the Fund for Academic Innovation Teams and Research Platform of South-Central Minzu University under Grant XTZ24003 and Grant PTZ24001in part by the Knowledge Innovation Program of Wuhan-Basic Research through Project 2023010201010151in part by the Research Start-up Funds of South-Central Minzu University under Grant YZZ18006in part by the Spring Sunshine Program of Ministry of Education of the People’s Republic of China under Grant HZKY20220331.
文摘Introduction Deep learning(DL),as one of the most transformative technologies in artificial intelligence(AI),is undergoing a pivotal transition from laboratory research to industrial deployment.Advancing at an unprecedented pace,DL is transcending theoretical and application boundaries to penetrate emerging realworld scenarios such as industrial automation,urban management,and health monitoring,thereby driving a new wave of intelligent transformation.In August 2023,Goldman Sachs estimated that global AI investment will reach US$200 billion by 2025[1].However,the increasing complexity and dynamic nature of application scenarios expose critical challenges in traditional deep learning,including data heterogeneity,insufficient model generalization,computational resource constraints,and privacy-security trade-offs.The next generation of deep learning methodologies needs to achieve breakthroughs in multimodal fusion,lightweight design,interpretability enhancement,and cross-disciplinary collaborative optimization,in order to develop more efficient,robust,and practically valuable intelligent systems.
基金the Basque Government under the ELKARTEK Program(SMAR3NAK project,grant number KK-2019/00051)is gratefully acknowledged by the authors。
文摘Fixtures are a critical element in machining operations as they are the interface between the part and the machine.These components are responsible for the precise part location on the machine table and for the proper dynamic stability maintenance during the manufacturing operations.Although these two features are deeply related,they are usually studied separately.On the one hand,diverse adaptable solutions have been developed for the clamping of different variable geometries.Parallelly,the stability of the part has been long studied to reduce the forced vibration and the chatter effects,especially on thin parts machining operations typically performed in the aeronautic field,such as the skin panels milling.The present work proposes a commitment between both features by the presentation of an innovative vacuum fixture based on the use of a vulcanized rubber layer.This solution presents high flexibility as it can be adapted to different geometries while providing a proper damping capacity due to the viscoelastic and elastoplastic behaviour of these compounds.Moreover,the sealing properties of these elastomers provide the perfect combination to transform a rubber layer into a flexible vacuum table.Therefore,in order to validate the suitability of this fixture,a test bench is manufactured and tested under uniaxial compression loads and under real finish milling conditions over AA2024 part samples.Finally,a roughness model is proposed and analysed in order to characterize the part vibration sources.
文摘This paper provides stability analysis results for discretised time delay control(TDC)as implemented in a sampled data system with the standard form of zero-order hold.We first substantiate stability issues in discrete-time TDC using an example and propose sufficient stability criteria in the sense of Lyapunov.Important parameters significantly affecting the overall system stability are the sampling period,the desired trajectory and the selection of the reference model dynamics.
基金funding from TECNALIA,Basque Research and Technology Alliance(BRTA)supported by the project aOptimization of Deep Learning algorithms for Edge IoT devices for sensorization and control in Buildings and Infrastructures(EMBED)funded by the Gipuzkoa Provincial Council and approved under the 2023 call of the Guipuzcoan Network of Science,Technology and Innovation Program with File Number 2023-CIEN-000051-01.
文摘In a network environment composed of different types of computing centers that can be divided into different layers(clod,edge layer,and others),the interconnection between them offers the possibility of peer-to-peer task offloading.For many resource-constrained devices,the computation of many types of tasks is not feasible because they cannot support such computations as they do not have enough available memory and processing capacity.In this scenario,it is worth considering transferring these tasks to resource-rich platforms,such as Edge Data Centers or remote cloud servers.For different reasons,it is more exciting and appropriate to download various tasks to specific download destinations depending on the properties and state of the environment and the nature of the functions.At the same time,establishing an optimal offloading policy,which ensures that all tasks are executed within the required latency and avoids excessive workload on specific computing centers is not easy.This study presents two alternatives to solve the offloading decision paradigm by introducing two well-known algorithms,Graph Neural Networks(GNN)and Deep Q-Network(DQN).It applies the alternatives on a well-known Edge Computing simulator called PureEdgeSimand compares them with the two defaultmethods,Trade-Off and Round Robin.Experiments showed that variants offer a slight improvement in task success rate and workload distribution.In terms of energy efficiency,they provided similar results.Finally,the success rates of different computing centers are tested,and the lack of capacity of remote cloud servers to respond to applications in real-time is demonstrated.These novel ways of finding a download strategy in a local networking environment are unique as they emulate the state and structure of the environment innovatively,considering the quality of its connections and constant updates.The download score defined in this research is a crucial feature for determining the quality of a download path in the GNN training process and has not previously been proposed.Simultaneously,the suitability of Reinforcement Learning(RL)techniques is demonstrated due to the dynamism of the network environment,considering all the key factors that affect the decision to offload a given task,including the actual state of all devices.
文摘The progressive automation of transport will imply a new paradigm in mobility, which will profoundly affect people, logistics of goods, as well as other sectors dependent on transport. It is precise within this automation where the development of new driving technologies is going to cause a great impact on the mobility of the near future, and that will have an effect on the economic, natural and social environment. It is therefore a primary issue at the global level, as it is reflected in the work programs of the European Commission in relation to the road transport [1] [2]. Thus, the size impact is caused by the following novelties and advantages: 1) Safety: Accidents reduction caused by human error;2) Efficiency increase in transportation, both in energy consumption and time;3) Comfort for users and professionals who will increase their operational availability to execute other more valuable tasks, both for them and enterprises;4) Social Inclusion: enabling mobility easily for everybody during more time;5) Accessibility, to get to city centers and other difficult reach places. It should be noted that the economic impact projected for automated driving for the years to come ranges up to €71 bn in 2030, when estimated global market for automated vehicles is 44 million vehicles, as is reflected in document Automated Driving Roadmap by ERTRAC [3], European Road Transport Research Advisory Council (http://www.ertrac.org/uploads/documentsearch/id38/ERTRAC_Automated-Driving-2015.pdf). As background that already anticipates these im-provements, the Advance Driver Assistance System (ADAs) have already showed the safety increase in the last ten years, but always maintain a leading role for the driver. Related to the efficiency increase, automated driving offers great opportunities for those companies where mobility is a key factor in operating costs, and affects the whole value chain. The project opportunity is consistent with ERTRAC vision, especially in applications focused on the urban environment [4], where it is expected a deployment of the technology of high level automation in an immediate future. This is possible by the potential to incorporate smart infrastructure to improve guidance and positioning, as well as lower speed, which eases its progressive deployment. The objective of AutoMOST is developing technologies for the automation of vehicles in urban transport and industrial applications, to increase significantly the efficiency, safety and environmental sustainability. More specifically, AutoMOST will allow the implementation of shared control systems (Dual-Mode) [5] for future automated vehicles that allow the services operate more efficiently and flexibly, in a context of intelligent and connected infrastructures.
基金supported by the project "Quality improvement by metallurgical optimised stock temperature evolution in the reheating furnace including microstructure feedback from the rolling mill" (OPTHEAT RFSR-CT-2006-00007) of the Research Fund for Coal and Steel (RFCS) from the European Union
文摘Nowadays it is known that the thermomechanical schedules applied during hot rolling of flat products provide the steel with improved mechanical properties.In this work an optimisation tool,OptiLam (OptiLam v.1),based on a predictive software and capable of generating optimised rolling schedules to obtain the desired mechanical properties in the final product is described.OptiLam includes some well-known metallurgical models which predict microstructural evolution during hot rolling and the transformation austenite/ferrite during the cooling.Furthermore,an optimisation algorithm,which is based on the gradient method,has been added,in order to design thermomechanical sequences when a specific final grain size is desired.OptiLam has been used to optimise rolling parameters,such as strain and temperature.Here,some of the results of the software validation performed by means of hot torsion tests are presented,showing also the functionality of the tool.Finally,the application of classical optimisation models,based on the gradient method,to hot rolling operations,is also discussed.
文摘In the framework of the Santiago of Compostela Cathedral program, a multidisciplinary investigation of the porch of the glory was carried out between 2009 and 2011 to identify the main environmental risks and to develop a preventive conservation planto be integrated in the general management strategy of the Cathedral. The study included historic and archivist research, structural studies, mineralogical analyses, biological sampling, cleaning tests and microclimatic monitoring. The main weathering factors and the related damage processes were identified. Results have shown that the main responsible for the observed damage was the infiltration of rainwater through the roof, due to cracks in the structure of the Cathedral. Other environmental factors having a remarkable impact on the state of conservation of the polychrome and its substrate were the solar radiation, the thermo-hygrometric cycles, the particle deposition and the biological growth. Solutions were suggested to improve the environmental conditions, thus reducing further damage.
基金the R&D&I,Spain grants PID2020-119478GB-I00 and,PID2020-115832GB-I00 funded by MCIN/AEI/10.13039/501100011033.N.Rodríguez-Barroso was supported by the grant FPU18/04475 funded by MCIN/AEI/10.13039/501100011033 and by“ESF Investing in your future”Spain.J.Moyano was supported by a postdoctoral Juan de la Cierva Formación grant FJC2020-043823-I funded by MCIN/AEI/10.13039/501100011033 and by European Union NextGenerationEU/PRTR.J.Del Ser acknowledges funding support from the Spanish Centro para el Desarrollo Tecnológico Industrial(CDTI)through the AI4ES projectthe Department of Education of the Basque Government(consolidated research group MATHMODE,IT1456-22)。
文摘When data privacy is imposed as a necessity,Federated learning(FL)emerges as a relevant artificial intelligence field for developing machine learning(ML)models in a distributed and decentralized environment.FL allows ML models to be trained on local devices without any need for centralized data transfer,thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties.This paradigm has gained momentum in the last few years,spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources.By virtue of FL,models can be learned from all such distributed data sources while preserving data privacy.The aim of this paper is to provide a practical tutorial on FL,including a short methodology and a systematic analysis of existing software frameworks.Furthermore,our tutorial provides exemplary cases of study from three complementary perspectives:i)Foundations of FL,describing the main components of FL,from key elements to FL categories;ii)Implementation guidelines and exemplary cases of study,by systematically examining the functionalities provided by existing software frameworks for FL deployment,devising a methodology to design a FL scenario,and providing exemplary cases of study with source code for different ML approaches;and iii)Trends,shortly reviewing a non-exhaustive list of research directions that are under active investigation in the current FL landscape.The ultimate purpose of this work is to establish itself as a referential work for researchers,developers,and data scientists willing to explore the capabilities of FL in practical applications.
基金supported by the Basque Government through the ELKARTEK program for Research and Innovation,under the BRTAQUANTUM project(Grant Agreement No.KK-2022/00041)。
文摘Privacy-Preserving Computation(PPC)comprises the techniques,schemes and protocols which ensure privacy and confidentiality in the context of secure computation and data analysis.Most of the current PPC techniques rely on the complexity of cryptographic operations,which are expected to be efficiently solved by quantum computers soon.This review explores how PPC can be built on top of quantum computing itself to alleviate these future threats.We analyze quantum proposals for Secure Multi-party Computation,Oblivious Transfer and Homomorphic Encryption from the last decade focusing on their maturity and the challenges they currently face.Our findings show a strong focus on purely theoretical works,but a rise on the experimental consideration of these techniques in the last 5 years.The applicability of these techniques to actual use cases is an underexplored aspect which could lead to the practical assessment of these techniques.
基金Some authors have received support by the Elkartek Programme,Basque Government(Spain)(SMART-EYE(KK-2023/00021).
文摘In this study,we introduced an innovative crop-conditional semantic segmentation architecture that seamlessly incorporates contextual metadata(crop information).This is achieved by merging the contextual information at a late layer stage,allowing the method to be integrated with any semantic segmentation architecture,including novel ones.To evaluate the effectiveness of this approach,we curated a challenging dataset of over 100,000 images captured in real-field conditions using mobile phones.This dataset includes various disease stages across 21 diseases and seven crops(wheat,barley,corn,rice,rape-seed,vinegrape,and cucumber),with the added complexity of multiple diseases coexisting in a single image.We demonstrate that incorporating contextual multi-crop information significantly enhances the performance of semantic segmentation models for plant disease detection.By leveraging crop-specific metadata,our approach achieves higher accuracy and better generalization across diverse crops(F1=0.68,r=0.75)compared to traditional methods(F1=0.24,r=0.68).Additionally,the adoption of a semi-supervised approach based on pseudo-labeling of single diseased plants,offers significant advantages for plant disease segmentation and quantification(F1=0.73,r=0.95).This method enhances the model's performance by leveraging both labeled and unlabeled data,reducing the dependency on extensive manual annotations,which are often time-consuming and costly.The deployment of this algorithm holds the potential to revolutionize the digitization of crop protection product testing,ensuring heightened repeatability while minimizing human subjectivity.By addressing the challenges of semantic segmentation and disease quantification,we contribute to more effective and precise phenotyping,ultimately supporting better crop management and protection strategies.
文摘The use of image based and,recently,deep learning-based systems have provided good results in several applications.Greenhouse trials are key part in the process of developing and testing new herbicides and analyze the response of the species to different products and doses in a controlled way.The assessment of the damage in the plant is daily done in all trials by visual evaluation by experts.This entails time consuming process and lack of repeatability.Greenhouse trials require new digital tools to reduce time consuming process and to endow the experts with more objective and repetitive methods for establishing the damage in the plants.To this end,a novel method is proposed composed by an initial segmentation of the plant species followed by a multibranch convolutional neural network to estimate the damage level.In this way,we overcome the need for costly and unaffordable pixelwise manual segmentation for damage symptoms and we make use of global damage estimation values provided by the experts.The algorithm has been deployed under real greenhouse trials conditions in a pilot study located in BASF in Germany and tested over four species(GLXMA,TRZAW,ECHCG,AMARE).The results show mean average error(MAE)values ranging from 5.20 for AMARE and 8.07 for ECHCG for the estimation of PDCU value,with correlation values(R^(2))higher than 0.85 in all situations,and up to 0.92 in AMARE.These results surpass the inter-rater variability of human experts demonstrating that the proposed automated method is appropriate for automatically assessing greenhouse damage trials.
文摘Diamond like carbon(DLC)coatings typically present good self-lubricating tribological properties that could be of interest in sliding dielectric contacts in multiple electrical applications.In this work electro-tribological studies have been performed on several DLC coatings against aluminum in different humidity conditions,in which the coefficients of friction(CoFs)and electrical contact resistance(ECR)were continuously monitored.Results show that CoF and ECR data can be linked to the properties of the coatings(thickness,finishing,microstructure,residual stresses,and wettability)and the degradation modes of their tribological and electrical properties.Therefore,electro-tribological data can provide valuable information about the performance of dielectric coatings,the reasons behind it,and assist in the development of the coatings.ECR also shows potential for on-line monitoring of coated parts in operation.
基金the FAST project funded under the H2020-NMP-PILOTS-2015 scheme(GA n.685825)for financial support.Some of the materials used in this work were provided by the Texas A&M Health Science Center College of Medicine Institute for Regenerative Medicine at Scott&White through a grant from NCRR of the NIH(Grant#P40RR017447).
文摘Bone infections following open bone fracture or implant surgery remain a challenge in the orthopedics field.In order to avoid high doses of systemic drug administration,optimized local antibiotic release from scaffolds is required.3D additive manufactured(AM)scaffolds made with biodegradable polymers are ideal to support bone healing in non-union scenarios and can be given antimicrobial properties by the incorporation of antibiotics.In this study,ciprofloxacin and gentamicin intercalated in the interlamellar spaces of magnesium aluminum layered double hydroxides(MgAl)andα-zirconium phosphates(ZrP),respectively,are dispersed within a thermoplastic polymer by melt compounding and subsequently processed via high temperature melt extrusion AM(~190◦C)into 3D scaffolds.The inorganic fillers enable a sustained antibiotics release through the polymer matrix,controlled by antibiotics counterions exchange or pH conditions.Importantly,both antibiotics retain their functionality after the manufacturing process at high temperatures,as verified by their activity against both Gram+and Gram-bacterial strains.Moreover,scaffolds loaded with filler-antibiotic do not impair human mesenchymal stromal cells osteogenic differentiation,allowing matrix mineralization and the expression of relevant osteogenic markers.Overall,these results suggest the possibility of fabricating dual functionality 3D scaffolds via high temperature melt extrusion for bone regeneration and infection prevention.
文摘As the global population continues to grow,the enormous stress on our environment and resources is becoming impossible to ignore.A focus on producing and consuming as cheaply as possible has created an economy in which objects are briefly used and then discarded as waste,featuring a linear lifecycle that creates an enormous amount of waste.The alternative to the linear economy“take-make-waste”is called the“circular economy”.Under this paradigm,materials are recycled to build new products or components that are designed and built to promote their reuse and refurbishment.This assures the continuous(re-)exploitation of existing resources,reducing the extraction of new raw materials.However,customers often reject these reused or refurbished products under the suspicion that they do not meet the same usability,safety,or performance levels of new products.In this sense,trustworthy records of historical details of refurbished products could increase consumers’confidence in products and components of the“circular economy”,prioritizing trustworthiness,reliability,and transparency.This work presents a new certification tool based on blockchain technology to guarantee trusted,accurate,transparent,and traceable lifecycle information of products and their components and to generate trustworthy certificates to probe refurbished product historical details.This tool aims to enhance refurbished product visibility by creating the basis for making the circular economy a reality in any domain.
文摘Performing accurate and automated semantic segmentation of vegetation is a first algorithmic step towards more complex models that can extract accurate biological information on crop health,weed presence and phenological state,among others.Traditionally,models based on normalized difference vegetation index(NDVI),near infrared channel(NIR)or RGB have been a good indicator of vegetation presence.However,these methods are not suitable for accurately segmenting vegetation showing damage,which precludes their use for downstream phenotyping algorithms.In this paper,we propose a comprehensive method for robust vegetation segmentation in RGB images that can cope with damaged vegetation.The method consists of a first regression convolutional neural network to estimate a virtual NIR channel from an RGB image.Second,we compute two newly proposed vegetation indices from this estimated virtual NIR:the infrared-dark channel subtraction(IDCS)and infrared-dark channel ratio(IDCR)indices.Finally,both the RGB image and the estimated indices are fed into a semantic segmentation deep convolutional neural network to train a model to segment vegetation regardless of damage or condition.The model was tested on 84 plots containing thirteen vegetation species showing different degrees of damage and acquired over 28 days.The results show that the best segmentation is obtained when the input image is augmented with the proposed virtual NIR channel(F1=0:94)and with the proposed IDCR and IDCS vegetation indices(F1=0:95)derived from the estimated NIR channel,while the use of only the image or RGB indices lead to inferior performance(RGB(F1=0:90)NIR(F1=0:82)or NDVI(F1=0:89)channel).The proposed method provides an end-to-end land cover map segmentation method directly from simple RGB images and has been successfully validated in real field conditions.
文摘Estimation of damage in plants is a key issue for crop protection.Currently,experts in the field manually assess the plots.This is a time-consuming task that can be automated thanks to the latest technology in computer vision(CV).The use of image-based systems and recently deep learning-based systems have provided good results in several agricultural applications.These image-based applications outperform expert evaluation in controlled environments,and now they are being progressively included in non-controlled field applications.A novel solution based on deep learning techniques in combination with image processingmethods is proposed to tackle the estimate of plant damage in the field.The proposed solution is a two-stage algorithm.In a first stage,the single plants in the plots are detected by an object detection YOLO based model.Then a regression model is applied to estimate the damage of each individual plant.The solution has been developed and validated in oilseed rape plants to estimate the damage caused by flea beetle.The crop detection model achieves a mean precision average of 91%with a mAP@0.50 of 0.99 and a mAP@0.95 of 0.91 for oilseed rape specifically.The regression model to estimate up to 60%of damage degree in single plants achieves a MAE of 7.11,and R2 of 0.46 in comparison with manual evaluations done plant by plant by experts.Models are deployed in a docker,and with a REST API communication protocol they can be inferred directly for images acquired in the field from a mobile device.