The study was conducted to identify indoor air quality and the level of thermal comfort at various selected locations in the Faculty of Engineering and Built Environment (FKAB), Universiti Kebangsaan Malaysia (UKM), which has a built-up area of 250,936 ft². Indoor air quality and thermal comfort were measured at the selected locations using indoor air quality equipment (Thermal Comfort SERI). The thermal comfort assessments are based on the Malaysian Code of Practice on Indoor Air Quality 2005 and Moderate Thermal Environments - Determination of the PMV and PPD Indices and Specification of the Conditions for Thermal Comfort (ISO 7730:1994). The data analysis shows that the FKAB building is an inadequately ventilated space: the CO2 concentration in every sampling area evaluated exceeds the recommended limit (>1000 ppm). The ventilation system in the FKAB building is designed to deliver a fixed amount of fresh air into the building from outside, without regard to the number of occupants. This common ventilation design allows CO2 to accumulate dramatically throughout the day and reflects inefficient energy use. The faculty needs a comprehensive energy management system that allows detailed, continuous documentation of the performance and consumption of all energy systems in the building.
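The fixed-supply shortcoming described in this abstract can be illustrated with a single-zone steady-state CO2 mass balance, C = C_out + G/Q. This is a generic sketch, not the study's calculation: the per-person CO2 generation rate, outdoor concentration, and supply airflow below are assumed illustrative values.

```python
def steady_state_co2(outdoor_ppm, occupants, co2_gen_ls, fresh_air_ls):
    """Steady-state CO2 from a single-zone mass balance: C = C_out + G/Q.

    co2_gen_ls: CO2 generated per occupant (L/s); fresh_air_ls: outdoor air supply (L/s).
    """
    generation = occupants * co2_gen_ls          # total CO2 source, L/s
    return outdoor_ppm + generation / fresh_air_ls * 1e6

# A fixed-supply system sized without regard to occupancy (assumed 500 L/s):
fixed_supply = 500.0
for occupants in (20, 60, 120):
    c = steady_state_co2(400.0, occupants, 0.0052, fixed_supply)
    print(occupants, round(c))
```

With the supply flow fixed, the steady-state concentration crosses the 1000 ppm guideline as soon as occupancy grows, which mirrors the behaviour reported for the FKAB building.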
Magnesium (Mg)-based bioresorbable stents represent a potentially groundbreaking advancement in cardiovascular therapy, offering temporary vessel support and complete biodegradability, thereby addressing limitations of traditional stents such as in-stent restenosis and long-term complications. However, challenges such as rapid corrosion and suboptimal endothelialisation have hindered their clinical adoption. This review highlights the latest breakthroughs in surface modification, alloying, and coating strategies to enhance the mechanical integrity, corrosion resistance, and biocompatibility of Mg-based stents. Key surface engineering techniques, including polymer and bioactive coatings, are examined for their role in promoting endothelial healing and minimising inflammatory responses. Future directions are proposed, focusing on personalised stent designs to optimise efficacy and long-term outcomes, positioning Mg-based stents as a transformative solution in interventional cardiology.
Fatigue failure continues to be a significant challenge in designing structural and mechanical components subjected to repeated and complex loading. While earlier studies mainly examined material properties and how stress affects lifespan, this review offers the first comprehensive, multiscale comparison of strategies that optimize geometry to improve fatigue performance. This includes everything from microscopic features, such as the shape of graphite nodules, to large-scale design elements such as fillets, notches, and overall structural layouts. We analyze and combine various methods, including topology and shape optimization, the ability of additive manufacturing to fine-tune internal geometries, and reliability-based design approaches. A key new contribution is our proposal of a standard way to evaluate geometry-focused fatigue design, allowing for consistent comparison and encouraging validation across different fields. Furthermore, we highlight important areas for future research, such as incorporating manufacturing flaws, using multiscale models, and integrating machine learning techniques. This work is the first to provide a broad geometric viewpoint in fatigue engineering, laying the groundwork for future design methods that are driven by data and centered on reliability.
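To see why geometry matters for fatigue life, consider a Basquin-type stress-life estimate in which a fillet's stress concentration factor Kt scales the nominal stress amplitude. The fatigue strength coefficient and exponent below are illustrative textbook-order values, not data from any work covered by this review.

```python
def basquin_life(stress_amp_mpa, kt, sigma_f=900.0, b=-0.09):
    """Basquin estimate N = 0.5 * (Kt*S / sigma_f)**(1/b).

    sigma_f (fatigue strength coefficient, MPa) and b (fatigue strength
    exponent) are illustrative assumptions, not measured properties.
    """
    s_eff = kt * stress_amp_mpa  # local stress amplitude at the feature
    return 0.5 * (s_eff / sigma_f) ** (1.0 / b)

# A generous fillet radius lowers Kt and extends the estimated life:
sharp = basquin_life(200.0, kt=2.5)
generous = basquin_life(200.0, kt=1.5)
print(round(generous / sharp, 1))
```

Even this crude model shows a modest geometric change multiplying the predicted life by orders of magnitude, which is the central leverage the review's geometry-optimization strategies exploit.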
Cardiac tissue engineering aims to efficiently replace or repair injured heart tissue using scaffolds, relevant cells, or their combination. While the combination of scaffolds and relevant cells holds the potential to rapidly remuscularize the heart, thereby avoiding the slow process of cell recruitment, the proper ex vivo cellularization of a scaffold poses a substantial challenge. First, proper diffusion of nutrients and oxygen should be provided to the cell-seeded scaffold. Second, to generate a functional tissue construct, cells can benefit from physiological-like conditions. To meet these challenges, we developed a modular bioreactor for the dynamic cellularization of full-thickness cardiac scaffolds under synchronized mechanical and electrical stimuli. In this unique bioreactor system, we designed a cyclic mechanical load that mimics left-ventricle volume inflation, thus achieving a steady stimulus, as well as an electrical stimulus with an action potential profile to mirror the cells' microenvironment and the electrical stimuli in the heart. These mechanical and electrical stimuli were synchronized according to cardiac physiology and regulated by constant feedback. When applied to a seeded thick porcine cardiac extracellular matrix (pcECM) scaffold, these stimuli improved the proliferation of mesenchymal stem/stromal cells (MSCs) and induced the formation of a dense tissue-like structure near the scaffold's surface. Most importantly, after 35 d of cultivation, the MSCs presented the early cardiac progenitor markers Connexin-43 and α-actinin, which were absent in the control cells. Overall, this research developed a new bioreactor system for cellularizing cardiac scaffolds under cardiac-like conditions, aiming to restore a sustainable dynamic living tissue that can bear the essential cardiac excitation-contraction coupling.
This study examines the dynamic response of two adjacent 9- and 20-story benchmark steel buildings subjected to six near-fault earthquake records. Two-dimensional numerical models were employed to account for the complexities of structure-soil-structure interaction (SSSI). The research focuses on the separation gap between the buildings and the effects of pounding while considering Fixed Base (FB) and SSSI models, evaluated according to the UBC 94 and ASCE 7-16 seismic codes. Key findings reveal that pounding occurs with the UBC 94 separation gap when the earthquake frequency aligns with the system frequency, leading to increased column stresses in the 9-story building. In contrast, the ASCE 7-16 standard effectively prevents pounding in both the FB and SSSI models. Additionally, drifts and displacements of lower floors in SSSI models exceed the allowable limits of ASCE 7-16, underscoring the impact of soil-structure interaction on seismic response.
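The ASCE 7-16 gap requirement referenced in this abstract combines the two buildings' maximum inelastic displacements by the square root of the sum of squares (Section 12.12.3). A minimal sketch with invented displacement values, not values from the study:

```python
import math

def srss_separation(delta1, delta2):
    """Minimum seismic separation gap as the SRSS of the two adjacent
    buildings' maximum inelastic displacements (ASCE 7-16, Sec. 12.12.3)."""
    return math.sqrt(delta1**2 + delta2**2)

# Illustrative roof-level displacements in metres (assumed, not study data):
gap = srss_separation(0.25, 0.40)
print(round(gap, 3))
```

Because the SRSS combination accounts for the low likelihood of both peaks occurring simultaneously, it yields a smaller, more rational gap than a direct sum while still preventing pounding in the study's models.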
Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it in a uniform manner that may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design provides a simple but effective modification to DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7-2.6 points in accuracy and 0.8-1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation. By separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
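The three-phase curriculum can be sketched as a pair of loss weights that switch at fixed fractions of training. The phase boundaries and the joint-phase weighting below are illustrative assumptions, not the paper's tuned schedule.

```python
def tscl_weights(step, total_steps, p1=0.3, p2=0.8):
    """Return (w_pred, w_expl) loss weights for a three-phase curriculum:
    prediction-only -> joint -> explanation-only.

    p1 and p2 (phase boundaries as fractions of training) are assumed values.
    """
    frac = step / total_steps
    if frac < p1:
        return 1.0, 0.0   # phase (i): prediction only
    if frac < p2:
        return 0.5, 0.5   # phase (ii): joint prediction-explanation
    return 0.0, 1.0       # phase (iii): explanation refinement

print([tscl_weights(s, 10) for s in (0, 5, 9)])
```

At each training step the total loss would then be w_pred * L_prediction + w_expl * L_explanation, so the schedule plugs into a DSbS-style objective without any architectural change.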
Root-zone temperature (RZT) strongly affects plant growth, nutrient uptake and tolerance to environmental stress, making its regulation a key challenge in greenhouse cultivation in cold climates. This study aimed to assess the potential of passive techniques, namely black polyethylene mulch and row covers, for modifying RZT dynamics in lettuce (Lactuca sativa L.) production and to evaluate the predictive performance of the eXtreme Gradient Boosting (XGBoost) algorithm. Experiments were conducted in Iğdır, Türkiye, over a 61-day period, with soil temperature continuously monitored at depths of 1-30 cm under mulched and non-mulched conditions, alongside measurements of greenhouse air temperature both with and without row covers. The application of row covers increased internal air temperature by 5.8 °C, while mulching raised RZT by 0.6-1.3 °C, with effects diminishing at deeper layers. XGBoost modeling achieved high predictive accuracy, with RMSE values of 0.150-0.189 °C and R² values above 0.99, and feature-importance analysis indicated that neighboring soil depths were the strongest predictors of RZT. These findings show that integrating row covers and mulching can stabilize the root-zone microclimate without active heating. The XGBoost model provides a robust tool for forecasting soil temperature and supports sustainable greenhouse production in cold regions.
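Since the feature-importance analysis found neighboring soil depths to be the strongest predictors, the model's input rows can be assembled as in this toy sketch. The depth keys and temperature series are invented for illustration, not measurements from the experiment.

```python
def neighbor_depth_features(temps_by_depth, target_depth):
    """Build one feature row per time step from the temperatures at the depths
    immediately above and below the target depth, with the target depth's own
    series as the label. Input data here is purely illustrative."""
    depths = sorted(temps_by_depth)
    i = depths.index(target_depth)
    above, below = depths[i - 1], depths[i + 1]
    X = list(zip(temps_by_depth[above], temps_by_depth[below]))
    y = temps_by_depth[target_depth]
    return X, y

# depth (cm) -> hourly soil temperature series (invented values):
temps = {1: [14.2, 14.0], 5: [13.1, 13.0], 10: [12.4, 12.4]}
X, y = neighbor_depth_features(temps, target_depth=5)
print(X, y)
```

Rows of this shape would then be fed to a gradient-boosting regressor such as XGBoost to forecast the target-depth temperature.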
Arid and semi-arid ecosystems are prone to extensive fires due to specific climatic conditions, sparse vegetation cover, and a high density of fine fuels. Understanding the flammability characteristics of land covers is essential for fire management and for designing land restoration programs in arid and semi-arid ecosystems. This study provided a new approach to evaluating the flammability of shrublands and woodlands using flammability indices (FIs), including time to ignition (TI), duration of combustion (DC), and flame height (FH) of plant species and their relative frequencies in the Dalfard Basin of southeastern Iran. The results showed a significant difference in FIs between land covers. Shrublands had higher flammability potential than woodlands. Plant moisture content had a negative relationship with TI (P < 0.010) and no significant relationship with DC or FH (P > 0.050). Artemisia spp., Astragalus gossypinus Fischer, Amygdalus scoparia Spach, and Cymbopogon jwarancusa (Jones) Schult. had the highest FIs. Tree species such as Rhazya stricta Decne. and Pistacia atlantica Desf. showed greater resistance to fire. Using principal component analysis, the relationship between species and FIs was examined, and the TI of wet fuel was the most important FI in relation to species. A structural equation model showed that life form (P < 0.001) was the most important flammability driver. Precipitation (P < 0.010) and legume species (P < 0.010) were significantly related to flammability in arid land. This study emphasizes the importance of managing high-risk species and using resistant species in vegetation restoration, and shows that combining species FIs with their abundance is an effective tool for assessing fire risk and managing fuel at the plant community scale.
Nonlinear static procedures are widely adopted in structural engineering practice for seismic performance assessment due to their simplicity and computational efficiency. However, their reliability depends heavily on how the nonlinear behaviour of structural components is represented. The recent earthquakes in Albania (2019) and Türkiye (2023) have underscored the need for accurate assessment techniques, particularly for older reinforced concrete buildings with poor detailing. This study quantifies the discrepancies between default and user-defined component modelling in pushover analysis of pre-modern reinforced concrete structures, analysing two representative low- and mid-rise reinforced concrete frame buildings. The lumped plasticity approach incorporates moment-rotation relationships derived from actual member properties and reinforcement configurations, while the distributed plasticity approach uses software-generated default properties based on modern codes. Results show that the distributed plasticity models systematically overestimate both the strength and the deformation capacity by up to 35% compared to lumped plasticity models, especially in buildings with poor detailing and low concrete strength. These findings demonstrate that default software procedures, widely used in practice but not validated for pre-modern structures, produce dangerously unconservative seismic performance estimates. The study provides quantitative evidence of the critical need for tailored modelling strategies that reflect the actual conditions of the existing building stock.
In the quest to enhance energy efficiency and reduce environmental impact in the transportation sector, the recovery of waste heat from diesel engines has become a critical area of focus. This study provided an exhaustive thermodynamic analysis optimizing Organic Rankine Cycle (ORC) systems for waste heat recovery from diesel engines. The study assessed the performance of five candidate working fluids (R11, R123, R113, R245fa, and R141b) under a range of operating conditions, specifically varying overheat temperatures and evaporation pressures. The results indicated that the choice of working fluid substantially influences the system's exergetic efficiency, net output power, and thermal efficiency. R245fa showed an outstanding net output power of 30.39 kW at high overheat conditions, outperforming R11, which is significant for high-temperature waste heat recovery. At lower temperatures, R11 and R113 demonstrated higher exergetic efficiencies, with R11 reaching a peak exergetic efficiency of 7.4% at an evaporation pressure of 10 bar and an overheat of 10 °C. The study also revealed that controlling the overheat and optimizing the evaporation pressure are crucial for enhancing the net output power of the ORC system. Specifically, at an evaporation pressure of 30 bar and an overheat of 0 °C, R113 exhibited the lowest exergetic destruction of 544.5 kJ/kg, making it a suitable choice for minimizing irreversible losses. These findings are instrumental for understanding the performance of ORC systems in waste heat recovery applications and offer valuable insights for the design and operation of more efficient and environmentally friendly diesel engine systems.
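The cycle-level quantities compared in such studies, net output power and thermal efficiency, follow from a first-law balance over the turbine, pump and evaporator. A minimal sketch with illustrative specific enthalpies, not property data for any of the five fluids:

```python
def orc_performance(m_dot, h_turb_in, h_turb_out, h_pump_in, h_pump_out):
    """First-law ORC summary from specific enthalpies (kJ/kg) and mass flow (kg/s):
    returns (net power in kW, thermal efficiency). All state values below are
    invented for illustration."""
    w_turbine = m_dot * (h_turb_in - h_turb_out)   # turbine output, kW
    w_pump = m_dot * (h_pump_out - h_pump_in)      # pump input, kW
    q_in = m_dot * (h_turb_in - h_pump_out)        # heat absorbed in evaporator, kW
    w_net = w_turbine - w_pump
    return w_net, w_net / q_in

w_net, eta = orc_performance(1.2, 480.0, 452.0, 240.0, 241.5)
print(round(w_net, 2), round(eta, 4))
```

Raising the turbine inlet enthalpy (more overheat) or lowering the turbine outlet enthalpy (higher pressure ratio) increases w_net, which is the trade-off the study explores across fluids.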
Breast cancer screening programs rely heavily on mammography for early detection; however, diagnostic performance is strongly affected by inter-reader variability, breast density, and the limitations of conventional computer-aided detection systems. Recent advances in deep learning have enabled more robust and scalable solutions for large-scale screening, yet a systematic comparison of modern object detection architectures on nationally representative datasets remains limited. This study presents a comprehensive quantitative comparison of prominent deep learning-based object detection architectures for Artificial Intelligence-assisted mammography analysis using the MammosighTR dataset, developed within the Turkish National Breast Cancer Screening Program. The dataset comprises 12,740 patient cases collected between 2016 and 2022, annotated with BI-RADS categories, breast density levels, and lesion localization labels. A total of 31 models were evaluated, including One-Stage, Two-Stage, and Transformer-based architectures, under a unified experimental framework at both the patient and breast levels. The results demonstrate that Two-Stage architectures consistently outperform One-Stage models, achieving approximately 2%-4% higher Macro F1-scores and more balanced precision-recall trade-offs, with Double-Head R-CNN and Dynamic R-CNN yielding the highest overall performance (Macro F1 ≈ 0.84-0.86). This advantage is primarily attributed to the region proposal mechanism and the improved class balance inherent to Two-Stage designs. One-Stage detectors exhibited higher sensitivity and faster inference, reaching recall values above 0.88, but experienced minor reductions in precision and overall accuracy (≈1%-2%) compared with Two-Stage models. Among Transformer-based architectures, the Deformable DEtection TRansformer demonstrated strong robustness and consistency across datasets, achieving Macro F1-scores comparable to CNN-based detectors (≈0.83-0.85) while exhibiting minimal performance degradation under distributional shifts. Breast density-based analysis revealed increased misclassification rates in medium-density categories (types B and C), whereas Transformer-based architectures maintained more stable performance in high-density type D tissue. These findings quantitatively confirm that both architectural design and tissue characteristics play a decisive role in diagnostic accuracy. Overall, the study provides a reproducible benchmark and highlights the potential of hybrid approaches that combine the accuracy of Two-Stage detectors with the contextual modeling capability of Transformer architectures for clinically reliable breast cancer screening systems.
Accurate short-term electricity price forecasts are essential for market participants to optimize bidding strategies, hedge risk and plan generation schedules. By leveraging advanced data analytics and machine learning methods, accurate and reliable price forecasts can be achieved. This study forecasts day-ahead prices in Türkiye's electricity market using eXtreme Gradient Boosting (XGBoost). We benchmark XGBoost against four alternatives: Support Vector Machines (SVM), Long Short-Term Memory (LSTM), Random Forest (RF), and Gradient Boosting (GBM). The models use 8760 hourly observations from 2023 provided by Energy Exchange Istanbul (EXIST). All models were trained on an identical chronological 80/20 train-test split, with hyperparameters tuned via 5-fold cross-validation on the training set. XGBoost achieved the best performance (Mean Absolute Error (MAE) = 144.8 TRY/MWh, Root Mean Square Error (RMSE) = 201.8 TRY/MWh, coefficient of determination (R²) = 0.923) while training in 94 s. To enhance interpretability and identify key drivers, we employed Shapley Additive Explanations (SHAP), which highlighted a strong association between higher prices and increased natural-gas-based generation. The results provide a clear performance benchmark and practical guidance for selecting forecasting approaches in day-ahead electricity markets.
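The reported scores are standard regression metrics. A self-contained sketch of how MAE, RMSE and R² are computed, using toy prices rather than EXIST data:

```python
import math

def regression_metrics(actual, predicted):
    """Return (MAE, RMSE, R^2) for paired series of actual and predicted values."""
    n = len(actual)
    errors = [a - p for a, p in zip(actual, predicted)]
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mean_a = sum(actual) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2

# Toy hourly prices in TRY/MWh (illustrative, not market data):
mae, rmse, r2 = regression_metrics([2000, 2100, 1900, 2050], [1980, 2120, 1930, 2040])
print(round(mae, 1), round(rmse, 1), round(r2, 3))
```

RMSE penalizes large errors more than MAE, which is why the two rankings in a benchmark like this can differ even on the same test set.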
Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where missing data often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms, namely Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the obtained results show that our proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component in the loss function. Additionally, we assessed the downstream utility of imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations, making it a promising solution for robust data recovery in clinical applications.
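A pure-Python sketch of a composite objective of this general shape (masked MSE on missing entries, plus a noise-aware term and a variance penalty) is shown below. The weighting coefficients and the exact form of the noise and variance terms are assumptions for illustration; the paper's loss may differ in detail.

```python
def composite_loss(x_true, x_hat, mask, alpha=0.1, beta=0.05):
    """Illustrative composite imputation loss. mask[i] == 0 marks a missing
    entry (reconstruction target), mask[i] == 1 an observed entry.
    alpha and beta are assumed weights, not values from the paper."""
    # (i) masked MSE: error on the entries the imputer must fill in
    missing = [(t - h) ** 2 for t, h, m in zip(x_true, x_hat, mask) if m == 0]
    masked_mse = sum(missing) / max(len(missing), 1)
    # (ii) noise-aware term: also penalise error on observed entries
    observed = [(t - h) ** 2 for t, h, m in zip(x_true, x_hat, mask) if m == 1]
    noise_term = sum(observed) / max(len(observed), 1)
    # (iii) variance penalty: discourage collapsed, near-constant reconstructions
    mean_h = sum(x_hat) / len(x_hat)
    var_h = sum((h - mean_h) ** 2 for h in x_hat) / len(x_hat)
    var_penalty = max(0.0, 1.0 - var_h)
    return masked_mse + alpha * noise_term + beta * var_penalty

loss = composite_loss([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 2.0, 4.0], [1, 1, 0, 1])
print(loss)
```

In an autoencoder training loop, this scalar would be minimized over reconstructions x_hat while gradients flow only through the model's outputs.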
The research findings on the ground motion and liquefaction potential analyses of the 2018 Great Indonesia Earthquake (Mw 7.5) are significant and crucial. The earthquake triggered soil-structure damage due to liquefaction. This study, which thoroughly investigated four sites at Palu, was conducted by performing a comprehensive ground motion parameter analysis. The ground motion characteristics were presented and justified, particularly for the most impacted direction. Ground motion predictions were analysed to define the spectral accelerations, and spectral matching was conducted to produce ground motions for each site. Non-linear seismic ground response analysis based on a pressure-dependent hyperbolic model was performed to investigate cyclic soil behaviour. The results revealed that ground motion plays a crucial role in significant soil damage, and the earthquake energy could trigger deep liquefaction. The vertical ground motion, as the most significant component, is essential in determining deep liquefaction. The discussion on the impact of liquefaction based on the results of the numerical analysis is presented. Significant ground motion with a longer duration could have a substantial impact on deep liquefaction in the study area. These findings depict how the 2018 Indonesia Earthquake (Mw 7.5) triggered a mega-liquefaction in Palu City. The results could enhance the understanding of the importance of seismic hazard assessment. It is recommended that site investigation and soil improvement be planned to counteract liquefaction damage before construction. This study also suggests conducting seismic hazard assessments for city development to minimise the potential disaster impact in the study area.
The Pigeon-Inspired Optimization (PIO) algorithm constitutes a metaheuristic method derived from the homing behaviour of pigeons. Initially formulated for three-dimensional path planning in unmanned aerial vehicles (UAVs), the algorithm has attracted considerable academic and industrial interest owing to its effective balance between exploration and exploitation, coupled with advantages in real-time performance and robustness. Nevertheless, as applications have diversified, limitations in convergence precision and a tendency toward premature convergence have become increasingly evident, highlighting a need for improvement. This review systematically outlines the developmental trajectory of the PIO algorithm, with a particular focus on its core applications in UAV navigation, multi-objective formulations, and a spectrum of variant models that have emerged in recent years. It offers a structured analysis of the foundational principles underlying PIO and conducts a comparative assessment of various performance-enhanced versions, including hybrid models that integrate mechanisms from other optimization paradigms. Additionally, the strengths and weaknesses of distinct PIO variants are critically examined from multiple perspectives, including intrinsic algorithmic characteristics, suitability for specific application scenarios, objective function design, and the rigor of the statistical evaluation methodologies employed in empirical studies. Finally, this paper identifies the principal challenges within current PIO research and proposes several prospective research directions. Future work should focus on mitigating premature convergence by refining the two-phase search structure and adjusting the exponential decrease in the number of individuals during the landmark operator. Enhancing parameter adaptation strategies, potentially using reinforcement learning for dynamic tuning, and advancing theoretical analyses of convergence and complexity are also critical. Further applications should be explored in constrained path planning, Neural Architecture Search (NAS), and other real-world multi-objective problems. For Multi-objective PIO (MPIO), key improvements include controlling the growth of the external archive and designing more effective selection mechanisms to maintain convergence efficiency. These efforts are expected to strengthen both the theoretical foundation and practical versatility of PIO and its variants.
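The first of PIO's two phases, the map-and-compass operator, updates each pigeon's velocity as V(t) = V(t-1)·exp(-R·t) + rand·(X_best - X(t-1)). A minimal single-phase sketch on a sphere test function; the map factor R, the bounds, and the swarm size are illustrative choices, and the landmark operator (with its shrinking population) is omitted.

```python
import math
import random

def pio_map_compass(f, dim, n_pigeons=20, iters=100, R=0.2, lo=-5.0, hi=5.0, seed=1):
    """Map-and-compass phase of PIO, minimising f over [lo, hi]^dim.
    Velocity: V <- V*exp(-R*t) + r*(X_best - X). Parameters are illustrative."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_pigeons)]
    V = [[0.0] * dim for _ in range(n_pigeons)]
    best = min(X, key=f)[:]
    for t in range(1, iters + 1):
        decay = math.exp(-R * t)  # map factor damps old velocity over time
        for i in range(n_pigeons):
            r = rng.random()
            for d in range(dim):
                V[i][d] = V[i][d] * decay + r * (best[d] - X[i][d])
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if f(X[i]) < f(best):
                best = X[i][:]
    return best

def sphere(x):
    return sum(v * v for v in x)

best = pio_map_compass(sphere, dim=2)
print(round(sphere(best), 6))
```

The exponential decay term is exactly the mechanism the review's future-work discussion targets: tuning R (or learning it dynamically) controls how quickly the swarm stops exploring and collapses onto the current best, which is where premature convergence originates.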
Accurate precipitation estimation in semiarid, topographically complicated areas is critical for water resource management and climate risk monitoring. This work provides a detailed, multi-scale evaluation of four major satellite precipitation products (CHIRPS, PERSIANN-CDR, IMERG-F v07, and GSMaP) over Isfahan province, Iran, over a 9-year period (2015-2023). The performance of these products was benchmarked against a dense network of 98 rain gauges using a suite of continuous and categorical statistical metrics, following a two-stage quality control protocol to remove outliers and false alarms. The results revealed that the performance of all products improves with temporal aggregation. At the daily level, GSMaP performed marginally better, although all products were associated with considerable uncertainty. At the monthly and annual levels, the GPM-era products (IMERG and GSMaP) clearly beat the other two, establishing themselves as dependable tools for long-term hydro-climatological studies. Error analysis revealed that topography is the dominant regulating factor, creating a systematic elevation-dependent bias, largely characterized by underestimation from most products in high-elevation areas, though the PERSIANN-CDR product exhibited a contrasting overestimation tendency. Finally, the findings highlight the importance of implementing local, elevation-dependent calibration before deploying these products in hydrological modeling.
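The categorical metrics used in satellite-product evaluations of this kind are derived from a 2x2 rain/no-rain contingency table against the gauges. A sketch with invented counts, not values from this evaluation:

```python
def categorical_scores(hits, misses, false_alarms):
    """Standard rain-detection scores from a 2x2 contingency table:
    POD (probability of detection), FAR (false alarm ratio),
    CSI (critical success index)."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi

# Illustrative daily rain/no-rain counts for one product against one gauge:
pod, far, csi = categorical_scores(hits=60, misses=20, false_alarms=20)
print(pod, far, round(csi, 2))
```

CSI is the usual headline score because it penalizes both misses and false alarms while ignoring the (typically dominant) correct-negative dry days.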
In dry-coupled ultrasonic thickness measurement, thick rubber layers introduce high-amplitude parasitic echoes that obscure defect signals and degrade thickness accuracy. Existing methods struggle to resolve overlapping echoes under variable coupling conditions and non-stationary noise. This study proposes a novel dual-criterion framework integrating energy contribution and statistical impulsivity metrics to isolate specimen reflections from coupling-layer interference. By decomposing A-scan signals into Intrinsic Mode Functions (IMFs), the framework employs energy contribution thresholds (>85%) and kurtosis indices (>3) to autonomously select the IMFs containing valid specimen echoes. Hybrid time-frequency thresholding further suppresses interference through amplitude filtering and spectral focusing. Experimental results demonstrate the framework's robustness, achieving 92.3% thickness accuracy for 5 mm steel specimens with 5 mm rubber coupling, outperforming conventional methods by up to 18.7%. The dual-criterion approach reduces operator dependency by 37% and maintains ΔT < 0.03 mm under surface roughness up to 6.3 μm, offering a practical solution for industrial nondestructive testing with thick dry-coupled interfaces.
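The two selection criteria can be computed per IMF as an energy fraction and a kurtosis index. The sketch below uses invented samples and the standard Pearson kurtosis, under which a Gaussian signal scores about 3 and impulsive, echo-like signals score higher; the decomposition step itself (EMD) is outside this sketch.

```python
def energy_contribution(imf, total_energy):
    """Fraction of the A-scan's total energy carried by one IMF."""
    return sum(v * v for v in imf) / total_energy

def kurtosis(x):
    """Pearson kurtosis m4/m2^2 (normal -> ~3); impulsive IMFs score > 3."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / (m2 * m2)

# An impulsive, echo-like sequence versus a flat oscillation (invented samples):
impulsive = [0.0] * 30 + [1.0] + [0.0] * 30
print(kurtosis(impulsive) > 3)
```

An IMF would be retained only when both criteria pass (energy contribution above the 85% threshold and kurtosis above 3), which is what makes the selection autonomous rather than operator-tuned.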
This paper studies wireless vehicular communication (VehCom) in intelligent transportation systems using orthogonal frequency division multiplexing with index modulation (OFDM-IM). In the concept of IM, data is transmitted not only through the modulated symbols but also via the indices of the active subcarriers. In contrast to the original OFDM, OFDM-IM activates only a subset of the subcarriers, increasing energy efficiency. However, pilot-assisted channel estimation (CE) is a significant challenge in OFDM-IM, where the desired pilot subcarrier interval is tied to the OFDM-IM subblock length. This paper proposes a Walsh-scattered pilot-assisted CE scheme for OFDM-IM VehCom. The optimum Walsh-scattered pilot assignment is proposed to improve transmission efficiency. Furthermore, a space-time block code with a high transmit diversity gain is employed for OFDM-IM VehCom to enhance signal quality. The results show that the proposed method achieves higher CE accuracy and a better bit-error rate, with significant spectral and energy efficiency gains over conventional methods.
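The index-modulation idea has a simple bit-budget arithmetic per subblock: floor(log2 C(n,k)) index bits select which k of the n subcarriers are active, and each active subcarrier carries log2 M symbol bits. A quick sketch (the (n, k, M) values are a common textbook configuration, not necessarily this paper's):

```python
from math import comb, floor, log2

def ofdm_im_bits(n, k, M):
    """Bits per OFDM-IM subblock: index bits from the choice of the k active
    subcarriers out of n, plus k*log2(M) bits from the M-ary symbols."""
    index_bits = floor(log2(comb(n, k)))
    symbol_bits = k * int(log2(M))
    return index_bits, symbol_bits

# Classic (n, k) = (4, 2) subblock with QPSK (M = 4):
print(ofdm_im_bits(4, 2, 4))
```

This is also why the pilot interval is tied to the subblock length, as the abstract notes: pilots cannot fall on subcarriers whose on/off state itself carries information, so their placement must respect subblock boundaries.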
Sudden wildfires cause significant global ecological damage. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrains. To address these challenges, this study proposes a novel forest fire detection model utilizing audio classification and machine learning. We developed an audio-based pipeline using real-world environmental sound recordings. Sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally impacted by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for generating complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4% ± 1.6 accuracy, 91.2% ± 1.8 F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared to traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
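The acoustic front end described above (waveform to Mel-spectrogram, then CNN input) can be sketched as follows; the sampling rate, FFT size, hop length, and filter count are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def hz_to_mel(f):
    # HTK-style mel scale conversion
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def log_mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    # Frame the signal, window each frame, take the power spectrum
    window = np.hanning(n_fft)
    frames = [signal[s:s + n_fft] * window
              for s in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    mel = power @ mel_filterbank(n_mels, n_fft, sr).T
    return np.log(mel + 1e-10)  # log compression, as CNN input

# Example: 1 s of synthetic noise standing in for an environmental recording
rng = np.random.default_rng(0)
spec = log_mel_spectrogram(rng.standard_normal(16000))
print(spec.shape)  # (n_frames, n_mels)
```

The resulting 2-D log-mel array is what a 2-D CNN would consume as a single-channel "image" of the sound.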
Rockfall hazards pose significant risks to both cultural heritage and populated areas, necessitating comprehensive assessment methodologies. Despite extensive research on rockfalls, only a small number of studies have directly compared empirical methods with modelling approaches. This study investigated rockfalls in five settlements within the Cappadocia region of Türkiye, employing both empirical methods and advanced three-dimensional (3D) probabilistic modeling. The energy line angle approach was applied to identify rockfall propagation zones, while high-resolution digital surface models derived from unmanned aerial vehicle (UAV) imagery facilitated detailed 3D rockfall simulations. Cappadocia's unique geological setting, comprising alternating layers of ignimbrites and weaker fluviolacustrine deposits, renders it highly susceptible to rockfalls intensified by wetting-drying and freeze-thaw cycles. Results indicate that rockfall propagation characteristics vary markedly between settlements: Göre and Tatlarin exhibit shorter runout distances due to basalt-dominated slopes, whereas Akköy, Soğanlı and Şahinefendi display longer trajectories associated with welded ignimbrites. Empirical cone propagation analyses correspond broadly with field observations, but variations in energy line angles (23°–33°) highlight the necessity for site-specific calibration. Comparative evaluations demonstrate that 3D probabilistic modeling better captures local-scale block dynamics and identifies high-risk areas affected by topographic and structural features such as rockfall ditches. These findings emphasize the importance of integrating empirical and 3D approaches to improve hazard zoning, optimize mitigation structures and guide the protection of Cappadocia's unique cultural heritage landscape.
Abstract: The study was conducted to identify indoor air quality and the level of thermal comfort at various selected locations in the Faculty of Engineering and Built Environment (FKAB), Universiti Kebangsaan Malaysia (UKM), with a built-up area of 250,936 ft². The indoor air quality and thermal comfort were measured at various selected locations using indoor air quality equipment (Thermal Comfort SERI). The thermal comfort assessments are based on the Malaysian Code of Practice on Indoor Air Quality 2005 and Moderate Thermal Environments-Determination of the PMV and PPD Indices and Specification of the Conditions for Thermal Comfort (ISO 7730:1994). From the data analysis, the FKAB building is considered an inadequately ventilated space. The concentration of CO2 in all sampling areas evaluated exceeds the recommended concentration (> 1000 ppm). The ventilation system used in the FKAB building is designed to deliver a fixed amount of fresh air into the building from outside, without consideration of the number of occupants. This common ventilation design increases the amount of CO2 dramatically throughout the day, reflecting inefficient energy use. The faculty needs to be equipped with a comprehensive energy management system that allows detailed documentation of the continuous performance of all energy systems and consumption in the building.
Abstract: Magnesium (Mg)-based bioresorbable stents represent a potentially groundbreaking advancement in cardiovascular therapy, offering temporary vessel support and complete biodegradability, addressing limitations of traditional stents such as in-stent restenosis and long-term complications. However, challenges such as rapid corrosion and suboptimal endothelialisation have hindered their clinical adoption. This review highlights the latest breakthroughs in surface modification, alloying, and coating strategies to enhance the mechanical integrity, corrosion resistance, and biocompatibility of Mg-based stents. Key surface engineering techniques, including polymer and bioactive coatings, are examined for their role in promoting endothelial healing and minimising inflammatory responses. Future directions are proposed, focusing on personalised stent designs to optimise efficacy and long-term outcomes, positioning Mg-based stents as a transformative solution in interventional cardiology.
Abstract: Fatigue failure continues to be a significant challenge in designing structural and mechanical components subjected to repeated and complex loading. While earlier studies mainly examined material properties and how stress affects lifespan, this review offers the first comprehensive, multiscale comparison of strategies that optimize geometry to improve fatigue performance. This includes everything from microscopic features, such as the shape of graphite nodules, to large-scale design elements such as fillets, notches, and overall structural layouts. We analyze and combine various methods, including topology and shape optimization, the ability of additive manufacturing to fine-tune internal geometries, and reliability-based design approaches. A key new contribution is our proposal of a standard way to evaluate geometry-focused fatigue design, allowing for consistent comparison and encouraging validation across different fields. Furthermore, we highlight important areas for future research, such as incorporating manufacturing flaws, using multiscale models, and integrating machine learning techniques. This work is the first to provide a broad geometric viewpoint in fatigue engineering, laying the groundwork for future design methods that are driven by data and centered on reliability.
Funding: Funded by the Israeli Ministry of Innovation, Science and Technology (Grant No. 3-11873); the Israel Science Foundation (Grant No. 1563/10); the Randy L. and Melvin R. Berlin Family Research Center for Regenerative Medicine; and the Gurwin Family Foundation.
Abstract: Cardiac tissue engineering aims to efficiently replace or repair injured heart tissue using scaffolds, relevant cells, or their combination. While the combination of scaffolds and relevant cells holds the potential to rapidly remuscularize the heart, thereby avoiding the slow process of cell recruitment, the proper ex vivo cellularization of a scaffold poses a substantial challenge. First, proper diffusion of nutrients and oxygen should be provided to the cell-seeded scaffold. Second, to generate a functional tissue construct, cells can benefit from physiological-like conditions. To meet these challenges, we developed a modular bioreactor for the dynamic cellularization of full-thickness cardiac scaffolds under synchronized mechanical and electrical stimuli. In this unique bioreactor system, we designed a cyclic mechanical load that mimics left-ventricle volume inflation, thus achieving a steady stimulus, as well as an electrical stimulus with an action potential profile to mirror the cells' microenvironment and electrical stimuli in the heart. These mechanical and electrical stimuli were synchronized according to cardiac physiology and regulated by constant feedback. When applied to a seeded thick porcine cardiac extracellular matrix (pcECM) scaffold, these stimuli improved the proliferation of mesenchymal stem/stromal cells (MSCs) and induced the formation of a dense tissue-like structure near the scaffold's surface. Most importantly, after 35 days of cultivation, the MSCs presented the early cardiac progenitor markers Connexin-43 and α-actinin, which were absent in the control cells. Overall, this research developed a new bioreactor system for cellularizing cardiac scaffolds under cardiac-like conditions, aiming to restore a sustainable dynamic living tissue that can bear the essential cardiac excitation–contraction coupling.
Abstract: This study examines the dynamic response of two adjacent 9- and 20-story benchmark steel buildings subjected to six near-fault earthquake records. Two-dimensional numerical models were employed to account for the complexities of structure-soil-structure interaction (SSSI). The research focuses on the separation gap between the buildings and the effects of pounding while considering Fixed Base (FB) and SSSI models, evaluated according to the UBC 94 and ASCE 7-16 seismic codes. Key findings reveal that pounding occurs with the UBC 94 separation gap when the earthquake frequency aligns with the system frequency, leading to increased column stresses in the 9-story building. In contrast, the ASCE 7-16 standard effectively prevents pounding in both the FB and SSSI models. Additionally, drifts and displacements of lower floors in SSSI models exceed the allowable limits of ASCE 7-16, underscoring the impact of soil-structure interaction on seismic response.
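As background for the separation-gap comparison above, a code-style gap check typically combines the two buildings' amplified inelastic displacements by SRSS, in the spirit of ASCE 7-16's Cd-amplified displacement (Eq. 12.8-15 style). The sketch below is a hedged illustration; the roof displacements, Cd, and Ie values are hypothetical, not taken from the study:

```python
import math

def amplified_displacement(delta_elastic, cd, ie):
    """Elastic displacement amplified by the deflection factor Cd and
    divided by the importance factor Ie (ASCE 7-16 Eq. 12.8-15 style)."""
    return cd * delta_elastic / ie

def required_separation_srss(delta1, delta2):
    """SRSS combination of the two buildings' maximum inelastic
    displacements, a common code-style separation check."""
    return math.sqrt(delta1 ** 2 + delta2 ** 2)

# Hypothetical elastic roof displacements (m) for the 9- and 20-story frames
d9 = amplified_displacement(0.06, cd=5.5, ie=1.0)   # 0.33 m
d20 = amplified_displacement(0.10, cd=5.5, ie=1.0)  # 0.55 m
gap = required_separation_srss(d9, d20)
print(round(gap, 3))  # ≈ 0.641 m
```

A simple absolute-sum rule (d9 + d20) would be more conservative than SRSS; older provisions such as UBC 94 used different combination rules, which is one source of the pounding differences the study reports.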
Abstract: Knowledge distillation has become a standard technique for compressing large language models into efficient student models, but existing methods often struggle to balance prediction accuracy with explanation quality. Recent approaches such as Distilling Step-by-Step (DSbS) introduce explanation supervision, yet they apply it in a uniform manner that may not fully exploit the different learning dynamics of prediction and explanation. In this work, we propose a task-structured curriculum learning (TSCL) framework that structures training into three sequential phases: (i) prediction-only, to establish stable feature representations; (ii) joint prediction-explanation, to align task outputs with rationale generation; and (iii) explanation-only, to refine the quality of rationales. This design provides a simple but effective modification to DSbS, requiring no architectural changes and adding negligible training cost. We justify the phase scheduling with ablation studies and convergence analysis, showing that an initial prediction-heavy stage followed by a balanced joint phase improves both stability and explanation alignment. Extensive experiments on five datasets (e-SNLI, ANLI, CommonsenseQA, SVAMP, and MedNLI) demonstrate that TSCL consistently outperforms strong baselines, achieving gains of +1.7–2.6 points in accuracy and 0.8–1.2 in ROUGE-L, corresponding to relative error reductions of up to 21%. Beyond lexical metrics, human evaluation and ERASER-style faithfulness diagnostics confirm that TSCL produces more faithful and informative explanations. Comparative training curves further reveal faster convergence and lower variance across seeds. Efficiency analysis shows less than 3% overhead in wall-clock training time and no additional inference cost, making the approach practical for real-world deployment. This study demonstrates that a simple task-structured curriculum can significantly improve the effectiveness of knowledge distillation. By separating and sequencing objectives, TSCL achieves a better balance between accuracy, stability, and explanation quality. The framework generalizes across domains, including medical NLI, and offers a principled recipe for future applications in multimodal reasoning and reinforcement learning.
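The three-phase schedule described above can be sketched as a step-dependent weighting of the prediction and explanation losses; the phase boundaries and weights below are illustrative assumptions, not the paper's tuned values:

```python
def tscl_weights(step, total_steps, p1=0.3, p2=0.8):
    """Phase-scheduled loss weights (alpha_pred, alpha_expl) for a
    three-phase curriculum; boundaries p1/p2 are illustrative."""
    t = step / total_steps
    if t < p1:      # Phase (i): prediction-only warm-up
        return 1.0, 0.0
    elif t < p2:    # Phase (ii): joint prediction + explanation
        return 0.5, 0.5
    else:           # Phase (iii): explanation-only refinement
        return 0.0, 1.0

def curriculum_loss(pred_loss, expl_loss, step, total_steps):
    # Combine the two task losses according to the current phase
    a_p, a_e = tscl_weights(step, total_steps)
    return a_p * pred_loss + a_e * expl_loss

print(curriculum_loss(2.0, 4.0, step=100, total_steps=1000))  # phase (i): 2.0
print(curriculum_loss(2.0, 4.0, step=500, total_steps=1000))  # phase (ii): 3.0
print(curriculum_loss(2.0, 4.0, step=900, total_steps=1000))  # phase (iii): 4.0
```

The point of the design is that only the scalar weights change between phases, which is why no architectural changes and negligible training cost are needed.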
Abstract: Root-zone temperature (RZT) strongly affects plant growth, nutrient uptake and tolerance to environmental stress, making its regulation a key challenge in greenhouse cultivation in cold climates. This study aimed to assess the potential of passive techniques, namely black polyethylene mulch and row covers, for modifying RZT dynamics in lettuce (Lactuca sativa L.) production and to evaluate the predictive performance of the eXtreme Gradient Boosting (XGBoost) algorithm. Experiments were conducted in Iğdır, Türkiye, over a 61-day period, with soil temperature continuously monitored at depths of 1–30 cm under mulched and non-mulched conditions, alongside measurements of greenhouse air temperature both with and without row covers. The application of row covers increased internal air temperature by 5.8 °C, while mulching raised RZT by 0.6–1.3 °C, with effects diminishing at deeper layers. XGBoost modeling achieved high predictive accuracy, with RMSE values of 0.150–0.189 °C and R² values above 0.99, and feature-importance analysis indicated that neighboring soil depths were the strongest predictors of RZT. These findings show that integrating row covers and mulching can stabilize the root-zone microclimate without active heating. The XGBoost model provides a robust tool for forecasting soil temperature and supports sustainable greenhouse production in cold regions.
Abstract: Arid and semi-arid ecosystems are prone to extensive fires due to specific climatic conditions, sparse vegetation cover, and a high density of fine fuels. Understanding the flammability characteristics of land covers is essential for fire management and for designing land restoration programs in arid and semi-arid ecosystems. This study provided a new approach to evaluating the flammability of shrublands and woodlands using flammability indices (FIs), including time to ignition (TI), duration of combustion (DC), and flame height (FH) of plant species and their relative frequencies in the Dalfard Basin of southeastern Iran. The results showed a significant difference in FIs between land covers. Shrublands had higher flammability potential than woodlands. Plant moisture content had a negative relationship with TI (P < 0.010) and no significant relationship with DC and FH (P > 0.050). Artemisia spp., Astragalus gossypinus Fischer, Amygdalus scoparia Spach, and Cymbopogon jwarancusa (Jones) Schult. had the highest FI. Tree species such as Rhazya stricta Decne. and Pistacia atlantica Desf. showed greater resistance to fire. Using principal component analysis, the relationship between species and FIs was examined, and the TI of wet fuel was the most important FI in relation to species. A structural equation model showed that life form (P < 0.001) was the most important flammability driver. Precipitation (P < 0.010) and legume species (P < 0.010) were significantly related to flammability in arid land. This study emphasizes the importance of managing high-risk species and using resistant species in vegetation restoration, and shows that combining species FIs with their abundance is an effective tool for assessing fire risk and fuel management at the plant community scale.
Abstract: Nonlinear static procedures are widely adopted in structural engineering practice for seismic performance assessment due to their simplicity and computational efficiency. However, their reliability depends heavily on how the nonlinear behaviour of structural components is represented. The recent earthquakes in Albania (2019) and Türkiye (2023) have underscored the need for accurate assessment techniques, particularly for older reinforced concrete buildings with poor detailing. This study quantifies the discrepancies between default and user-defined component modelling in pushover analysis of pre-modern reinforced concrete structures, analysing two representative low- and mid-rise reinforced concrete frame buildings. The lumped plasticity approach incorporates moment-rotation relationships derived from actual member properties and reinforcement configurations, while the distributed plasticity approach uses software-generated default properties based on modern codes. Results show that the distributed plasticity models systematically overestimate both the strength and the deformation capacity by up to 35% compared to lumped plasticity models, especially in buildings with poor detailing and low concrete strength. These findings demonstrate that default software procedures, widely used in practice but not validated for pre-modern structures, produce dangerously unconservative seismic performance estimates. The study provides quantitative evidence of the critical need for tailored modelling strategies that reflect the actual conditions of the existing building stock.
Funding: Funded by the Huaiyin Institute of Technology – Institute of Smart Energy.
Abstract: In the quest to enhance energy efficiency and reduce environmental impact in the transportation sector, the recovery of waste heat from diesel engines has become a critical area of focus. This study provided an exhaustive thermodynamic analysis optimizing Organic Rankine Cycle (ORC) systems for waste heat recovery from diesel engines. The study assessed the performance of five candidate working fluids, R11, R123, R113, R245fa, and R141b, under a range of operating conditions, specifically varying overheat temperatures and evaporation pressures. The results indicated that the choice of working fluid substantially influences the system's exergetic efficiency, net output power, and thermal efficiency. R245fa showed an outstanding net output power of 30.39 kW at high overheat conditions, outperforming R11, which is significant for high-temperature waste heat recovery. At lower temperatures, R11 and R113 demonstrated higher exergetic efficiencies, with R11 reaching a peak exergetic efficiency of 7.4% at an evaporation pressure of 10 bar and an overheat of 10 °C. The study also revealed that controlling the overheat and optimizing the evaporation pressure are crucial for enhancing the net output power of the ORC system. Specifically, at an evaporation pressure of 30 bar and an overheat of 0 °C, R113 exhibited the lowest exergetic destruction of 544.5 kJ/kg, making it a suitable choice for minimizing irreversible losses. These findings are instrumental for understanding the performance of ORC systems in waste heat recovery applications and offer valuable insights for the design and operation of more efficient and environmentally friendly diesel engine systems.
Abstract: Breast cancer screening programs rely heavily on mammography for early detection; however, diagnostic performance is strongly affected by inter-reader variability, breast density, and the limitations of conventional computer-aided detection systems. Recent advances in deep learning have enabled more robust and scalable solutions for large-scale screening, yet a systematic comparison of modern object detection architectures on nationally representative datasets remains limited. This study presents a comprehensive quantitative comparison of prominent deep learning–based object detection architectures for Artificial Intelligence-assisted mammography analysis using the MammosighTR dataset, developed within the Turkish National Breast Cancer Screening Program. The dataset comprises 12,740 patient cases collected between 2016 and 2022, annotated with BI-RADS categories, breast density levels, and lesion localization labels. A total of 31 models were evaluated, including One-Stage, Two-Stage, and Transformer-based architectures, under a unified experimental framework at both patient and breast levels. The results demonstrate that Two-Stage architectures consistently outperform One-Stage models, achieving approximately 2%–4% higher Macro F1-Scores and more balanced precision–recall trade-offs, with Double-Head R-CNN and Dynamic R-CNN yielding the highest overall performance (Macro F1 ≈ 0.84–0.86). This advantage is primarily attributed to the region proposal mechanism and improved class balance inherent to Two-Stage designs. One-Stage detectors exhibited higher sensitivity and faster inference, reaching Recall values above 0.88, but experienced minor reductions in Precision and overall accuracy (≈1%–2%) compared with Two-Stage models. Among Transformer-based architectures, the Deformable DEtection TRansformer demonstrated strong robustness and consistency across datasets, achieving Macro F1-Scores comparable to CNN-based detectors (≈0.83–0.85) while exhibiting minimal performance degradation under distributional shifts. Breast density–based analysis revealed increased misclassification rates in medium-density categories (types B and C), whereas Transformer-based architectures maintained more stable performance in high-density type D tissue. These findings quantitatively confirm that both architectural design and tissue characteristics play a decisive role in diagnostic accuracy. Overall, the study provides a reproducible benchmark and highlights the potential of hybrid approaches that combine the accuracy of Two-Stage detectors with the contextual modeling capability of Transformer architectures for clinically reliable breast cancer screening systems.
Abstract: Accurate short-term electricity price forecasts are essential for market participants to optimize bidding strategies, hedge risk and plan generation schedules. By leveraging advanced data analytics and machine learning methods, accurate and reliable price forecasts can be achieved. This study forecasts day-ahead prices in Türkiye's electricity market using eXtreme Gradient Boosting (XGBoost). We benchmark XGBoost against four alternatives, Support Vector Machines (SVM), Long Short-Term Memory (LSTM), Random Forest (RF), and Gradient Boosting (GBM), using 8760 hourly observations from 2023 provided by Energy Exchange Istanbul (EXIST). All models were trained on an identical chronological 80/20 train–test split, with hyperparameters tuned via 5-fold cross-validation on the training set. XGBoost achieved the best performance (Mean Absolute Error (MAE) = 144.8 TRY/MWh, Root Mean Square Error (RMSE) = 201.8 TRY/MWh, coefficient of determination (R²) = 0.923) while training in 94 s. To enhance interpretability and identify key drivers, we employed Shapley Additive Explanations (SHAP), which highlighted a strong association between higher prices and increased natural-gas-based generation. The results provide a clear performance benchmark and practical guidance for selecting forecasting approaches in day-ahead electricity markets.
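For reference, the error metrics reported above (MAE, RMSE, R²) follow the standard definitions, sketched here on toy hourly prices; the values are illustrative, not EXIST data:

```python
import math

def mae(y, yhat):
    # Mean absolute error
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    # Root mean squared error
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def r2(y, yhat):
    # Coefficient of determination: 1 - SS_res / SS_tot
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

# Toy hourly prices (TRY/MWh): actual vs. forecast
actual = [1800.0, 1950.0, 2100.0, 2000.0]
forecast = [1750.0, 2000.0, 2050.0, 2040.0]
print(mae(actual, forecast))           # 47.5
print(round(rmse(actual, forecast), 1))
print(round(r2(actual, forecast), 3))
```

Because RMSE squares the residuals before averaging, it penalizes the occasional large price-spike miss more heavily than MAE, which is why both are typically reported together in day-ahead forecasting benchmarks.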
Abstract: Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where missing data often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms, namely Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the obtained results show that our proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component in the loss function. Additionally, we assessed the downstream utility of imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations, making it a promising solution for robust data recovery in clinical applications.
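A minimal sketch of the three-term composite loss structure, assuming simple quadratic forms for the noise-aware and variance terms and illustrative weights; the paper's exact formulation may differ:

```python
import numpy as np

def composite_loss(x_true, x_recon, mask, lam_noise=0.1, lam_var=0.01):
    """Composite imputation loss: masked MSE on missing entries, a
    noise-robustness term on observed entries, and a variance penalty.
    Weights lam_noise / lam_var are illustrative, not the paper's values."""
    # (i) guided masked MSE: only positions flagged missing (mask == 1)
    miss = mask == 1
    masked_mse = np.mean((x_true[miss] - x_recon[miss]) ** 2) if miss.any() else 0.0
    # (ii) noise-aware term: reconstruction error on observed entries
    obs = ~miss
    noise_term = np.mean((x_true[obs] - x_recon[obs]) ** 2) if obs.any() else 0.0
    # (iii) variance penalty: discourage collapsed (constant) reconstructions
    var_term = (np.var(x_true) - np.var(x_recon)) ** 2
    return masked_mse + lam_noise * noise_term + lam_var * var_term

x_true = np.array([1.0, 2.0, 3.0, 4.0])
x_recon = np.array([1.1, 2.0, 2.8, 4.2])
mask = np.array([0, 1, 1, 0])  # entries 1 and 2 were imputed
loss = composite_loss(x_true, x_recon, mask)
print(round(float(loss), 4))
```

Keeping the missing-entry term separate from the observed-entry term lets training focus on imputation quality while the other two terms act as regularizers, which mirrors the ablation logic described in the abstract.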
Funding: This study is supported by the World Class Professor (WCP) Program of the Directorate of Resources, Directorate General of Higher Education, Ministry of Education and Culture in 2023; the JAPAN-ASEAN Science and Technology Innovation Platform (JASTIP-WP4); the University of Bengkulu's International Collaboration Research Fund (2183/UN30.15/LT/2019), for partial funding; the C2F Fund for Postdoctoral Fellowship from Chulalongkorn University; the Thailand Science Research and Innovation Fund, Chulalongkorn University (DISF68210001); and the National Research Council of Thailand (N42A670572).
Abstract: The research findings on the ground motion and liquefaction potential analyses during the 2018 Great Indonesia Earthquake (Mw 7.5) are significant and crucial. The earthquake triggered soil-structure damage due to liquefaction. This study, which thoroughly investigated four sites at Palu, was conducted by performing a comprehensive ground motion parameter analysis. The ground motion characteristics were presented and justified, particularly for the most impacted direction. Ground motion predictions were analysed to define the spectral accelerations, and spectral matching was conducted to produce ground motions for each site. Non-linear seismic ground response analysis based on a pressure-dependent hyperbolic model was performed to investigate cyclic soil behaviour. The results revealed that ground motion is crucial in significant soil damage, and the earthquake energy could trigger deep liquefaction. As the most significant ground motion, the vertical ground motion is essential in determining deep liquefaction. A discussion of the impact of liquefaction based on the results of the numerical analysis is presented. Significant ground motion with a longer duration could have a substantial impact on deep liquefaction in the study area. These findings depict how the 2018 Indonesia Earthquake (Mw 7.5) triggered a mega-liquefaction in Palu City. The results could enhance the understanding of the importance of seismic hazard assessment. It is recommended that site investigation and soil improvement be planned to counteract liquefaction damage before construction. This study also suggests conducting seismic hazard assessments for city development to minimise the potential disaster impact in the study area.
Funding: Supported by the National Natural Science Foundation of China under grant number 62066016; the Natural Science Foundation of Hunan Province of China under grant number 2024JJ7395; the International and Regional Science and Technology Cooperation and Exchange Program of the Hunan Association for Science and Technology under grant number 025SKX-KJ-04; the Hunan Provincial Postgraduate Research Innovation Project under grant number CX20251611; and the Liye Qin Bamboo Slips Research Special Project of Jishou University, 25LYY03.
Abstract: The Pigeon-Inspired Optimization (PIO) algorithm constitutes a metaheuristic method derived from the homing behaviour of pigeons. Initially formulated for three-dimensional path planning in unmanned aerial vehicles (UAVs), the algorithm has attracted considerable academic and industrial interest owing to its effective balance between exploration and exploitation, coupled with advantages in real-time performance and robustness. Nevertheless, as applications have diversified, limitations in convergence precision and a tendency toward premature convergence have become increasingly evident, highlighting a need for improvement. This review systematically outlines the developmental trajectory of the PIO algorithm, with a particular focus on its core applications in UAV navigation, multi-objective formulations, and a spectrum of variant models that have emerged in recent years. It offers a structured analysis of the foundational principles underlying the PIO and conducts a comparative assessment of various performance-enhanced versions, including hybrid models that integrate mechanisms from other optimization paradigms. Additionally, the strengths and weaknesses of distinct PIO variants are critically examined from multiple perspectives, including intrinsic algorithmic characteristics, suitability for specific application scenarios, objective function design, and the rigor of the statistical evaluation methodologies employed in empirical studies. Finally, this paper identifies principal challenges within current PIO research and proposes several prospective research directions. Future work should focus on mitigating premature convergence by refining the two-phase search structure and adjusting the exponential decrease of individual numbers during the landmark operator. Enhancing parameter adaptation strategies, potentially using reinforcement learning for dynamic tuning, and advancing theoretical analyses of convergence and complexity are also critical. Further applications should be explored in constrained path planning, Neural Architecture Search (NAS), and other real-world multi-objective problems. For Multi-objective PIO (MPIO), key improvements include controlling the growth of the external archive and designing more effective selection mechanisms to maintain convergence efficiency. These efforts are expected to strengthen both the theoretical foundation and practical versatility of PIO and its variants.
Abstract: Accurate precipitation estimation in semiarid, topographically complicated areas is critical for water resource management and climate risk monitoring. This work provides a detailed, multi-scale evaluation of four major satellite precipitation products (CHIRPS, PERSIANN-CDR, IMERG-F v07, and GSMaP) over Isfahan province, Iran, over a 9-year period (2015–2023). The performance of these products was benchmarked against a dense network of 98 rain gauges using a suite of continuous and categorical statistical metrics, following a two-stage quality control protocol to remove outliers and false alarms. The results revealed that the performance of all products improves with temporal aggregation. At the daily level, GSMaP performed marginally better, although all products were linked with considerable uncertainty. At the monthly and annual levels, the GPM-era products (IMERG and GSMaP) clearly beat the other two, establishing themselves as dependable tools for long-term hydro-climatological studies. Error analysis revealed that topography is the dominant regulating factor, creating a systematic elevation-dependent bias, largely characterized by underestimation from most products in high-elevation areas, though the PERSIANN-CDR product exhibited a contrasting overestimation tendency. Finally, the findings highlight the importance of implementing local, elevation-dependent calibration before deploying these products in hydrological modeling.
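The categorical verification scores commonly used in such gauge-based evaluations (probability of detection, false alarm ratio, critical success index) are computed from a rain/no-rain contingency table; the counts below are hypothetical:

```python
def categorical_scores(hits, false_alarms, misses):
    """Standard rain-detection contingency scores used to verify
    satellite precipitation products against gauge observations."""
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + false_alarms + misses)  # critical success index
    return pod, far, csi

# Hypothetical daily counts for one gauge-pixel pair over the study period
pod, far, csi = categorical_scores(hits=120, false_alarms=40, misses=30)
print(round(pod, 3), round(far, 3), round(csi, 3))  # 0.8 0.25 0.632
```

CSI combines both error types into one score, which is why it is often the headline categorical metric when products are compared across aggregation scales.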
Funding: Funded by the National Natural Science Foundation of China, grant number U24A20135; the Inner Mongolia Natural Science Foundation major project, grant number 2023ZD12; the Inner Mongolia Autonomous Region key research and development and achievement transformation plan project, grant number 2023YFHH0090; the Natural Science Foundation of Inner Mongolia, grant number 2022MS05006; the Inner Mongolia Autonomous Region Talent Development Fund; university basic research business expenses, grant number 2023RCTD012; university basic research business expenses, grant number 2023QNJS075; the Postgraduate Research Innovation Program of Inner Mongolia Autonomous Region, grant number KC2024053B; university basic research business expenses, grant number 2024YXXS012; and the National Key Laboratory of Special Vehicle Design and Manufacturing Integration Technology, grant number GZ2023KF012.
Abstract: In dry-coupled ultrasonic thickness measurement, thick rubber layers introduce high-amplitude parasitic echoes that obscure defect signals and degrade thickness accuracy. Existing methods struggle to resolve overlapping echoes under variable coupling conditions and non-stationary noise. This study proposes a novel dual-criterion framework integrating energy contribution and statistical impulsivity metrics to isolate specimen reflections from coupling-layer interference. By decomposing A-scan signals into Intrinsic Mode Functions (IMFs), the framework employs energy contribution thresholds (>85%) and kurtosis indices (>3) to autonomously select IMFs containing valid specimen echoes. Hybrid time-frequency thresholding further suppresses interference through amplitude filtering and spectral focusing. Experimental results demonstrate the framework's robustness, achieving 92.3% thickness accuracy for 5 mm steel specimens with 5 mm rubber coupling, outperforming conventional methods by up to 18.7%. The dual-criterion approach reduces operator dependency by 37% and maintains ΔT < 0.03 mm under surface roughness up to 6.3 μm, offering a practical solution for industrial nondestructive testing with thick dry-coupled interfaces.
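The dual-criterion selection described above can be sketched as follows. This is not the authors' implementation: the EMD step that produces the IMFs is omitted, and the >85% energy criterion is interpreted here as an IMF's share of total signal energy, which is an assumption.

```python
import numpy as np

def kurtosis(x):
    """Pearson kurtosis (a Gaussian signal scores about 3; an impulsive
    echo-bearing component scores much higher)."""
    c = np.asarray(x, float) - np.mean(x)
    return np.mean(c**4) / np.mean(c**2)**2

def select_imfs(imfs, energy_thr=0.85, kurt_thr=3.0):
    """Keep only IMFs that pass BOTH criteria: energy share above
    `energy_thr` and kurtosis above `kurt_thr` (thresholds from the
    abstract; the energy-share interpretation is assumed)."""
    energies = np.array([np.sum(np.square(m)) for m in imfs])
    shares = energies / energies.sum()
    return [i for i in range(len(imfs))
            if shares[i] > energy_thr and kurtosis(imfs[i]) > kurt_thr]
```

A dominant, spiky component (high energy share, kurtosis well above 3) is retained, while a low-energy smooth oscillation (sine-like, kurtosis near 1.5) is rejected.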
Abstract: This paper studies wireless vehicular communication (VehCom) in intelligent transportation systems using orthogonal frequency division multiplexing with index modulation (OFDM-IM). In the concept of IM, data is transmitted not only through the modulated symbols but also via the indices of the active subcarriers. In contrast to the original OFDM, OFDM-IM activates only a subset of the subcarriers, increasing energy efficiency. However, pilot-assisted channel estimation (CE) is a significant challenge in OFDM-IM, where the desired pilot subcarrier interval is related to the OFDM-IM subblock length. This paper proposes a Walsh-scattered pilot-assisted CE for OFDM-IM VehCom. An optimum Walsh-scattered pilot assignment is proposed to improve the transmission efficiency. Furthermore, a space-time block code with a high transmit diversity gain is employed for OFDM-IM VehCom to enhance the signal quality. The results show that the proposed method achieves higher CE accuracy and a better bit-error rate, with significant spectral and energy efficiencies, than conventional methods.
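The core IM idea, carrying extra bits in *which* subcarriers are active, can be illustrated for one subblock. This is a generic textbook OFDM-IM mapper, not the paper's scheme: the subblock size n = 4, k = 2 active subcarriers, BPSK on the active ones, and the combinatorial index lookup are all illustrative assumptions.

```python
import math
from itertools import combinations

def ofdm_im_subblock(bits, n=4, k=2):
    """Map a bit group to one OFDM-IM subblock of length n.
    The first floor(log2(C(n, k))) bits select which k subcarriers are
    active; the next k bits BPSK-modulate them. Inactive subcarriers
    stay zero, which is where the energy saving comes from."""
    patterns = list(combinations(range(n), k))       # legal active-index sets
    p1 = int(math.floor(math.log2(len(patterns))))   # index bits per subblock
    idx = int("".join(map(str, bits[:p1])), 2)       # index bits -> pattern
    symbols = [0] * n
    for j, b in zip(patterns[idx], bits[p1:p1 + k]):
        symbols[j] = 1 if b == 0 else -1             # BPSK on active tones
    return symbols
```

With n = 4 and k = 2 there are C(4, 2) = 6 patterns, so 2 index bits plus 2 BPSK bits per subblock; e.g. the bit group `[0, 1, 1, 0]` activates subcarriers (0, 2) and yields `[-1, 0, 1, 0]`.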
Funding: Funded by the Directorate of Research and Community Service, Directorate General of Research and Development, Ministry of Higher Education, Science and Technology, in accordance with the Implementation Contract for the Operational Assistance Program for State Universities, Research Program Number: 109/C3/DT.05.00/PL/2025.
Abstract: Sudden wildfires cause significant global ecological damage. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrains. To address these challenges, this study proposes a novel forest fire detection model utilizing audio classification and machine learning. We developed an audio-based pipeline using real-world environmental sound recordings. Sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally impacted by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for generating complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4% ± 1.6 accuracy, 91.2% ± 1.8 F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared to traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
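The front end of such a pipeline, turning a waveform into a time-frequency image a CNN can classify, can be sketched with plain NumPy. This is a simplified stand-in for the study's feature extraction: the Mel filterbank stage is omitted for brevity, and the frame length, hop size, and function name are assumptions.

```python
import numpy as np

def log_spectrogram(signal, frame_len=256, hop=128):
    """Frame the waveform, apply a Hann window, FFT each frame, and take
    the log-magnitude: the first stage of a Mel-spectrogram pipeline
    (applying a Mel filterbank to each row would complete it).
    Returns an array of shape (num_frames, frame_len // 2 + 1)."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    mags = np.abs(np.fft.rfft(frames, axis=1))   # one spectrum per frame
    return np.log1p(mags)                         # compress dynamic range
```

For a 1 kHz tone sampled at 16 kHz, the energy concentrates in FFT bin 1000 / (16000 / 256) = 16, so the spectrogram shows a single horizontal ridge, which is exactly the kind of structure (versus the broadband texture of crackling) a CNN learns to separate.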
Funding: Financially supported by The Scientific and Technological Research Council of Türkiye (TÜBİTAK) under project number 121C420.
Abstract: Rockfall hazards pose significant risks to both cultural heritage and populated areas, necessitating comprehensive assessment methodologies. Despite extensive research on rockfalls, only a small number of studies have directly compared empirical methods with modelling approaches. This study investigated rockfalls in five settlements within the Cappadocia region of Türkiye, employing both empirical methods and advanced three-dimensional (3D) probabilistic modeling. The energy line angle approach was applied to identify rockfall propagation zones, while high-resolution digital surface models derived from unmanned aerial vehicle (UAV) imagery facilitated detailed 3D rockfall simulations. Cappadocia's unique geological setting, comprising alternating layers of ignimbrites and weaker fluviolacustrine deposits, renders it highly susceptible to rockfalls intensified by wetting-drying and freeze-thaw cycles. Results indicate that rockfall propagation characteristics vary markedly between settlements: Göre and Tatlarin exhibit shorter runout distances due to basalt-dominated slopes, whereas Akköy, Soğanlı and Şahinefendi display longer trajectories associated with welded ignimbrites. Empirical cone propagation analyses correspond broadly with field observations, but variations in energy line angles (23°-33°) highlight the necessity for site-specific calibration. Comparative evaluations demonstrate that 3D probabilistic modeling better captures local-scale block dynamics and identifies high-risk areas affected by topographic and structural features such as rockfall ditches. These findings emphasize the importance of integrating empirical and 3D approaches to improve hazard zoning, optimize mitigation structures, and guide the protection of Cappadocia's unique cultural heritage landscape.
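The energy line angle method reduces to a simple geometric relation: the line from the source to the farthest deposited block dips at angle α, so for a drop height H the horizontal reach is L = H / tan(α). A minimal sketch (function and parameter names are illustrative, not from the study):

```python
import math

def runout_distance(drop_height_m, energy_line_angle_deg):
    """Horizontal runout L of a falling block from the energy-line angle:
    tan(alpha) = H / L  =>  L = H / tan(alpha).
    A smaller alpha (more mobile blocks) means a longer reach, which is
    why the 23-33 degree spread reported above matters for zoning."""
    return drop_height_m / math.tan(math.radians(energy_line_angle_deg))
```

For a 50 m cliff, α = 45° gives a 50 m reach, while lowering α from 33° to 23° stretches the hazard zone from about 77 m to about 118 m, illustrating why site-specific calibration of the angle changes the mapped zones substantially.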