Weather forecasts from numerical weather prediction models play a central role in solar energy forecasting, where a cascade of physics-based models is used in a model chain approach to convert forecasts of solar irradiance to solar power production. Ensemble simulations from such weather models aim to quantify uncertainty in the future development of the weather, and can be used to propagate this uncertainty through the model chain to generate probabilistic solar energy predictions. However, ensemble prediction systems are known to exhibit systematic errors, and thus require post-processing to obtain accurate and reliable probabilistic forecasts. The overarching aim of our study is to systematically evaluate different strategies to apply post-processing in model chain approaches, with a specific focus on solar energy: not applying any post-processing at all; post-processing only the irradiance predictions before the conversion; post-processing only the solar power predictions obtained from the model chain; or applying post-processing in both steps. In a case study based on a benchmark dataset for the Jacumba solar plant in the U.S., we develop statistical and machine learning methods for post-processing ensemble predictions of global horizontal irradiance (GHI) and solar power generation. Further, we propose a neural-network-based model for direct solar power forecasting that bypasses the model chain. Our results indicate that post-processing substantially improves the solar power generation forecasts, in particular when post-processing is applied to the power predictions. The machine learning methods for post-processing slightly outperform the statistical methods, and the direct forecasting approach performs comparably to the post-processing strategies.
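To make the statistical post-processing step concrete, here is a minimal Python sketch of one standard method in this family, nonhomogeneous Gaussian regression (EMOS) fitted by maximum likelihood; the ensemble and observation arrays are synthetic stand-ins, not the Jacumba benchmark data, and the study's exact methods may differ.

```python
# Minimal EMOS-style sketch: a Gaussian predictive distribution whose mean and
# variance are affine corrections of the raw ensemble mean and variance.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_days, n_members = 500, 20
ens = rng.gamma(2.0, 150.0, (n_days, n_members))           # raw GHI ensemble (W/m^2)
obs = 0.9 * ens.mean(axis=1) + rng.normal(0, 40, n_days)   # synthetic observations

ens_mean, ens_var = ens.mean(axis=1), ens.var(axis=1)

def neg_log_lik(theta):
    a, b, c, d = theta
    mu = a + b * ens_mean                       # corrected predictive mean
    sigma2 = np.exp(c) + np.exp(d) * ens_var    # positive predictive variance
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + (obs - mu) ** 2 / sigma2)

fit = minimize(neg_log_lik, x0=[0.0, 1.0, 0.0, 0.0], method="Nelder-Mead")
a, b, c, d = fit.x
print("predictive mean = %.2f + %.2f * ens_mean" % (a, b))
```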
In the present computational fluid dynamics (CFD) community, post-processing is regarded as a procedure to view parameter distributions, detect characteristic structures, and reveal the physical mechanisms of fluid flow based on computational or experimental results. Field plots by contours, iso-surfaces, streamlines, vectors, and others are traditional post-processing techniques. The shock wave, however, as one important and critical flow structure in many aerodynamic problems, can hardly be detected or distinguished in a direct way using these traditional methods, due to possible confusion with other similar discontinuous flow structures such as slip lines and contact discontinuities. Therefore, methods for automatic detection of shock waves in post-processing are of great importance for both academic research and engineering applications. In this paper, the current status of methodologies developed for shock wave detection and their implementations in post-processing platforms is reviewed, together with discussions of the advantages and limitations of the existing methods and proposals for further studies of shock wave detection. We also develop advanced post-processing software with improved shock detection.
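As an illustration of what an automatic shock sensor computes, the sketch below implements one classical idea: a dilatation-based (Ducros-type) sensor that separates shocks (strong negative divergence) from shear structures such as slip lines (dominated by vorticity). The velocity field and the threshold are synthetic assumptions, not output of the reviewed software.

```python
# Ducros-type shock sensor on a 2D grid: large where compression dominates
# rotation, small in shear layers, so shocks can be flagged automatically.
import numpy as np

nx = ny = 200
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
u = np.where(x < 0.5, 1.0, 0.4) + 0.0 * y    # velocity jump mimicking a normal shock
v = 0.1 * np.sin(4 * np.pi * y)              # weak shear component

dudx = np.gradient(u, axis=1); dudy = np.gradient(u, axis=0)
dvdx = np.gradient(v, axis=1); dvdy = np.gradient(v, axis=0)
div = dudx + dvdy                  # dilatation: strongly negative across a shock
curl = dvdx - dudy                 # vorticity: large in shear layers, not shocks

eps = 1e-12
sensor = -div / (np.abs(div) + np.abs(curl) + eps)
shock_cells = sensor > 0.8         # threshold is problem-dependent
print("flagged cells:", int(shock_cells.sum()))
```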
Quantum random number generators adopting single-photon detection have been restricted by the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness, and scalability. The generated numbers are proved to be unbiased, post-processing free, and ready to use, and their randomness is verified by using the National Institute of Standards and Technology statistical test suite. The random bit generation efficiency is as high as 32.8%, and the potential generation rate adopting the 32×32 APD array is up to tens of Gbit/s.
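A minimal sketch of the comparison idea described above, assuming simulated click probabilities in place of real APD data: pairs of consecutive pulses with differing responses yield one unbiased bit each (a von Neumann-style extraction).

```python
# Compare each detector's response to two consecutive optical pulses and keep
# a bit only when the responses differ, which debiases the output.
import numpy as np

rng = np.random.default_rng(1)
n_detectors, n_pulse_pairs = 1024, 10000    # e.g. a 32x32 APD array
p_click = 0.3                               # per-pulse detection probability

first = rng.random((n_detectors, n_pulse_pairs)) < p_click
second = rng.random((n_detectors, n_pulse_pairs)) < p_click

differ = first != second                    # keep only (click, miss) pairs
bits = first[differ].astype(np.uint8)       # 1 for (click, miss), 0 for (miss, click)

efficiency = bits.size / (n_detectors * n_pulse_pairs)
print(f"bits: {bits.size}, efficiency: {efficiency:.3f}, mean: {bits.mean():.4f}")
```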
The travel time data collection method is used to assist congestion management. The use of traditional sensors (e.g., inductive loops, AVI sensors) or more recent Bluetooth sensors installed on major roads for collecting data is not sufficient because of their limited coverage and the high costs of installation and maintenance. Application of the Global Positioning System (GPS) to travel time and delay data collection has proven efficient in terms of accuracy, level of detail, and required data-collection manpower. While data collection automation is improved by the GPS technique, human errors can easily find their way into the post-processing phase, and therefore data post-processing remains a challenge, especially in large projects with high volumes of data. This paper introduces a stand-alone post-processing tool called GPS Calculator, which provides an easy-to-use environment for carrying out data post-processing. It is a Visual Basic application that processes the data files obtained in the field and integrates them into Geographic Information Systems (GIS) for analysis and representation. The results show that this tool obtains results similar to the currently used data post-processing method, reduces the post-processing effort, and also eliminates the need for a second person during data collection.
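The core computation such a tool performs can be sketched as follows; the GPS fixes are hypothetical, and a real application would read field data files.

```python
# Turn a sequence of timestamped GPS fixes into segment distances and speeds,
# the basic quantities behind travel-time and delay reporting.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two fixes in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# (timestamp_s, lat, lon) fixes along a test run (illustrative values)
fixes = [(0, 45.420, -75.690), (30, 45.423, -75.688), (75, 45.427, -75.684)]

for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
    d = haversine_m(la0, lo0, la1, lo1)
    v = d / (t1 - t0) * 3.6                  # km/h
    print(f"segment: {d:7.1f} m in {t1 - t0:3d} s -> {v:5.1f} km/h")
```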
As castings become more complicated and the demands for precision of numerical simulation become higher, the numerical data of casting simulations become more massive. On a general personal computer, these massive numerical data may exceed the capacity of available memory, resulting in failure of rendering. Based on the out-of-core technique, this paper proposes a method to effectively utilize external storage and reduce memory usage dramatically, so as to solve the problem of insufficient memory for massive data rendering on general personal computers. Based on this method, a new post-processor is developed. It is capable of illustrating the filling and solidification processes of a casting, as well as thermal stress, and provides fast interaction with simulation results. Theoretical analysis as well as several practical examples prove that the memory usage and loading time of the post-processor are independent of the size of the relevant files and depend only on the proportion of cells on the surface. Meanwhile, the speed of rendering and of fetching values at the mouse position is appreciable, and the demands of real-time interaction are satisfied.
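A minimal sketch of the out-of-core idea in Python, using a memory-mapped file so that memory use scales with the cells actually touched (here, surface cells) rather than the whole data set; the file name and record layout are assumptions.

```python
# Keep bulky per-cell results on disk and page in only the surface cells
# needed for rendering.
import numpy as np

n_cells = 5_000_000                          # per-cell results, too big to keep resident
path = "solidification_temps.npy"            # hypothetical solver output file

# Write once (normally done by the solver), then close.
mm = np.lib.format.open_memmap(path, mode="w+", dtype=np.float32, shape=(n_cells,))
mm[::997] = 1500.0                           # tiny demo fill
mm.flush(); del mm

temps = np.load(path, mmap_mode="r")         # maps the file; reads nothing yet
surface_ids = np.arange(0, n_cells, 997)     # stand-in for surface-cell indices
surface = np.asarray(temps[surface_ids])     # only the touched pages hit memory
print(surface[:3], surface.size)
```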
In order to carry out numerical simulation using geologic structural data obtained from Landmark (a seismic interpretation system), underground geological structures are abstracted into mechanical models which reflect actual situations and facilitate computation and analysis. Given the importance of model building, further processing methods for traditional seismic interpretation results from Landmark should be studied so that the processed results can be used directly in numerical simulation computations. Through this data conversion procedure, Landmark and FLAC (an internationally used stress-analysis code) are seamlessly connected. Thus, the format conversion between the two systems and the pre- and post-processing in simulation computation are realized. A practical application indicates that this method has many advantages, such as simple operation, high accuracy of element subdivision, and high speed, which may definitely satisfy the actual needs of floor grid cutting.
To improve the ability to detect underwater targets in strong wideband interference environments, an efficient method of line spectrum extraction is proposed, which fully utilizes the feature of the target spectrum that a highly intense and stable line spectrum is superimposed on a wide continuous spectrum. This method modifies the traditional beamforming algorithm by calculating and fusing the beamforming results over multiple frequency bands and azimuth intervals, providing an excellent way to extract the line spectrum when the interference and the target are not in the same azimuth interval. The statistical efficiency of the estimated azimuth variance and the corresponding power of the line spectrum band depend on the line spectra ratio (LSR) of the line spectrum. How the output signal-to-noise ratio (SNR) varies with the LSR, the input SNR, the integration time, and the filtering bandwidth of the different algorithms yields a selection principle for the critical LSR. On this basis, the detection gains of wideband energy integration and of the narrowband line spectrum algorithm are theoretically analyzed. The simulated detection gain demonstrates a good match with the theoretical model. The application conditions of all methods are verified by receiver operating characteristic (ROC) curves and experimental data from Qiandao Lake. In fact, combining the two methods for target detection reduces the missed detection rate. The proposed two-dimensional post-processing method, with a Kalman filter in the time dimension and a background equalization algorithm in the azimuth dimension, makes use of the strong correlation between adjacent frames and can further remove background fluctuation and improve the display quality.
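For orientation, the conventional delay-and-sum beamformer that the proposed method modifies can be sketched as follows; the array geometry, source, and noise are simulated assumptions.

```python
# Narrowband delay-and-sum beamforming with a line array: steer across
# azimuths by phase-shifting each sensor, then pick the azimuth of peak power.
import numpy as np

c, f0, n_sensors, spacing = 1500.0, 400.0, 16, 1.5    # m/s, Hz, count, m
fs, T = 4000.0, 1.0
t = np.arange(0, T, 1 / fs)
true_az = np.deg2rad(30.0)

pos = np.arange(n_sensors) * spacing
delays = pos * np.sin(true_az) / c                    # plane-wave arrival delays
x = np.array([np.sin(2 * np.pi * f0 * (t - d)) for d in delays])
x += 0.5 * np.random.default_rng(2).normal(size=x.shape)

xa = np.fft.rfft(x, axis=1)
k = int(round(f0 * T))                                # FFT bin of f0
azimuths = np.deg2rad(np.linspace(-90, 90, 181))
power = [np.abs(np.exp(2j * np.pi * f0 * pos * np.sin(az) / c) @ xa[:, k]) ** 2
         for az in azimuths]

print("estimated azimuth: %.1f deg" % np.degrees(azimuths[int(np.argmax(power))]))
```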
The present study aims to improve the efficiency of typical procedures used for post-processing flow field data by applying neural-network technology. Taking an aircraft design problem as the workhorse, a regression model for processing aircraft flow data, FCN-VGG19, is elaborated based on VGGNet (Visual Geometry Group Network) and FCN (Fully Convolutional Network) techniques. As shown by the results, the model displays a strong fitting ability, and there is almost no over-fitting in training. Moreover, the model has good accuracy and convergence. For different input data and different grids, the model essentially achieves convergence, showing good performance. It is shown that the proposed regression model based on FCN has great potential in typical problems of computational fluid dynamics (CFD) and related data processing.
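A minimal sketch, assuming PyTorch/torchvision, of the model family the abstract names: a VGG19 convolutional backbone with a fully convolutional regression head. Channel sizes and upsampling depth are illustrative choices, not the paper's exact architecture.

```python
# VGG19 feature extractor plus a fully convolutional head that regresses a
# per-pixel flow quantity from an input field image.
import torch
import torch.nn as nn
from torchvision.models import vgg19

class FCNVGG19Regressor(nn.Module):
    def __init__(self, out_channels: int = 1):
        super().__init__()
        self.backbone = vgg19(weights=None).features      # conv layers only, 1/32 scale
        self.head = nn.Sequential(
            nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
            nn.Conv2d(256, out_channels, 1),              # per-pixel regression
        )

    def forward(self, x):
        return self.head(self.backbone(x))

model = FCNVGG19Regressor()
fields = torch.randn(2, 3, 224, 224)         # e.g. geometry/condition channels
pred = model(fields)                         # predicted flow quantity per pixel
print(pred.shape)                            # torch.Size([2, 1, 224, 224])
```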
This paper proposes improvements to a low bit rate parametric audio coder with a sinusoid model as its kernel. Firstly, we propose a new method to effectively order and select the perceptually most important sinusoids: the sinusoid which contributes most to the reduction of the overall noise-to-mask ratio (NMR) is chosen. Combined with our improved parametric psychoacoustic model and advanced peak riddling techniques, the number of sinusoids required can be greatly reduced and the coding efficiency greatly enhanced. A lightweight version is also given to reduce the amount of computation with only a small sacrifice in performance. Secondly, we propose two enhancement techniques for sinusoid synthesis: bandwidth enhancement and line enhancement. With little overhead, the effective bandwidth can be extended by one more octave; the timbre tends to sound much brighter, thicker, and more beautiful.
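The greedy selection step can be sketched as follows; this simplified version picks the spectral peak with the largest residual energy rather than evaluating a full NMR reduction, since no psychoacoustic masking model is included here.

```python
# Greedy sinusoid selection: repeatedly take the strongest residual peak and
# remove it, a simplified stand-in for NMR-driven ordering.
import numpy as np

fs, n = 16000, 1024
t = np.arange(n) / fs
signal = (1.0 * np.sin(2 * np.pi * 440 * t)
          + 0.5 * np.sin(2 * np.pi * 660 * t)
          + 0.2 * np.sin(2 * np.pi * 880 * t))

residual = np.fft.rfft(signal * np.hanning(n))
freqs = np.fft.rfftfreq(n, 1 / fs)

selected = []
for _ in range(3):                          # keep the 3 most important sinusoids
    k = int(np.argmax(np.abs(residual)))    # peak with largest residual energy
    selected.append((freqs[k], np.abs(residual[k])))
    residual[max(k - 2, 0):k + 3] = 0       # remove the peak and its skirt
print([f"{f:.0f} Hz" for f, _ in selected])
```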
Laser additive manufacturing (LAM) of titanium (Ti) alloys has emerged as a transformative technology with vast potential across multiple industries. To recap the state of the art, Ti alloys processed by two essential LAM techniques (i.e., laser powder bed fusion and laser-directed energy deposition) are reviewed, covering processes, materials, and post-processing. The impacts of process parameters and strategies for optimizing them are elucidated. The various types of Ti alloys processed by LAM, including α-Ti, (α+β)-Ti, and β-Ti alloys, are overviewed in terms of microstructures and benchmark properties. Furthermore, post-processing methods for improving the performance of LAM-processed Ti alloys, including conventional and novel heat treatment, hot isostatic pressing, and surface processing (e.g., ultrasonic and laser shot peening), are systematically reviewed and discussed. The review summarizes the process windows, properties, and performance envelopes and benchmarks the research achievements in LAM of Ti alloys. Outlooks on further trends in LAM of Ti alloys are also highlighted at the end of the review. This comprehensive review could serve as a valuable resource for researchers and practitioners, promoting further advancements in LAM-built Ti alloys and their applications.
In the analysis of high-rise buildings, traditional displacement-based plane elements are often used to obtain the in-plane internal forces of shear walls by stress integration. Limited by the singularity problems produced by wall holes and the loss of precision induced by using the differential method to derive strains, displacement-based elements cannot always provide sufficient accuracy for design. In this paper, a hybrid post-processing procedure based on the Hellinger-Reissner variational principle is used to improve the stress precision of two quadrilateral plane elements. In order to find the best stress field, three different forms are assumed for the displacement-based plane elements with drilling DOFs. Numerical results show that, by using the proposed method, the accuracy of the stress solutions of these two displacement-based plane elements can be improved.
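For orientation, a standard textbook form of the two-field Hellinger-Reissner functional underlying such hybrid stress recovery is given below (notation assumed here: L the strain-displacement operator, S the compliance matrix, b body forces, t̄ prescribed boundary tractions; the paper's exact discrete statement may differ).

```latex
\Pi_{HR}(\boldsymbol{u},\boldsymbol{\sigma})
  = \int_{\Omega}\Big[\boldsymbol{\sigma}^{\mathsf T}(\mathbf{L}\boldsymbol{u})
    - \tfrac{1}{2}\,\boldsymbol{\sigma}^{\mathsf T}\mathbf{S}\,\boldsymbol{\sigma}\Big]\,d\Omega
  - \int_{\Omega}\boldsymbol{u}^{\mathsf T}\boldsymbol{b}\,d\Omega
  - \int_{\Gamma_t}\boldsymbol{u}^{\mathsf T}\bar{\boldsymbol{t}}\,d\Gamma .
```

Making this functional stationary with respect to the assumed stress field, with the element displacement solution held fixed, fits the stress modes to the computed strains in a weighted least-squares sense, which is the mechanism by which such post-processing improves the recovered stresses.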
Visual data mining is an important approach among data mining techniques. Most visual data mining methods are based on computer graphics techniques, but few exploit image processing techniques. This paper proposes an image processing method, named RNAM (resemble neighborhood averaging method), to facilitate visual data mining; it is used to post-process the data mining result image and help users discover significant features and useful patterns effectively. The experiments show that the method is intuitive, easy to understand, and effective. It provides a new approach for visual data mining.
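The basic operation RNAM builds on can be sketched in a few lines; the "resemble" weighting specifics of the paper are not reproduced here.

```python
# Neighborhood averaging on a result image: replace each pixel by the mean of
# its 3x3 neighborhood so embedded patterns stand out after smoothing.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(3)
result_image = rng.random((64, 64))              # stand-in for a mining result image
result_image[20:40, 20:40] += 1.0                # an embedded pattern

smoothed = uniform_filter(result_image, size=3)  # 3x3 neighborhood average
print(smoothed.shape, float(smoothed.max()))
```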
The low contrast of Magnetic Resonance (MR) images limits the visibility of subtle structures and adversely affects the outcome of both subjective and automated diagnosis. State-of-the-art contrast boosting techniques intolerably alter the inherent features of MR images. Drastic changes in brightness features induced by post-processing are not appreciated in medical imaging, as the grey level values have certain diagnostic meanings. To overcome these issues, this paper proposes an algorithm that enhances the contrast of MR images while preserving the underlying features. The method, termed Power-law and Logarithmic Modification-based Histogram Equalization (PLMHE), partitions the histogram of the image into two sub-histograms after a power-law transformation and a log compression. After a modification intended to improve the dispersion of the sub-histograms and subsequent normalization, cumulative histograms are computed, and enhanced grey level values are computed from the resultant cumulative histograms. The performance of the PLMHE algorithm is compared with traditional histogram equalization based algorithms, and the results show that PLMHE can boost image contrast without causing dynamic range compression, a significant change in mean brightness, or contrast overshoot.
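A simplified sketch of the bi-histogram equalization family PLMHE belongs to is given below; PLMHE's exact dispersion modification and normalization steps are not reproduced.

```python
# Power-law transform, log compression, split the histogram at the mean, then
# equalize each sub-histogram within its own grey-level range.
import numpy as np

def bi_histogram_equalize(img: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    x = (img.astype(np.float64) / 255.0) ** gamma        # power-law transform
    x = np.log1p(x) / np.log(2.0)                        # log compression to [0, 1]
    x = np.round(x * 255).astype(np.uint8)

    m = int(x.mean())                                    # split point
    out = np.empty_like(x)
    for lo, hi, mask in [(0, m, x <= m), (m + 1, 255, x > m)]:
        vals = x[mask]
        if vals.size == 0:
            continue
        hist = np.bincount(vals, minlength=256).astype(np.float64)
        cdf = np.cumsum(hist) / vals.size
        lut = (lo + cdf * (hi - lo)).astype(np.uint8)    # equalize within sub-range
        out[mask] = lut[vals]
    return out

mri = (np.random.default_rng(4).random((128, 128)) * 120).astype(np.uint8)  # low-contrast stand-in
print(bi_histogram_equalize(mri).max())
```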
In order to obtain high-precision GPS control point results and provide high-precision known points for various projects, this study uses a variety of mature GPS post-processing software packages to process the observation data of the GPS control network of Guanyinge Reservoir and compares the results obtained by the different packages. Based on the test results, the reasons for the accuracy differences between the software packages are analyzed, and the optimal results are identified through this analysis and comparison. The purpose of this paper is to provide a useful reference for GPS software users processing such data.
Processes supported by process-aware information systems are subject to continuous and often subtle changes due to evolving operational, organizational, or regulatory factors. These changes, referred to as incremental concept drift, gradually alter the behavior or structure of processes, making their detection and localization a challenging task. Traditional process mining techniques frequently assume process stationarity and are limited in their ability to detect such drift, particularly from a control-flow perspective. The objective of this research is to develop an interpretable and robust framework capable of detecting and localizing incremental concept drift in event logs, with a specific emphasis on the structural evolution of control-flow semantics in processes. We propose DriftXMiner, a control-flow-aware hybrid framework that combines statistical, machine learning, and process model analysis techniques. The approach comprises three key components: (1) a Cumulative Drift Scanner that tracks directional statistical deviations to detect early drift signals; (2) a Temporal Clustering and Drift-Aware Forest Ensemble (DAFE) to capture distributional and classification-level changes in process behavior; and (3) Petri net-based process model reconstruction, which enables the precise localization of structural drift using transition deviation metrics and replay fitness scores. Experimental validation on the BPI Challenge 2017 event log demonstrates that DriftXMiner effectively identifies and localizes gradual and incremental process drift over time. The framework achieves a detection accuracy of 92.5%, a localization precision of 90.3%, and an F1-score of 0.91, outperforming competitive baselines such as CUSUM+Histograms and ADWIN+Alpha Miner. Visual analyses further confirm that identified drift points align with transitions in control-flow models and behavioral cluster structures. DriftXMiner offers a novel and interpretable solution for incremental concept drift detection and localization in dynamic, process-aware systems. By integrating statistical signal accumulation, temporal behavior profiling, and structural process mining, the framework enables fine-grained drift explanation and supports adaptive process intelligence in evolving environments. Its modular architecture supports extension to streaming data and real-time monitoring contexts.
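The first component can be illustrated with a CUSUM-style scanner; the per-window statistic stream and thresholds below are synthetic and illustrative, not the DriftXMiner implementation.

```python
# Accumulate directional deviations of a per-window process statistic and
# flag a drift point once the cumulative sum exceeds a threshold.
import numpy as np

rng = np.random.default_rng(5)
# e.g. mean trace fitness per window, drifting downward after window 60
stat = np.concatenate([rng.normal(0.90, 0.02, 60),
                       rng.normal(0.90, 0.02, 60) - np.linspace(0, 0.1, 60)])

target, slack, threshold = stat[:30].mean(), 0.01, 0.08
s_pos = s_neg = 0.0
for i, x in enumerate(stat):
    s_pos = max(0.0, s_pos + (x - target - slack))   # upward drift evidence
    s_neg = max(0.0, s_neg + (target - x - slack))   # downward drift evidence
    if s_neg > threshold or s_pos > threshold:
        print(f"drift signalled at window {i}")
        break
```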
The aging process is an inexorable fact throughout our lives and is considered a major factor in developing neurological dysfunctions associated with cognitive, emotional, and motor impairments. Aging-associated neurodegenerative diseases are characterized by the progressive loss of neuronal structure and function.
The complexity of the seismicity pattern for the subduction zone along the oceanic plate triggers outer rise events and reveals cyclic tectonic deformation conditions along plate subduction zones. Outer rise earthquakes have been observed along the Sunda arc, following the estimated rupture area of the 2005 Mw 8.6 Nias earthquake. Here, we used kinematic waveform inversion (KIWI) to obtain the source parameters of the 14 May 2021 Mw 6.6 event off the west coast of northern Sumatra and to define the fault plane that triggered this outer rise event. The KIWI algorithm allows two types of seismic source to be configured: a moment tensor model describing the type of shear with six moment tensor components, and an Eikonal model for the rupture of pure double-couple sources. This method was chosen for its flexibility across different sources of seismicity and for its automated full moment tensor solution with real-time monitoring. We used full waveform traces from 8 broadband seismic stations within 1000 km epicentral distance, sourced from the Incorporated Research Institutions for Seismology (IRIS-IDA) and GEOFON GFZ seismic record databases. The initial origin time and hypocenter values were obtained from IRIS-IDA. The synthetic seismograms used in the inversion are based on an existing regional Green's function database model, accessed from the KIWI Tools Green's Function Database. The obtained scalar seismic moment is 1.18×10^19 N·m, equivalent to a moment magnitude Mw 6.6. The source parameters are 140°, 44°, and −99° for the strike, dip, and rake at a centroid depth of 10.2 km, indicating that this event was a normal-fault earthquake that occurred in the outer rise area. Outer rise events with normal faults typically occur in the shallow part of the plate, with nodal-plane dips predominantly in the range of 30°-60°, on weak oceanic lithosphere affected by hydrothermal alteration. The stress regime around the plate subduction zone varies both temporally and spatially due to the cyclic influences of megathrust earthquakes. Tensional outer rise earthquakes tend to occur after megathrust events; their relative timing is not well constrained because of viscous relaxation of the downgoing slab and the poroelastic response in the trench slope region. The occurrence of the 14 May 2021 earthquake demonstrates seismicity in the outer rise region of the strongly coupled Sunda arc subduction zone due to elastic bending stress within the seismic cycle.
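As a quick consistency check on the quoted values, the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 − 9.1), with M0 in N·m, reproduces the reported magnitude:

```python
# Moment magnitude from the scalar seismic moment reported in the abstract.
import math

m0 = 1.18e19                        # scalar seismic moment, N*m
mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)
print(f"Mw = {mw:.1f}")             # -> Mw = 6.6, matching the reported magnitude
```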
In this study, an automated multimodal system for detecting, classifying, and counting date fruit was developed using a two-stage YOLOv11 pipeline. In the first stage, a YOLOv11 detection model locates individual date fruits in real time by drawing bounding boxes around them. These bounding boxes are then passed to a YOLOv11 classification model, which analyzes the cropped images and assigns class labels. An additional counting module automatically tallies the detected fruits, offering a near-instantaneous estimate of quantity. The experimental results show high precision and recall for detection, high classification accuracy (across 15 classes), and near-perfect counting, all while maintaining real-time throughput: detection precision exceeded 90%, classification accuracy approached 92%, and the counting module correlated closely with manual tallies. These findings confirm the potential for reducing manual labour and enhancing operational efficiency in post-harvest processes. Future studies will include dataset expansion, user-centric interfaces, and integration with harvesting robotics.
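The two-stage pipeline can be sketched as follows, assuming the ultralytics Python package; the weight files named here are hypothetical fine-tuned models, not artifacts released with the study.

```python
# Stage 1: detect date fruits; stage 2: classify each crop; stage 3: count.
from ultralytics import YOLO
import cv2

detector = YOLO("date_detect_yolo11.pt")        # hypothetical detection weights
classifier = YOLO("date_classify_yolo11.pt")    # hypothetical classification weights

img = cv2.imread("palm_bunch.jpg")              # hypothetical field image
boxes = detector(img)[0].boxes                  # stage 1: locate fruits

labels = []
for x1, y1, x2, y2 in boxes.xyxy.int().tolist():
    crop = img[y1:y2, x1:x2]                    # stage 2: classify each crop
    probs = classifier(crop)[0].probs
    labels.append(classifier.names[int(probs.top1)])

print(f"count: {len(labels)}", labels[:5])      # stage 3: automatic tally
```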
Covert timing channels (CTC) exploit network resources to establish hidden communication pathways, posing significant risks to data security and policy compliance. Therefore, detecting such hidden and dangerous threats remains one of the key security challenges. This paper proposes LinguTimeX, a new framework that combines natural language processing with artificial intelligence, along with explainable Artificial Intelligence (AI), not only to detect CTC but also to provide insights into the decision process. LinguTimeX performs multidimensional feature extraction by fusing linguistic attributes with temporal network patterns to identify covert channels precisely. LinguTimeX demonstrates strong effectiveness in detecting CTC across multiple languages, namely English, Arabic, and Chinese. Specifically, the LSTM and RNN models achieved F1 scores of 90% on the English dataset, 89% on the Arabic dataset, and 88% on the Chinese dataset, showcasing their superior performance and ability to generalize across multiple languages. This highlights their robustness in detecting CTCs within security systems, regardless of the language or cultural context of the data. In comparison, the DeepForest model produced F1-scores ranging from 86% to 87% across the same datasets, further confirming its effectiveness in CTC detection. Although other algorithms also showed reasonable accuracy, the LSTM and RNN models consistently outperformed them in multilingual settings, suggesting that deep learning models might be better suited for this particular problem.
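A minimal sketch, in PyTorch, of the kind of sequence model evaluated: an LSTM classifying a flow's inter-packet delay sequence as covert or benign. Features, sizes, and data are illustrative, not the study's configuration.

```python
# LSTM over inter-packet delays: the final hidden state feeds a linear head
# that scores covert vs. benign.
import torch
import torch.nn as nn

class CTCDetector(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)         # covert vs. benign

    def forward(self, delays):                   # delays: (batch, seq_len, 1)
        _, (h, _) = self.lstm(delays)
        return self.head(h[-1])

model = CTCDetector()
batch = torch.rand(8, 200, 1)                    # 8 flows, 200 inter-packet delays
logits = model(batch)
print(logits.shape)                              # torch.Size([8, 2])
```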
Woody biomass is a widely used and favourable material for energy production due to its carbon-neutral status. Energy is generally derived either through direct combustion or gasification. The Irish forestry sector is forecast to expand significantly in coming years, so the opportunity exists for the bioenergy sector to take advantage of material for which there will be no demand from current markets. A by-product of wood processing, wood dust is the cheapest form of wood material available to the bioenergy sector; currently it is primarily processed into wood pellets for energy generation. Research was conducted on post-processing birch wood dust: the calorific value and the Wobbe Index were determined for a number of wood particle sizes and wood dust concentrations. The Wobbe Index determined for the upper explosive concentration (4000 g/m³) falls within the range of that of hydrogen gas, and wood dust-air mixtures of this concentration could therefore behave in a similar manner in a gas turbine. Due to its slightly lower HHV (higher heating value) and higher particle density, however, alterations to the gas turbine would be necessary to accommodate wood dust and prevent abrasive damage to the turbine. As an unwanted by-product of wood processing, the direct use of wood dust in a gas turbine for energy generation could therefore have economic and environmental benefits.
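For reference, the Wobbe Index is the heating value divided by the square root of the specific gravity relative to air; a quick sketch with illustrative numbers follows (the mixture values are not measurements from the study).

```python
# Wobbe Index: heating value over the square root of specific gravity,
# used to compare the interchangeability of gaseous fuels.
import math

def wobbe_index(hhv_mj_per_m3: float, specific_gravity: float) -> float:
    """Wobbe Index in MJ/m^3 from HHV and density relative to air."""
    return hhv_mj_per_m3 / math.sqrt(specific_gravity)

hhv = 48.0               # illustrative HHV of a fuel mixture, MJ/m^3
sg = 1.05                # illustrative mixture density relative to air
print(f"WI = {wobbe_index(hhv, sg):.1f} MJ/m^3")
```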