The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecasting, and fisheries management. A catch curve-based method for estimating time-based Z and its trend of change from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not require the assumption of a constant Z throughout time; instead, the Z values in n continuous years are assumed constant, and the Z values in different spans of n continuous years are then estimated using the age-based CPUE data within those years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations in both Z and recruitment can affect the estimates of the Z value and the trend of Z. The most appropriate value of n can differ given the effects of different factors; therefore, the appropriate value of n for different fisheries should be determined through a simulation analysis as demonstrated in this study. Further analyses suggested that selectivity and age estimation are two additional factors that can affect the estimated Z values if there is error in either of them, but the estimated change rates of Z remain close to the true change rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
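The catch-curve idea behind this method can be sketched in a few lines: for fully recruited ages, ln(CPUE) declines approximately linearly with age at slope -Z, and the paper's relaxation is to estimate one Z per window of n consecutive years instead of a single Z for all time. The sketch below is illustrative only, not the authors' exact estimator; in particular, pooling a window's age-based CPUE by simple averaging is an assumption.

```python
import numpy as np

def estimate_z_catch_curve(ages, cpue):
    """Catch-curve estimate of instantaneous total mortality Z:
    the slope of ln(CPUE) vs age is -Z under the usual constant-Z,
    constant-recruitment assumptions."""
    slope, _ = np.polyfit(ages, np.log(cpue), 1)
    return -slope

def estimate_z_sliding(years, cpue_by_year, ages, n=3):
    """Time-based Z series: assume Z constant within each window of n
    consecutive years (the paper's relaxation of the traditional
    constant-Z-for-all-time assumption) and estimate one Z per window."""
    z_series = {}
    for i in range(len(years) - n + 1):
        window = years[i:i + n]
        # pool the window's age-based CPUE (simple mean, an assumption)
        pooled = np.mean([cpue_by_year[y] for y in window], axis=0)
        z_series[window[0]] = estimate_z_catch_curve(ages, pooled)
    return z_series

# synthetic cohort with a true Z of 0.5 per year
ages = np.arange(3, 10)
cpue = 1000.0 * np.exp(-0.5 * ages)
print(round(estimate_z_catch_curve(ages, cpue), 3))  # ≈ 0.5
```

On noise-free log-linear data the recovered slope matches the true Z exactly; the paper's simulations address how noise, recruitment variation, selectivity, and ageing error perturb this.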
Trust in a distributed environment is uncertain and varies with many factors. This paper introduces TDTM, a model for time-based dynamic trust. Every entity in the distributed environment is endowed with a trust vector, which records the trust intensity between this entity and the others. The trust intensity is dynamic owing to the passage of time and the inter-operations between two entities; a method is proposed to quantify this change based on the idea of the ant colony algorithm, and an algorithm for the transfer of trust relations is also proposed. Furthermore, this paper analyzes the influence on the trust intensities among all entities that is caused by a change of trust intensity between two entities, and presents an algorithm to resolve the problem. Finally, we show, through an example, the process of trust change caused by the lapse of time and by inter-operations.
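The abstract does not give TDTM's formulas, but the ant-colony analogy it invokes can be sketched as pheromone-style updates: trust evaporates as time passes, is reinforced by successful inter-operations, and trust transferred along a recommendation chain can be taken as the product of pairwise intensities. All constants and the product rule below are illustrative assumptions, not the paper's definitions.

```python
def trust_decay(trust, rho=0.1):
    # time-driven evaporation, like a pheromone trail fading (rho assumed)
    return (1.0 - rho) * trust

def trust_reinforce(trust, outcome, delta=0.2):
    # reinforcement after an inter-operation; outcome in [0, 1] (delta assumed)
    return min(1.0, trust + delta * outcome)

def transferred_trust(path):
    # trust along a chain A -> B -> C: product of pairwise intensities
    # (a common composition rule, assumed here)
    result = 1.0
    for t in path:
        result *= t
    return result

t = 0.5
t = trust_decay(t)            # one time step passes
t = trust_reinforce(t, 1.0)   # one fully successful interaction
print(round(t, 2))
```

The evaporation/reinforcement pair gives the intended dynamics: an entity that stops interacting sees its trust decay toward zero, while repeated successful inter-operations push it toward 1.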
The perception of a 3D space, in which movement takes place, is subjectively based on experience. Pedestrians' perception of subjective duration is one of the related issues that receives little attention in the urban design literature. Pedestrians often misperceive the time required to pass a certain distance. A wide range of factors affects one's perception of time in urban environments. These factors include individual factors (e.g., gender, age, and psychological state), social and cultural contexts, purpose and motivation for being in the space, and knowledge of the given area. This study aims to create an applied checklist that can be used by urban designers in analyzing the effects of individual experience on subjective duration. This checklist will enable urban designers to perform a phenomenological assessment of time perception and to compare this perception across different urban spaces, thereby improving pedestrians' experiences of time through purposeful design. A combination of exploratory and descriptive analytical research is used as the methodology due to the complexity of time perception.
Filtration efficiency of a portable air cleaner (PAC) is affected by residents' perceptions of, and adherence to, when and how to operate the PAC. Incorporating a PAC with smart control and sensor technology holds promise for effectively reducing indoor air pollutants. This study aims to evaluate the efficiency of a PAC at removing indoor fine particulate matter (PM_(2.5)) under two automated operation settings: (1) a time-based mode, in which the operation time is determined by residents' perceived periods of indoor pollution; and (2) a sensor-based mode, in which an air sensor monitor triggers the PAC based on the actual PM_(2.5) level relative to the indoor air quality guideline. The study was conducted in a residential room for 55 days with a rolling PAC setting (no filtration, sensor-based filtration, time-based filtration) and continuous measurement of PM_(2.5). We found that the PAC operated in sensor-based mode reduced PM_(2.5) concentrations by 47% and prolonged the clean-air (<35 μg/m^(3)) period by 23%, compared with time-based mode, which reduced PM_(2.5) by 29% and increased the clean-air period by 13%. The sensor-based filtration identified indoor pollution episodes that are hardly detected by personal perception. Our findings support an automated sensor-based approach to optimize the use of PACs for effectively reducing indoor PM_(2.5) exposure.
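The two operation modes compared in the study reduce to two very different control rules, which a short sketch makes concrete. The guideline threshold of 35 µg/m³ comes from the abstract; the perceived-pollution hours are hypothetical.

```python
PM25_GUIDELINE = 35.0  # µg/m³, the clean-air threshold used in the study

def sensor_based_control(pm25_reading, guideline=PM25_GUIDELINE):
    """Run the PAC whenever the monitored PM2.5 exceeds the guideline."""
    return pm25_reading > guideline

def time_based_control(hour, perceived_pollution_hours):
    """Run the PAC only during hours residents perceive as polluted."""
    return hour in perceived_pollution_hours

# a hypothetical 3 a.m. pollution episode at 48.2 µg/m³:
print(sensor_based_control(48.2))                    # caught by the sensor mode
print(time_based_control(3, {7, 8, 18, 19}))         # missed by the time-based mode
```

The comparison illustrates why the sensor-based mode outperformed the time-based one in the study: episodes outside the residents' perceived pollution windows are invisible to a fixed schedule but trivially caught by a threshold on the measured level.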
This paper introduces techniques in Gaussian process regression modeling for spatiotemporal data collected from complex systems. The study focuses on extracting local structures and then constructing surrogate models based on Gaussian process assumptions. The proposed Dynamic Gaussian Process Regression (DGPR) consists of a sequence of local surrogate models related to each other. In DGPR, time-based spatial clustering is carried out to divide the system into sub-spatio-temporal parts whose interiors have similar variation patterns, where the temporal information is used as prior information for training the spatial surrogate model. DGPR is robust, is especially suitable for loosely coupled model structures, and also allows for parallel computation. Numerical results on a test function show the effectiveness of DGPR. Furthermore, the shock tube problem is successfully approximated under different levels of phenomenon complexity.
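The cluster-then-fit structure of DGPR can be sketched with a minimal zero-mean GP per cluster. This is only a sketch under stated assumptions: the splitting on the median time stamp stands in for the paper's time-based spatial clustering, and the RBF kernel, length scale, and jitter are illustrative choices, not DGPR's actual components.

```python
import numpy as np

def rbf(X1, X2, ls=0.2):
    # squared-exponential kernel between two point sets (ls assumed)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

class LocalGP:
    """Minimal zero-mean GP regressor for one sub-spatio-temporal part."""
    def fit(self, X, y, noise=1e-8):
        self.X = X
        K = rbf(X, X) + noise * np.eye(len(X))  # jitter for stability
        self.alpha = np.linalg.solve(K, y)
        return self
    def predict(self, Xs):
        return rbf(Xs, self.X) @ self.alpha

def fit_dgpr(X, t, y, n_clusters=2):
    """Crude stand-in for time-based spatial clustering: split samples on
    the median time stamp, then fit one local GP surrogate per part.
    The per-part fits are independent, so they could run in parallel."""
    labels = (t > np.median(t)).astype(int)
    models = {c: LocalGP().fit(X[labels == c], y[labels == c])
              for c in range(n_clusters)}
    return models, labels

# toy data: the same spatial sites observed at two time levels
xs = np.linspace(0.0, 1.0, 8)
X = np.concatenate([xs, xs])[:, None]
t = np.concatenate([np.zeros(8), np.ones(8)])
y = np.sin(3 * X[:, 0]) + 0.5 * t
models, labels = fit_dgpr(X, t, y)
```

Because each local surrogate only sees its own sub-part, the kernel matrices stay small and the fits are embarrassingly parallel, which is the practical appeal the abstract points to.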
Full-waveform inversion (FWI) utilizes optimization methods to recover an optimal Earth model that best fits the observed seismic record in the sense of a predefined norm. Since FWI combines mathematical inversion and full-wave equations, it has been recognized as one of the key methods for seismic data imaging and Earth model building in the fields of global/regional and exploration seismology. Unfortunately, conventional FWI recovers the background velocity mainly by relying on refractions and turning waves, which are commonly rich at large offsets. By contrast, reflections at short offsets mainly contribute to the reconstruction of high-resolution interfaces. Restricted by acquisition geometries, refractions and turning waves in the record usually have limited penetration depth and may not reach oil/gas reservoirs. Thus, reflections in the record are the only source that carries information about these reservoirs. Consequently, it is meaningful to develop reflection-waveform inversion (RWI), which utilizes reflections to recover the background velocity, including the deep part of the model. This review: analyzes the weaknesses of FWI when inverting reflections; overviews the principles of RWI, including separation of the tomography and migration components, the objective functions, and constraints; summarizes the current status of RWI techniques; and outlines the future of RWI.
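For concreteness, the norm-fitting idea in the first sentence is most often written as a least-squares misfit; the generic L2 form below is the standard textbook formulation, one of several norms such a review may consider, not necessarily this review's exact choice:

```latex
\min_{m}\; J(m)=\frac{1}{2}\sum_{s,r}\int_{0}^{T}
\big|\,u(\mathbf{x}_r,t;m,\mathbf{x}_s)-d(\mathbf{x}_r,t;\mathbf{x}_s)\,\big|^{2}\,dt ,
```

where m is the Earth model, d the observed record at receiver x_r for source x_s, and u the wavefield synthesized from the full-wave equation. RWI then splits m into a smooth background plus a reflectivity perturbation, so that the tomography (background) and migration (reflectivity) components of the gradient can be separated and reflections can update the deep background velocity.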
The increasing use of the Internet in vehicles has made travel more convenient. However, hackers can attack intelligent vehicles through various technical loopholes, resulting in a range of security issues. Because of these security issues, safety protection technology for the in-vehicle system has become a focus of research. Using an advanced autoencoder network and a recurrent neural network from deep learning, we investigated an intrusion detection system for the in-vehicle system. We combined the two algorithms to realize efficient learning of the vehicle's boundary behavior and detection of intrusive behavior. To verify the accuracy and efficiency of the proposed model, it was evaluated using real vehicle data. The experimental results show that the combination of the two technologies can effectively and accurately identify abnormal boundary behavior. The parameters of the model are self-iteratively updated using the back-propagation-through-time algorithm. We verified that the model proposed in this study can reach a detection accuracy of nearly 96%.
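The reconstruction-error principle behind autoencoder-based intrusion detection can be illustrated with a minimal linear autoencoder (equivalent to PCA): train on normal in-vehicle traffic, then flag inputs the model cannot reconstruct. The paper's actual system uses a deep autoencoder combined with a recurrent network trained by back-propagation through time; this linear stand-in, with hypothetical data and threshold, only sketches the detection criterion.

```python
import numpy as np

def fit_linear_autoencoder(X_normal, k=2):
    """Fit a linear autoencoder (equivalent to PCA) on normal traffic
    features; the top-k right singular vectors give the encoder weights."""
    mu = X_normal.mean(0)
    _, _, Vt = np.linalg.svd(X_normal - mu, full_matrices=False)
    W = Vt[:k].T              # shared encoder/decoder weights
    return mu, W

def reconstruction_error(X, mu, W):
    Z = (X - mu) @ W          # encode to the k-dimensional latent space
    Xhat = Z @ W.T + mu       # decode back to feature space
    return np.linalg.norm(X - Xhat, axis=1)

def detect(X, mu, W, threshold):
    """Flag samples whose reconstruction error exceeds the threshold."""
    return reconstruction_error(X, mu, W) > threshold
```

Normal samples lie near the learned subspace and reconstruct almost perfectly, while an out-of-distribution message produces a large residual; in the deep, recurrent version the same criterion applies to sequences of CAN messages rather than single feature vectors.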
The continual growth in the use of technological appliances during the COVID-19 pandemic has resulted in a massive volume of data flow on the Internet, as many employees have transitioned to working from home. Furthermore, with the increasing adoption of encrypted data transmission by people who use a Virtual Private Network (VPN) or the Tor Browser (dark web) to keep their data private and hidden, network traffic encryption is rapidly becoming a universal approach. This affects and complicates the quality of service (QoS), traffic monitoring, and network security provided by Internet Service Providers (ISPs), particularly for analysis and anomaly detection approaches based on the nature of the network traffic. Categorizing encrypted traffic is one of the most challenging issues introduced by VPNs, which are used to bypass censorship as well as to gain access to geo-locked services. Therefore, an efficient approach is needed that enables the identification of encrypted network traffic data and the extraction and selection of valuable features, to improve quality of service and network management and to oversee overall performance. In this paper, the classification of network traffic data into VPN and non-VPN traffic is studied based on the efficiency of time-based features extracted from network packets. The paper suggests two machine learning models that categorize network traffic into encrypted and non-encrypted traffic. The proposed models utilize statistical features (SF), Pearson Correlation (PC), and a Genetic Algorithm (GA), preprocessing the traffic samples into NetFlow traffic to accomplish the experiment's objectives. The GA-based method uses a stochastic approach based on natural genetics and biological evolution to extract essential features. The PC-based method performs well in removing redundant features of network traffic. With a microsecond per-packet prediction time, the best model achieved an accuracy of more than 95.02 percent on the most demanding traffic classification task, a drop in accuracy of only 2.37 percent compared with the full statistical-based machine learning approach. This is extremely promising for the development of real-time traffic analyzers.
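The Pearson-correlation step can be sketched as a simple filter ranking: score each flow feature by its absolute correlation with the VPN/non-VPN label and keep the strongest k. The feature layout and k below are illustrative; the paper's full pipeline (SF preprocessing, GA search) is not reproduced.

```python
import numpy as np

def select_by_pearson(X, y, k=5):
    """Rank features (e.g., time-based flow features such as duration and
    inter-arrival times) by |Pearson r| against the binary VPN label and
    return the indices of the top k, plus all correlations."""
    y = y - y.mean()
    Xc = X - X.mean(0)
    r = (Xc * y[:, None]).sum(0) / (
        np.sqrt((Xc ** 2).sum(0)) * np.sqrt((y ** 2).sum()) + 1e-12)
    order = np.argsort(-np.abs(r))
    return order[:k], r
```

Because the score is computed per feature against the label only, the filter is cheap (one pass over the data) but, unlike the GA search, it cannot account for interactions between features; the abstract's results suggest the trade-off costs only a couple of accuracy points.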
We have designed a piezoresistive detector to detect the displacement of an accelerometer. We used a flexible contact force and impact time detector for sensing acceleration in the time domain. The advantages of this mechanism are good linearity, compactness, scalability, and the potential to realize a higher-precision accelerometer owing to time-based measurement. The estimated mechanical and electrical parameters of the beam detector are presented. We used COMSOL Multiphysics for designing the detector and MATLAB for analysis.
On-the-go soil sensors measuring apparent electrical conductivity (EC_a) in agricultural fields have provided valuable information to producers, consultants, and researchers for understanding soil spatial patterns and their relationship with crop components. Nevertheless, more information is needed in Mississippi, USA, on the longevity of EC_a measurements collected with an on-the-go soil sensor system; that information will be valuable to users interested in employing the technology to assist with management decisions. This study compared the spatial patterns of EC_a data collected at two different periods to determine the temporal stability of map products derived from the data. The study focused on data collected in 2016 and 2021 from a field plot consisting of clay and loam soils. Apparent electrical conductivity shallow (0 - 30 cm) and deep (0 - 90 cm) measurements were obtained with a mobile system. Descriptive statistics, Pearson correlation analysis, a paired t-test, and cluster analysis (k-means) were used to compare the data sets. Similar trends were evident in both datasets: apparent electrical conductivity deep measurements were greater than shallow measurements, and a high correlation (r > 0.90) existed between the EC_a shallow and deep measurements. Also, a high correlation (r ≥ 0.79) was observed between the EC_a measurements and the y-coordinates recorded by a global positioning system, indicating a spatial trend in the north-south direction of the plot. Comparable spatial patterns were observed between the years in the EC_a shallow and deep thematic maps developed via clustering. Apparent electrical conductivity measurement patterns were consistent over the five years of this study. Thus, the user has at least a five-year window from the first data collection to the next to determine the relationship of the EC_a data to other agronomic variables.
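The clustering step that produces the thematic (management-zone) maps can be sketched with one-dimensional k-means on a single EC_a variable. The readings below are hypothetical, and initializing the centers at the data-range endpoints is a simplification of the usual random initialization.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50):
    """Lloyd's algorithm on one EC_a variable: assign each reading to the
    nearest center, then move each center to the mean of its members."""
    centers = np.linspace(values.min(), values.max(), k)  # simplified init
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# hypothetical shallow EC_a readings (mS/m) forming two soil zones
eca = np.array([9.8, 10.1, 10.0, 10.3, 49.9, 50.2, 50.0, 49.7])
labels, centers = kmeans_1d(eca)
print(np.sort(centers))
```

Running the same clustering on both years' data and comparing the resulting zone labels is essentially how the study judged whether the 2016 and 2021 thematic maps show comparable spatial patterns.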
The development of energy- and cost-efficient IoT nodes is very important for the successful deployment of IoT solutions across various application domains. This paper presents energy models that enable the estimation of battery life for both time-based and event-based low-cost IoT monitoring nodes. These nodes are based on the low-cost ESP8266 (ESP) modules, which integrate both transceiver and microcontroller on a single small chip and cost only about $2. The active/sleep energy-saving approach was used in the design of the IoT monitoring nodes because the power consumption of ESP modules is relatively high and often impacts negatively on the cost of operating the nodes. A low-energy application-layer protocol, Message Queue Telemetry Transport (MQTT), was also employed for energy-efficient wireless data transport. Finite automata theory was used to model the various states and behaviors of the ESP modules used in IoT monitoring applications. The applicability of the models was tested in real-life application scenarios, and results are presented. In a temperature and humidity monitoring node, for example, the model shows a significant reduction in average current consumption from 70.89 mA to 0.58 mA for sleep durations of 0 and 30 minutes, respectively. The battery life of batteries rated in mAh can therefore be easily calculated from the current consumption figures.
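In the simplest duty-cycled case, the battery-life estimation these models enable reduces to a time-weighted average current. The sketch below assumes a roughly 15-second active burst and a deep-sleep current of about 20 µA, figures typical of ESP8266-class nodes but not taken from the paper; with those assumptions it lands close to the reported 0.58 mA at a 30-minute sleep interval.

```python
def avg_current_ma(i_active_ma, t_active_s, i_sleep_ma, t_sleep_s):
    """Time-weighted average current of an active/sleep duty-cycled node."""
    period_s = t_active_s + t_sleep_s
    return (i_active_ma * t_active_s + i_sleep_ma * t_sleep_s) / period_s

def battery_life_h(capacity_mah, i_avg_ma):
    """Idealized battery life for a capacity rated in mAh."""
    return capacity_mah / i_avg_ma

# assumed figures: 70.89 mA active for ~15 s, ~0.02 mA in deep sleep
i_avg = avg_current_ma(70.89, 15.0, 0.02, 30 * 60)
print(round(i_avg, 2), "mA")   # close to the paper's 0.58 mA at 30-min sleep
print(round(battery_life_h(2000, i_avg)), "h")  # for a hypothetical 2000 mAh cell
```

With a sleep duration of zero the average collapses to the active current (the paper's 70.89 mA case), which is why lengthening the sleep interval dominates every other optimization for these nodes.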
This paper describes a novel energy-efficient, high-speed ADC architecture combining a flash ADC and a time-to-digital converter (TDC). A high conversion rate is obtained owing to the flash coarse ADC, and low power dissipation is attained by using the TDC as a fine ADC. Moreover, a capacitively coupled ramp circuit is proposed to achieve high linearity. A test chip was fabricated in 65-nm digital CMOS technology. The test chip demonstrated a high sampling frequency of 500 MHz and a low power dissipation of 2.0 mW, resulting in a low FOM of 32 fJ/conversion-step.
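The quoted figure of merit follows the usual Walden definition, FOM = P / (2^ENOB · f_s). The abstract does not report the ENOB, so the 7-bit value below is an assumption chosen to be consistent with the reported 2.0 mW, 500 MS/s, and ≈32 fJ/step figures.

```python
import math

def walden_fom_fj(power_w, fs_hz, enob_bits):
    """Walden figure of merit in fJ/conversion-step."""
    return power_w / ((2 ** enob_bits) * fs_hz) * 1e15

def implied_enob(power_w, fs_hz, fom_fj):
    """ENOB implied by a reported Walden FOM (inverting the formula)."""
    return math.log2(power_w * 1e15 / (fom_fj * fs_hz))

print(round(walden_fom_fj(2.0e-3, 500e6, 7.0), 2))  # fJ/step for an assumed 7-bit ENOB
print(round(implied_enob(2.0e-3, 500e6, 32.0), 2))  # ENOB implied by the reported FOM
```

Inverting the formula with the reported numbers gives an ENOB just under 7 bits, which is the consistency check the sketch performs.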
Purpose: Leagile manufacturing is one of the time-based manufacturing practices used to improve factory performance. It is a practice that combines initiatives of lean and agile manufacturing under certain enabling competences. The purpose of this study is therefore to investigate the combinative nature of time-based manufacturing practices under unique enabling competences and their impact on the performance of factories in Uganda. Methodology: First, the underlying factor structure of competences and time-based manufacturing was examined using Principal Component Analysis (PCA). Enabling competences and time-based manufacturing practices were modelled and validated using confirmatory factor analysis, in particular composite reliability, average variance extracted, and convergent validity. A fully fledged structural equation model was used to test the impact of leagile manufacturing on factory performance. Findings: The results revealed that the time-based manufacturing practices of lean and leagile are related but differ in their enabling competences and philosophical orientation. The findings also revealed that when small and medium factories in Uganda adopt the leagile practice, they are not likely to improve their performance, perhaps because small and medium factories have inadequate resources. Practical implications: The findings shed more insight on the factors that enable the adoption and implementation of time-based manufacturing practices. The extent to which these competences are orchestrated determines the benefits derived from the time-based manufacturing practices. In addition, small and medium enterprises should keenly choose the appropriate practices that purposely reduce their lead time and cost of conversion. Originality: This study investigated the combinative nature of time-based manufacturing practices under unique enabling competences and their impact on the performance of factories in Uganda. It is among the few studies that provide evidence on the leagile model anchored in appropriate enabling competences in the context of developing countries. An empirical survey of small and medium factories was conducted to validate a leagile manufacturing model and test its impact on factory performance.
In distribution systems, the scientific management and control of inventory replenishment policies has long been a research focus, yet no effective method has been available to improve the operational efficiency of the traditional replenishment policies. Building on the two traditional replenishment policies, EB (echelon-based) and TB (time-based), and aiming to reduce the extreme cases of the EB and TB policies, a Hybrid Based Policy 1 (HB1) and a Hybrid Based Policy 2 (HB2) are proposed, and the advantages of HB1 and HB2 are then combined to form a Re-Hybrid Policy (RH). Numerical experiments show that HB1 and HB2 improve the total-cost ratios of EB and TB to different degrees, and that RH can further improve the total-cost ratios of HB1 and HB2.
Hazardous wastes pose increasing threats to people and the environment during the processes of offsite collection, storage, treatment, and disposal. A novel two-level game-theoretic model is developed for the corresponding optimization of emergency logistics, where the upper level addresses the location and capacity problem for the regulator, and the lower level reflects the allocation problem for the emergency commander. Unlike other works in the literature, we focus on the issue of multi-quality coverage (full and partial coverage) in the optimization of facility location and allocation. Specifically, the regulator decides the location plan and the corresponding capacity for storing emergency groups for multiple types of hazmats, so as to minimize the total potential environmental risk posed by incident sites, while the commander minimizes the total cost to provide an efficient allocation policy. To solve the bi-level programming model, two solution techniques, a KKT-condition approach and a heuristic model, are designed and compared. The proposed model and solution techniques are then applied to a hypothetical case and a real-world case to demonstrate their practicality and provide managerial insights.
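The two-level structure can be written as a generic bi-level program; the symbols below are schematic (the paper's exact risk and cost functions are not reproduced):

```latex
\begin{aligned}
\min_{x \in X} \quad & R\big(x,\, y^{*}(x)\big)
&& \text{(upper level: regulator's location/capacity plan $x$, risk $R$)}\\
\text{s.t.} \quad & y^{*}(x) \in \operatorname*{arg\,min}_{y \in Y(x)} \; C(x, y)
&& \text{(lower level: commander's allocation $y$, cost $C$)}
\end{aligned}
\]
```

Replacing the lower-level arg-min with its KKT conditions turns the bi-level program into a single-level problem with complementarity constraints, which is the first of the two solution techniques the abstract mentions; the heuristic is the fallback when that reformulation becomes intractable.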
A cryogenic successive approximation register (SAR) analog-to-digital converter (ADC) is presented. It has been designed to operate in cryogenic infrared readout systems as they are cooled from room temperature to their final cryogenic operating temperature. To preserve the circuit's performance over this wide temperature range, a temperature-compensated time-based comparator architecture is used in the ADC, which provides steady performance with ultra-low power for extreme-temperature operation (from room temperature down to 77 K). The converter, implemented in a standard 0.35 μm CMOS process, exhibits 0.64 LSB maximum differential nonlinearity (DNL) and 0.59 LSB maximum integral nonlinearity (INL). It achieves a 9.3-bit effective number of bits (ENOB) at a 200 kS/s sampling rate at 77 K, dissipating 0.23 mW under a 3.3 V supply voltage, and occupies 0.8 × 0.3 mm^2.
Funding: Supported by the USDA Cooperative State Research, Education and Extension Service, Hatch Project (No. 0210510); the National Natural Science Foundation of China (Nos. 31270527, 40801225); the Natural Science Foundation of Zhejiang Province (No. LY13D010005); and the Young Academic Leaders Climbing Program of Zhejiang Province (No. pd2013222).
Funding: Supported by the National Natural Science Foundation of China (60403027); the Natural Science Foundation of Hubei Province (2005ABA258); and the Open Foundation of the State Key Laboratory of Software Engineering (SKLSE05-07).
Funding: Supported by the start-up funding of the University at Buffalo.
Funding: Co-supported by the National Natural Science Foundation of China (No. 12101608); the NSAF (No. U2230208); and the Hunan Provincial Innovation Foundation for Postgraduate, China (No. CX20220034).
Funding: Supported by the National Key R&D Program of China (No. 2018YFA0702502); NSFC (Grant No. 41974142); and the Science Foundation of China University of Petroleum, Beijing (No. 2462019YJRC005).
Funding: This work was supported by Research on the Influences of Network Security Threat Intelligence on Sichuan Government and Enterprises and the Development Countermeasure (Project ID 2018ZR0220); Research on Key Technologies of Network Security Protection in Intelligent Vehicle Based on (Project ID 2018JY0510); Research on Abnormal Behavior Detection Technology of Automotive CAN Bus Based on Information Entropy (Project ID 2018Z105); Research on the Training Mechanism of Driverless Network Safety Talents for Sichuan Auto Industry Based on Industry-University Synergy (Project ID 18RKX0667); Research and Implementation of Traffic Cooperative Perception and Traffic Signal Optimization of Main Road (Project ID 2018YF0500707SN); Research and Implementation of Intelligent Traffic Control and Monitoring System (Project ID 2019YGG0201); and Remote Upgrade System of Intelligent Vehicle Software (Project ID 2018GZDZX0011).
Abstract: The increasing use of the Internet in vehicles has made travel more convenient. However, hackers can attack intelligent vehicles through various technical loopholes, resulting in a range of security issues. Because of these issues, safety protection technology for the in-vehicle system has become a focus of research. Using advanced autoencoder networks and recurrent neural networks from deep learning, we investigated an intrusion detection system for the in-vehicle system. We combined the two algorithms to realize efficient learning of the vehicle's boundary behavior and detection of intrusive behavior. To verify the accuracy and efficiency of the proposed model, it was evaluated using real vehicle data. The experimental results show that the combination of the two technologies can effectively and accurately identify abnormal boundary behavior. The parameters of the model are self-iteratively updated using the back-propagation-through-time algorithm. We verified that the model proposed in this study can reach a detection accuracy of nearly 96%.
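The paper's detection step can be pictured with the rule commonly used in autoencoder-based anomaly detection: a frame is flagged when its reconstruction error exceeds a learned boundary. The sketch below is illustrative only; the toy autoencoder, frame values, and threshold are assumptions, not details from the paper.

```python
# Hedged sketch: reconstruction-error thresholding for intrusion detection.
# The "autoencoder" here is a stand-in; a real system would use a trained network.

def reconstruction_error(frame, reconstruct):
    """Mean squared error between an observed frame and its reconstruction."""
    recon = reconstruct(frame)
    return sum((x - y) ** 2 for x, y in zip(frame, recon)) / len(frame)

def is_intrusion(frame, reconstruct, threshold):
    """Flag a frame whose reconstruction error exceeds the learned boundary."""
    return reconstruction_error(frame, reconstruct) > threshold

def toy_autoencoder(frame):
    # Stand-in for a trained autoencoder: shrinks inputs toward their mean,
    # so frames far from the training distribution reconstruct poorly.
    mean = sum(frame) / len(frame)
    return [0.9 * x + 0.1 * mean for x in frame]

normal = [0.5, 0.52, 0.48, 0.5]   # hypothetical in-distribution frame
attack = [0.5, 5.0, 0.5, 0.5]     # hypothetical out-of-distribution frame
print(is_intrusion(normal, toy_autoencoder, threshold=0.01))
print(is_intrusion(attack, toy_autoencoder, threshold=0.01))
```

The threshold plays the role of the "boundary behavior" the paper's model learns from normal driving data.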
Abstract: The continual growth in the use of technological appliances during the COVID-19 pandemic has resulted in a massive volume of data flow on the Internet, as many employees have transitioned to working from home. Furthermore, with more people adopting encrypted data transmission, tending to use a Virtual Private Network (VPN) or the Tor Browser (dark web) to keep their data private and hidden, network traffic encryption is rapidly becoming a universal approach. This affects and complicates the quality of service (QoS), traffic monitoring, and network security provided by Internet Service Providers (ISPs), particularly for analysis and anomaly detection approaches based on the nature of the network traffic. Categorizing encrypted traffic is one of the most challenging issues introduced by VPNs, which are used to bypass censorship as well as to gain access to geo-locked services. Therefore, an efficient approach is needed to identify encrypted network traffic and to extract and select valuable features, improving quality of service and network management as well as oversight of overall performance. In this paper, the classification of network traffic into VPN and non-VPN traffic is studied based on the efficiency of time-based features extracted from network packets. The paper proposes two machine learning models that categorize network traffic into encrypted and non-encrypted traffic. The proposed models utilize statistical features (SF), Pearson Correlation (PC), and a Genetic Algorithm (GA), preprocessing the traffic samples into NetFlow traffic to accomplish the experiment's objectives. The GA-based method uses a stochastic search based on natural genetics and biological evolution to extract essential features, while the PC-based method performs well in removing redundant features of network traffic. With a microsecond per-packet prediction time, the best model achieved an accuracy of more than 95.02 percent on the most demanding traffic classification task, a drop in accuracy of only 2.37 percent compared with the full statistical-feature-based machine learning approach. This is extremely promising for the development of real-time traffic analyzers.
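The PC-based step can be sketched as a greedy correlation filter over flow features: keep a feature only if it is not highly correlated with one already kept. The feature names, sample values, and cutoff below are hypothetical, not the paper's dataset.

```python
# Illustrative sketch of Pearson-correlation-based feature selection.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_features(features, cutoff=0.95):
    """Greedy redundancy filter: drop a feature when it is nearly
    collinear (|r| >= cutoff) with a feature already kept."""
    kept = []
    for name, values in features.items():
        if all(abs(pearson(values, features[k])) < cutoff for k in kept):
            kept.append(name)
    return kept

# Hypothetical per-flow time-based features.
flows = {
    "flow_duration":  [1.0, 2.0, 3.0, 4.0, 5.0],
    "total_fwd_time": [2.1, 4.0, 6.2, 8.1, 9.9],   # nearly duplicates duration
    "mean_idle_time": [5.0, 1.0, 4.0, 2.0, 3.0],
}
print(select_features(flows))
```

Here `total_fwd_time` is dropped because it is almost perfectly correlated with `flow_duration`, mirroring how the PC step removes redundant traffic features before classification.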
Abstract: We have designed a piezoresistive detector to detect the displacement of an accelerometer. We used a flexible contact-force and impact-time detector for sensing acceleration in the time domain. The advantages of this mechanism are good linearity, compactness, and scalability, as well as the potential to realize a higher-precision accelerometer thanks to time-based measurement. The estimated mechanical and electrical parameters of the beam detector are presented. We used COMSOL Multiphysics to design the detector and Matlab for analysis.
Abstract: On-the-go soil sensors measuring apparent electrical conductivity (EC<sub>a</sub>) in agricultural fields have provided valuable information to producers, consultants, and researchers for understanding soil spatial patterns and their relationship with crop components. Nevertheless, more information is needed in Mississippi, USA, on the longevity of EC<sub>a</sub> measurements collected with an on-the-go soil sensor system. That information will be valuable to users interested in employing the technology to assist with management decisions. This study compared the spatial patterns of EC<sub>a</sub> data collected in two different periods to determine the temporal stability of map products derived from the data. The study focused on data collected in 2016 and 2021 from a field plot consisting of clay and loam soils. Apparent electrical conductivity shallow (0 - 30 cm) and deep (0 - 90 cm) measurements were obtained with a mobile system. Descriptive statistics, Pearson correlation analysis, a paired t-test, and cluster analysis (k-means) were used to compare the data sets. Similar trends were evident in both datasets: apparent electrical conductivity deep measurements were greater than the shallow measurements, and a high correlation (r > 0.90) existed between the EC<sub>a</sub> shallow and deep measurements. Also, a high correlation (r ≥ 0.79) was observed between the EC<sub>a</sub> measurements and the y-coordinates recorded by a global positioning system, indicating a spatial trend in the north-south direction (and vice versa) of the plot. Comparable spatial patterns were observed between the years in the EC<sub>a</sub> shallow and deep thematic maps developed via clustering. Apparent electrical conductivity measurement patterns were consistent over the five years of this study. Thus, the user has at least a five-year window from the first data collection to the next to determine the relationship of the EC<sub>a</sub> data to other agronomic variables.
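The clustering step behind the thematic maps can be sketched with a minimal one-dimensional k-means over scalar EC<sub>a</sub> readings. The readings and initial centers below are made-up illustrative values, not data from the study.

```python
# Minimal 1-D k-means (Lloyd's algorithm) sketch of the thematic-map
# clustering step; values are hypothetical ECa readings in mS/m.

def kmeans_1d(values, centers, iters=20):
    """Assign each reading to its nearest center, then recompute each
    center as the mean of its members; repeat for a fixed iteration count.
    Returns the final centers and the zone index of each reading."""
    zones = [0] * len(values)
    for _ in range(iters):
        zones = [min(range(len(centers)), key=lambda i: abs(v - centers[i]))
                 for v in values]
        for i in range(len(centers)):
            members = [v for v, z in zip(values, zones) if z == i]
            if members:
                centers[i] = sum(members) / len(members)
    return centers, zones

# Hypothetical shallow ECa readings: a low-conductivity loam zone
# and a high-conductivity clay zone.
eca = [12.0, 14.0, 13.5, 41.0, 39.5, 43.0]
centers, zones = kmeans_1d(eca, centers=[10.0, 50.0])
print(centers)
print(zones)
```

Running the same clustering on two years of data and comparing the resulting zone maps is, in spirit, how the study judged temporal stability.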
Abstract: The development of energy- and cost-efficient IoT nodes is very important for the successful deployment of IoT solutions across various application domains. This paper presents energy models, which enable the estimation of battery life, for both time-based and event-based low-cost IoT monitoring nodes. These nodes are based on the low-cost ESP8266 (ESP) modules, which integrate both transceiver and microcontroller on a single small-size chip and cost only about $2. The active/sleep energy-saving approach was used in the design of the IoT monitoring nodes because the power consumption of ESP modules is relatively high and often impacts negatively on the cost of operating the nodes. A low-energy application-layer protocol, Message Queue Telemetry Transport (MQTT), was also employed for energy-efficient wireless data transport. Finite automata theory was used to model the various states and behavior of the ESP modules used in IoT monitoring applications. The applicability of the models was tested in real-life application scenarios, and results are presented. In a temperature and humidity monitoring node, for example, the model shows a significant reduction in average current consumption from 70.89 mA to 0.58 mA for sleep durations of 0 and 30 minutes, respectively. The battery life of batteries rated in mAh can therefore be easily calculated from the current-consumption figures.
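The battery-life estimate the abstract alludes to reduces to a duty-cycle average: weight the active and sleep currents by their durations, then divide the battery's mAh rating by the average current. The currents, durations, and capacity below are assumed example figures, not measurements from the paper.

```python
# Back-of-the-envelope sketch of a duty-cycled battery-life estimate.

def average_current_ma(i_active_ma, t_active_s, i_sleep_ma, t_sleep_s):
    """Time-weighted average current over one active/sleep cycle."""
    cycle = t_active_s + t_sleep_s
    return (i_active_ma * t_active_s + i_sleep_ma * t_sleep_s) / cycle

def battery_life_hours(capacity_mah, avg_current_ma):
    """Idealized battery life for a battery rated in mAh
    (ignores self-discharge and temperature effects)."""
    return capacity_mah / avg_current_ma

# Assumed figures: 70 mA while sensing/transmitting for 2 s,
# 0.02 mA in deep sleep for 30 minutes, powered by a 2000 mAh battery.
avg = average_current_ma(70.0, 2.0, 0.02, 30 * 60)
print(round(avg, 3), "mA average")
print(round(battery_life_hours(2000.0, avg)), "hours")
```

This mirrors how a long sleep interval pulls the average current down by orders of magnitude, which is the effect the paper's 70.89 mA versus 0.58 mA comparison demonstrates.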
Abstract: This paper describes a novel energy-efficient, high-speed ADC architecture combining a flash ADC and a time-to-digital converter (TDC). A high conversion rate is obtained owing to the flash coarse ADC, and low power dissipation is attained by using the TDC as a fine ADC. Moreover, a capacitively coupled ramp circuit is proposed to achieve high linearity. A test chip was fabricated using 65-nm digital CMOS technology. The test chip demonstrated a high sampling frequency of 500 MHz and a low power dissipation of 2.0 mW, resulting in a low FOM of 32 fJ/conversion-step.
Abstract: Purpose: Leagile manufacturing is one of the time-based manufacturing practices used to improve factory performance. It is a practice that combines lean and agile manufacturing initiatives under certain enabling competences. Therefore, the purpose of this study is to investigate the combinative nature of time-based manufacturing practices under unique enabling competences and their impact on the performance of factories in Uganda. Methodology: First, the underlying factor structure of competences and time-based manufacturing was examined using Principal Component Analysis (PCA). Enabling competences and time-based manufacturing practices were then modelled and validated using confirmatory factor analysis, in particular composite reliability, average variance extracted, and convergent validity. A full structural equation model was used to test the impact of leagile manufacturing on factory performance. Findings: The study results revealed that the lean and leagile time-based manufacturing practices are related but differ in terms of their enabling competences and philosophical orientation. The findings also revealed that when small and medium factories in Uganda adopt the leagile practice, they are not likely to improve their performance, perhaps because small and medium factories have inadequate resources. Practical Implications: The study findings shed more insight on the factors that enable the adoption and implementation of time-based manufacturing practices. The extent to which these competences are orchestrated determines the benefits derived from the time-based manufacturing practices. In addition, small and medium enterprises should carefully choose the practices that purposely reduce their lead time and cost of conversion. Originality: This study investigated the combinative nature of time-based manufacturing practices under unique enabling competences and their impact on the performance of factories in Uganda. It is among the few studies that provide evidence on a leagile model anchored in the appropriate enabling competences in the context of developing countries. The empirical survey was conducted on small and medium factories to validate a leagile manufacturing model and test its impact on factory performance.
Abstract: In distribution systems, the scientific management and control of inventory replenishment policies has long been a research focus, yet academia has lacked an effective way to improve the operational efficiency of the traditional replenishment policies. Building on two traditional replenishment policies, the echelon-based (EB) and time-based (TB) policies, and aiming to reduce the extreme cases of EB and TB, this paper proposes Hybrid Based Policy 1 (HB1) and Hybrid Based Policy 2 (HB2), and then combines the advantages of HB1 and HB2 to form the Re-Hybrid Policy (RH). Numerical experiments show that HB1 and HB2 improve the total-cost ratios of EB and TB to varying degrees, and that RH can effectively improve the total-cost ratios of HB1 and HB2.
Funding: This work was supported by grants from the National Natural Science Foundation of China under grant No. 61803091, the Natural Science Foundation of Guangdong Province under grant No. 2016A030310263, as well as a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada under grant No. RGPIN-2015-04013.
Abstract: Hazardous wastes pose increasing threats to people and the environment during the processes of offsite collection, storage, treatment, and disposal. A novel two-level game-theoretic model is developed for the corresponding optimization of emergency logistics, where the upper level addresses the location and capacity problem for the regulator, and the lower level reflects the allocation problem for the emergency commander. Different from other works in the literature, we focus on the issue of multi-quality coverage (full and partial coverage) in the optimization of facility location and allocation. To be specific, the regulator decides the location plan and the corresponding capacity for stationing emergency groups for multiple types of hazmats, so as to minimize the total potential environmental risk posed by incident sites, while the commander minimizes the total costs to provide an efficient allocation policy. To solve the bi-level programming model, two solution techniques, namely a KKT-condition approach and a heuristic algorithm, are designed and compared. The proposed model and solution techniques are then applied to a hypothetical case and a real-world case to demonstrate their practicality and provide managerial insights.
Abstract: A cryogenic successive approximation register (SAR) analog-to-digital converter (ADC) is presented. It has been designed to operate in cryogenic infrared readout systems as they are cooled from room temperature to their final cryogenic operating temperature. In order to preserve the circuit's performance over this wide temperature range, a temperature-compensated time-based comparator architecture is used in the ADC, which provides steady performance with ultra-low power for extreme-temperature operation (from room temperature down to 77 K). The converter, implemented in a standard 0.35 μm CMOS process, exhibits 0.64 LSB maximum differential nonlinearity (DNL) and 0.59 LSB maximum integral nonlinearity (INL). It achieves a 9.3-bit effective number of bits (ENOB) at a 200 kS/s sampling rate at 77 K, dissipating 0.23 mW under a 3.3 V supply voltage, and occupies 0.8 × 0.3 mm^2.