Concern for individual perception is essential to enhance greenspace management. Various landscape elements are key factors affecting visitors’ perception when engaging with greenspaces. Targeting Belgian public greenspaces, we develop a comprehensive approach to quantify visitors’ perceptions along multiple dimensions. Applying user-generated data and an unsupervised machine learning approach, we identified the landscape elements and classified the greenspaces to extract perception rates and detect dominant elements. The satisfaction with every landscape element was then analyzed by a natural language processing approach and standardized major axis regression to discover their contributions to overall satisfaction. Furthermore, we calculated and visualized the positive and negative interactions between elements through network analysis. Integrating the perception rates and contributions, an inconsistency was observed between the dominant element and the most contributing element. The perception rate of the human element was in an overwhelmingly dominant position, at 2.46. Despite the variations among the 5 greenspace groups, multiple natural elements contributed highly to overall satisfaction, especially animal and vegetation, which achieved contributions higher than 1.2 in most of the groups. Regarding the interactions, stronger negative interactions appeared generally, reaching up to 0.496. The coexistence of natural and artificial elements has a stronger collective effect on greenspace perception, regardless of positive or negative interaction. By providing an understanding of the landscape elements, our findings can assist greenspace planners in identifying key factors of different greenspace categories from various perspectives and support explicit and effective greenspace management.
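The standardized major axis regression mentioned above has a simple closed form: the slope magnitude is the ratio of the two standard deviations, signed by the Pearson correlation. A minimal sketch in Python, with hypothetical satisfaction scores (the values and variable names are illustrative, not from the study):

```python
import math

def sma_fit(x, y):
    """Standardized major axis regression: |slope| = sd(y)/sd(x),
    signed by the Pearson correlation between x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    r = sxy / math.sqrt(sxx * syy)                  # Pearson correlation
    slope = math.copysign(math.sqrt(syy / sxx), r)  # SMA slope
    intercept = my - slope * mx
    return slope, intercept

# hypothetical element-level satisfaction (x) vs. overall satisfaction (y)
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 4.0, 6.2, 7.9]
slope, intercept = sma_fit(x, y)
```

Unlike ordinary least squares, SMA treats both variables as measured with error, which suits the element-vs-overall satisfaction setting.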
Deep neural networks provide accurate results for most applications. However, they need a big dataset to train properly, and providing a big dataset is a significant challenge in most applications. Image augmentation refers to techniques that increase the amount of image data. Common operations for image augmentation include changes in illumination, rotation, contrast, size, viewing angle, and others. Recently, Generative Adversarial Networks (GANs) have been employed for image generation. However, like image augmentation methods, GAN approaches can only generate images that are similar to the original images; therefore, they also cannot generate new classes of data. Texture images present more challenges than general images, and generating textures is more complex than creating other types of images. This study proposes a gradient-based deep neural network method that generates a new class of texture. It is possible to rapidly generate new classes of textures using different kernels from pre-trained deep networks. After generating new textures for each class, the number of textures increases through image augmentation. During this process, several techniques are proposed to automatically remove incomplete and similar textures that are created. The proposed method is faster than some well-known generative networks by around 4 to 10 times. In addition, the quality of the generated textures surpasses that of these networks: the proposed method can generate textures that surpass those of some GANs and parametric models in certain image quality metrics. It can provide a big texture dataset to train deep networks. A new big texture dataset, called BigTex, is created artificially using the proposed method. This dataset is approximately 2 GB in size and comprises 30,000 textures, each 150×150 pixels in size, organized into 600 classes. It has been uploaded to the Kaggle site and Google Drive. Compared to other texture datasets, the proposed dataset is the largest and can serve as a comprehensive texture dataset for training more powerful deep neural networks and mitigating overfitting.
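Two of the augmentation operations listed above (rotation and contrast change) can be sketched on a tiny grayscale tile. This is a generic illustration of such operations, not the paper's implementation:

```python
def rotate90(img):
    """Rotate a 2D grayscale image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def adjust_contrast(img, factor, pivot=128):
    """Scale pixel deviations from a pivot value, clamped to [0, 255]."""
    return [[max(0, min(255, int(pivot + (p - pivot) * factor))) for p in row]
            for row in img]

tile = [[10, 200], [90, 140]]       # a made-up 2x2 texture patch
rotated = rotate90(tile)            # [[90, 10], [140, 200]]
contrasty = adjust_contrast(tile, 1.5)
```

Each operation yields a new labeled sample for the same class, which is exactly how augmentation multiplies a texture dataset.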
This paper explores the data theory of value along the line of reasoning “epochal characteristics of data, theoretical innovation, paradigmatic transformation” and, through a comparison of hard and soft factors and observation of data’s peculiar features, draws the conclusion that data have the epochal characteristics of non-competitiveness and non-exclusivity, decreasing marginal cost and increasing marginal return, non-physical and intangible form, and non-finiteness and non-scarcity. It is these epochal characteristics of data that undermine the traditional theory of value and innovate the “production-exchange” theory, including data value generation, data value realization, data value rights determination and data value pricing. From the perspective of data value generation, the levels of data quality, processing, use and connectivity, data application scenarios and data openness will influence data value. From the perspective of data value realization, data, as independent factors of production, show a value creation effect, create a value multiplier effect by empowering other factors of production, and substitute other factors of production to create a zero-price effect. From the perspective of data value rights determination, based on the theory of property, the tragedy of the private outweighs the comedy of the private with respect to data, and based on the theory of the sharing economy, the comedy of the commons outweighs the tragedy of the commons with respect to data. From the perspective of data pricing, standardized data products can be priced according to physical product attributes, and non-standardized data products can be priced according to virtual product attributes. Based on the epochal characteristics of data and theoretical innovation, the “production-exchange” paradigm has undergone a transformation from “using tangible factors to produce tangible products and exchanging tangible products for tangible products” to “using intangible factors to produce tangible products and exchanging intangible products for tangible products” and ultimately to “using intangible factors to produce intangible products and exchanging intangible products for intangible products”.
Predicting the lithium-ion (Li-ion) battery degradation trajectory in the early phase is of great importance for arranging the maintenance of battery energy storage systems. However, under different operating conditions, Li-ion batteries present distinct degradation patterns, and it is challenging to capture the negligible capacity fade in early cycles. Although data-driven methods show promising performance, insufficient data is still a big issue, since ageing experiments on batteries are slow and expensive. In this study, we propose twin autoencoders integrated into a two-stage method to predict degradation trajectories from early cycles. The two-stage method predicts the degradation from coarse to fine. The twin autoencoders serve as a feature extractor and a synthetic data generator, respectively. Ultimately, a learning procedure based on the long short-term memory (LSTM) network is designed to hybridize the learning process between the real and synthetic data. The performance of the proposed method is verified on three datasets, and the experimental results show that it achieves accurate predictions compared to its competitors.
Testing is an integral part of software development. Current fast-paced system development has rendered traditional testing techniques obsolete; therefore, automated testing techniques are needed to keep up with such development speed. Model-based testing (MBT) is a technique that uses system models to generate and execute test cases automatically. It was identified that test data generation (TDG) in many existing model-based test case generation (MB-TCG) approaches is still manual. Automatic and effective TDG can further reduce testing cost while detecting more faults. This study proposes an automated TDG approach in MB-TCG using the extended finite state machine (EFSM) model. The proposed approach integrates MBT with combinatorial testing. The information available in an EFSM model and the boundary value analysis strategy are used to automate the domain input classifications, which were done manually in the existing approach. The results showed that the proposed approach was able to detect 6.62 percent more faults than conventional MB-TCG but at the same time generated 43 more tests. The proposed approach effectively detects faults, but further treatment of the generated tests, such as test case prioritization, should be applied to increase the effectiveness and efficiency of testing.
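Boundary value analysis, which the approach uses to automate domain input classification, picks test inputs at, just inside, and just outside the edges of a valid range. A minimal sketch; the guard condition below is a made-up example, not one of the paper's EFSM models:

```python
def boundary_values(lo, hi):
    """Boundary value analysis for a valid input domain [lo, hi]:
    values at, just inside, and just outside each boundary."""
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

# hypothetical guard on an EFSM transition: 1 <= amount <= 500
tests = boundary_values(1, 500)   # [0, 1, 2, 499, 500, 501]
```

The two out-of-range values (0 and 501) exercise the transition's rejection path, while the rest exercise acceptance near each edge.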
For rechargeable wireless sensor networks, limited energy storage capacity, dynamic energy supply, and low and dynamic duty cycles make it impractical to permanently maintain a fixed routing path for packet delivery from a source to a destination in a distributed scenario. Therefore, before data delivery, a sensor has to update its waking schedule continuously and share it with its neighbors, which leads to high energy expenditure for frequently re-establishing path links and low efficiency of energy utilization for collecting packets. In this work, we propose a maximum data generation rate routing protocol based on data flow control technology. A sensor neither shares its waking schedule with its neighbors nor caches the waking schedules of other sensors. Hence, the energy consumption for time synchronization, location information and waking schedule sharing is reduced significantly. The saved energy can be used to improve the data collection rate. Simulation shows our scheme is efficient at improving the packet generation rate in rechargeable wireless sensor networks.
By analyzing some existing test data generation methods, a new automated test data generation approach was presented. The linear predicate functions on a given path are used directly to construct a linear constraint system for the input variables. Only when a predicate function is nonlinear does its linear arithmetic representation need to be computed. If all predicate functions on the given path are linear, either the desired test data or a guarantee that the path is infeasible can be obtained from the solution of the constraint system. Otherwise, iterative refinement of the input is required to obtain the desired test data. Theoretical analysis and test results show that the approach is simple and effective, and requires less computation. The scheme can also be used to generate path-based test data for programs with arrays and loops.
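The idea of treating linear path predicates as a constraint system over the inputs can be sketched as a brute-force search over a small domain. A real implementation would hand the system to a linear programming solver; the predicates and domain below are invented for illustration:

```python
from itertools import product

def find_test_data(constraints, domain):
    """Search a small two-variable input domain for values satisfying all
    linear path predicates; returns a satisfying assignment, or None if the
    path is infeasible over this domain."""
    for x, y in product(domain, repeat=2):
        if all(c(x, y) for c in constraints):
            return x, y
    return None

# hypothetical path predicates, both linear in x and y:
#   branch 1: x + y > 10      branch 2: x - 2*y <= 0
path = [lambda x, y: x + y > 10, lambda x, y: x - 2 * y <= 0]
data = find_test_data(path, range(0, 20))   # first hit: (0, 11)
```

Returning `None` corresponds to the infeasible-path outcome the abstract mentions, restricted to the searched domain.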
The automatic generation of test data is a key step in realizing automated testing. Most automated testing tools for unit testing only provide test case execution drivers and cannot generate test data that meets coverage requirements. This paper presents an improved whale genetic algorithm for generating the test data required for MC/DC coverage in unit testing. The proposed algorithm introduces an elite retention strategy to keep the genetic algorithm from falling into iterative degradation. At the same time, the mutation threshold of the whale algorithm is introduced to balance the global exploration and local search capabilities of the genetic algorithm. The threshold is dynamically adjusted according to the diversity and evolution stage of the current population, which positively guides the evolution of the population. Finally, an improved crossover strategy is proposed to accelerate the convergence of the algorithm. The improved whale genetic algorithm is compared with the genetic algorithm, whale algorithm and particle swarm algorithm on two benchmark programs. The results show that the proposed algorithm generates test data faster than the comparison methods, provides better coverage with fewer evaluations, and has great advantages in generating test data.
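Elite retention, the strategy named above, means carrying the best individuals into the next generation unchanged, so the best fitness can never degrade between iterations. A toy sketch on a one-variable problem; the encoding, mutation operator and fitness function are illustrative only, not the paper's algorithm:

```python
import random

def evolve(fitness, pop, generations=50, elite=1):
    """Toy genetic search with elite retention: the top `elite` individuals
    survive each generation unchanged, so the best fitness never degrades."""
    random.seed(0)  # deterministic for the example
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elites = pop[:elite]
        # children: pick a parent among the top 5, mutate by -1/0/+1, clamp to [0, 31]
        children = [
            min(31, max(0, random.choice(pop[:5]) + random.choice([-1, 0, 1])))
            for _ in range(len(pop) - elite)
        ]
        pop = elites + children
    return max(pop, key=fitness)

fitness = lambda v: v * (31 - v)        # peak at v = 15 or 16
best = evolve(fitness, list(range(8)))  # initial best is 7, fitness 168
```

Without the `elites` slice, a run of bad mutations could lose the best solution found so far, which is the "iterative degradation" the elite retention strategy prevents.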
Many search-based algorithms have been successfully applied in several software engineering activities. Genetic algorithms (GAs) are the most used by scholars in scientific domains to solve software testing problems. They imitate the theory of natural selection and evolution. The harmony search algorithm (HSA) is one of the most recent search algorithms; it imitates the behavior of a musician finding the best harmony. Scholars have estimated the similarities and differences between genetic algorithms and the harmony search algorithm in diverse research domains. The test data generation process is a critical task in software validation. Unfortunately, no work has compared the performance of genetic algorithms and the harmony search algorithm in the test data generation process. This paper studies the similarities and differences between genetic algorithms and the harmony search algorithm based on the ability and speed of finding the required test data. The current research performs an empirical comparison of the HSA and the GAs, and then the significance of the results is estimated using the t-test. The study investigates the efficiency of the harmony search algorithm and the genetic algorithms according to (1) the time performance, (2) the significance of the generated test data, and (3) the adequacy of the generated test data to satisfy a given testing criterion. The results showed that the harmony search algorithm is significantly faster than the genetic algorithms, because the t-test showed that the p-value of the time values is 0.026 < α (where α is the significance level, 0.05, at the 95% confidence level). In contrast, there is no significant difference between the two algorithms in generating adequate test data, because the t-test showed that the p-value of the fitness values is 0.25 > α.
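The t statistic behind such a comparison can be reproduced in a few lines; Welch's form avoids assuming equal variances between the two samples. The runtime samples below are invented for illustration, not the study's measurements:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (e.g., HSA vs. GA test-generation times)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# hypothetical runtimes in seconds
hsa = [1.1, 1.3, 1.2, 1.0, 1.2]
ga = [1.9, 2.2, 2.0, 2.1, 1.8]
t = welch_t(hsa, ga)   # strongly negative: the HSA sample is faster
```

A large-magnitude t maps to a small p-value; in practice one would use a library routine (for example `scipy.stats.ttest_ind` with `equal_var=False`) to obtain the p-value directly.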
Dynamic numerical simulation of water conditions is useful for reservoir management. In remote semi-arid areas, however, the meteorological and hydrological time-series data needed for computation are not frequently measured and must be obtained from other information. This paper presents a case study of data generation for the computation of thermal conditions in the Joumine Reservoir, Tunisia. Data from the Wind Finder web site and daily sunshine duration at the nearest weather stations were used to generate cloud cover and solar radiation data based on meteorological correlations obtained in Japan, which is located at the same latitude as Tunisia. A time series of inflow water temperature was estimated from air temperature using a numerical filter expressed as a linear second-order differential equation. A numerical simulation using a vertical 2-D (two-dimensional) turbulent flow model for a stratified water body with the generated data successfully reproduced seasonal thermal conditions in the reservoir, which were monitored using a thermistor chain.
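A linear second-order differential equation driven by air temperature acts as a smoothing, lagging filter, which is why it can mimic the thermal inertia of inflow water. The sketch below uses an explicit Euler step on a critically damped form; the filter shape and time constant are assumptions for illustration, not the paper's calibrated filter:

```python
def second_order_filter(air_temp, dt=1.0, tau=5.0):
    """Explicit Euler integration of the critically damped filter
    tau^2 * y'' + 2*tau*y' + y = x, where x is air temperature and
    y approximates inflow water temperature (illustrative parameters)."""
    y, dy = air_temp[0], 0.0
    out = []
    for x in air_temp:
        ddy = (x - y - 2 * tau * dy) / tau ** 2
        dy += ddy * dt
        y += dy * dt
        out.append(y)
    return out

# a step change in daily air temperature from 10 to 20 degrees
water = second_order_filter([10.0] * 3 + [20.0] * 30)
```

The output rises smoothly toward 20 with a lag of a few time constants and no overshoot, the qualitative behavior expected of water temperature trailing air temperature.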
ZTE Corporation (ZTE) announced on February 16, 2009 that their complete line of mobile broadband data cards would support Windows 7 and be compliant with the Windows Network Driver Interface Specification 6.20 (NDIS 6.20).
This paper outlines research findings from an investigation into a range of options for generating vehicle data relevant to traffic management systems. Linking data from freight vehicles with traffic management systems stands to provide a number of benefits, including reducing congestion, improving safety, reducing freight vehicle trip times, informing alternative routing for freight vehicles, and informing transport planning and investment decisions. This paper explores a number of different methods to detect, classify, and track vehicles, each having strengths and weaknesses, and each with different levels of accuracy and associated costs. In terms of freight management applications, the key feature is the capability to track the position of the vehicle in real time. This can be done using a range of technologies that are either located on the vehicle, such as GPS (global positioning system) trackers and RFID (radio frequency identification) tags, or part of the network infrastructure, such as CCTV (closed-circuit television) cameras, satellites, mobile phone towers, Wi-Fi receivers and RFID readers. Technology in this space is advancing quickly, having started with a focus on infrastructure-based sensors and communications devices and more recently shifting to GPS and mobile devices. The paper concludes with an overview of considerations for how data from freight vehicles may interact with traffic management systems for mutual benefit. This new area of research and practice, which seeks to better manage traffic and prevent bottlenecks and congestion while delivering tangible benefits to freight companies, stands to be of great interest in the coming decade. This research was developed with funding and support provided by Australia’s SBEnrc (Sustainable Built Environment National Research Centre) and its partners.
To solve the emerging complex optimization problems, multi-objective optimization algorithms are needed. By introducing the surrogate model for approximate fitness calculation, the multi-objective firefly algorithm with surrogate model (MOFA-SM) is proposed in this paper. Firstly, the population was initialized according to the chaotic mapping. Secondly, the external archive was constructed based on the preference sorting, with the lightweight clustering pruning strategy. In the process of evolution, the elite solutions selected from the archive were used to guide the movement to search optimal solutions. Simulation results show that the proposed algorithm can achieve better performance in terms of convergence iteration and stability.
Discovering floating wastes, especially bottles on water, is a crucial research problem in environmental hygiene. Nevertheless, real-world applications often face challenges such as interference from irrelevant objects and the high cost associated with data collection. Consequently, devising algorithms capable of accurately localizing specific objects within a scene when annotated data is limited remains a formidable challenge. To solve this problem, this paper proposes an object-discovery-by-request problem setting and a corresponding algorithmic framework. The problem setting aims to identify specified objects in scenes, and the framework comprises pseudo data generation and an object discovery by request network. Pseudo data generation produces images resembling natural scenes through various data augmentation rules, using a small number of object samples and scene images. The object discovery by request network uses the pre-trained Vision Transformer (ViT) model as the backbone, employs object-centric methods to learn the latent representations of foreground objects, and applies patch-level reconstruction constraints to the model. During the validation phase, we use the generated pseudo datasets as training sets and evaluate the performance of our model on the original test sets. Experiments show that our method achieves state-of-the-art performance on the Unmanned Aerial Vehicles-Bottle Detection (UAV-BD) dataset and the self-constructed dataset Bottle, especially in multi-object scenarios.
The phenomenon of sub-synchronous oscillation (SSO) poses significant threats to the stability of power systems. The advent of artificial intelligence (AI) has revolutionized SSO research through data-driven methodologies, which require a substantial collection of data for effective training, a requirement frequently unfulfilled in practical power systems due to limited data availability. To address the critical issue of data scarcity in training AI models, this paper proposes a novel transfer-learning-based (TL-based) Wasserstein generative adversarial network (WGAN) approach for synthetic data generation of SSO in wind farms. To improve the capability of the WGAN to capture the bidirectional temporal features inherent in oscillation data, a bidirectional long short-term memory (BiLSTM) layer is introduced. Additionally, to address the training instability caused by few-shot learning scenarios, the discriminator is augmented with mini-batch discrimination (MBD) layers and gradient penalty (GP) terms. Finally, TL is leveraged to fine-tune the model, effectively bridging the gap between the training data and real-world system data. To evaluate the quality of the synthetic data, two indexes are proposed based on dynamic time warping (DTW) and frequency domain analysis, followed by a classification task. Case studies demonstrate the effectiveness of the proposed approach in swiftly generating a large volume of synthetic SSO data, thereby significantly mitigating the issue of data scarcity prevalent in SSO research.
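One of the two quality indexes is based on dynamic time warping, which scores how closely one waveform tracks another while tolerating local timing shifts. A minimal DTW distance, shown on toy sequences rather than actual SSO data:

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences:
    minimum total |a[i] - b[j]| cost over all monotone alignments."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignment moves
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

real = [0, 1, 2, 1, 0]
synthetic = [0, 1, 1, 2, 1, 0]   # same shape, one extra sample
score = dtw(real, synthetic)     # 0.0: identical up to time warping
```

A low DTW distance between synthetic and real oscillation records indicates that the generator reproduces the waveform shape even when cycle timing differs slightly.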
Parkinson’s disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. Using deep learning algorithms is believed to further enhance performance; nevertheless, it is challenging due to the small-scale and imbalanced nature of PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) that automates the feature extraction process using a CNN and extends the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of classification biased towards the majority class (healthy candidates in our setting). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model’s performance. In the performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison is evaluated from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves sensitivity and specificity by 4.05%–4.72% and 4.96%–5.86%, respectively, and the effectiveness of the CNN-DSVM algorithm, which improves sensitivity by 1.24%–57.4% and specificity by 1.04%–163% and reduces biased detection towards the majority class. Ablation experiments confirm the effectiveness of the individual components. Two future research directions are also suggested.
Software testing has been attracting a lot of attention for effective software development. In a model-driven approach, the Unified Modelling Language (UML) is a conceptual modelling approach for obligations and other features of the system. Specialized tools translate these models into other software artifacts such as code, test data and documentation. The generation of test cases permits the appropriate test data to be determined that have the aptitude to ascertain the requirements. This paper focuses on optimizing the test data obtained from UML activity and state chart diagrams using a Basic Genetic Algorithm (BGA). For generating the test cases, both diagrams were converted into their corresponding intermediate graphical forms, namely the Activity Diagram Graph (ADG) and the State Chart Diagram Graph (SCDG). Both graphs were then joined to create a single graph known as the Activity State Chart Diagram Graph (ASCDG). Next, the ASCDG was optimized using the BGA to generate the test data. A case study involving a withdrawal from the automated teller machine (ATM) of a bank was employed to demonstrate the approach. The approach successfully identified defects in various ATM functions such as messaging and operation.
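Joining the ADG and SCDG into one ASCDG amounts to taking the union of two directed graphs over a shared node set. A sketch with made-up ATM node names (illustrative only, not the paper's actual graphs):

```python
def merge_graphs(g1, g2):
    """Union of two directed graphs given as adjacency dicts
    (node -> set of successors); duplicate edges collapse."""
    merged = {}
    for g in (g1, g2):
        for node, succs in g.items():
            merged.setdefault(node, set()).update(succs)
    return merged

# hypothetical fragments of an activity graph and a state chart graph
adg = {"start": {"check_pin"}, "check_pin": {"withdraw"}}
scdg = {"check_pin": {"reject"}, "withdraw": {"dispense"}}
ascdg = merge_graphs(adg, scdg)
```

Nodes present in both inputs (here `check_pin`) end up with the union of their successors, so paths through the combined graph can cover behavior from either diagram.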
A full-parallax light field is captured by a small-scale 3D image scanning system and applied to holographic display. A vertical camera array is scanned horizontally to capture full-parallax imagery, and the vertical views between cameras are interpolated by a depth image-based rendering technique. An improved technique for depth estimation reduces the estimation error, and a high-density light field is obtained. The captured data are employed for the calculation of a computer hologram using a ray-sampling plane. This technique enables high-resolution display even in a deep 3D scene, although the hologram is calculated from ray information, and thus it exploits an important advantage of holographic 3D display.
At present, deep learning has been applied successfully in many fields. However, due to the high complexity of the hypothesis space, numerous training samples are usually required to ensure the reliability of empirical risk minimization. Therefore, training a classifier with a small number of training examples is a challenging task. From a biological point of view, based on the assumption that rich prior knowledge and analogical association enable human beings to quickly distinguish novel things from a few or even one example, we propose a dynamic analogical association algorithm that lets the model use only a few labeled samples for classification. Specifically, the algorithm searches prior knowledge for knowledge structures similar to existing tasks based on manifold matching, and combines sampling distributions to generate offsets instead of two sample points, thereby ensuring high confidence and a significant contribution to the classification. Comparative results on two common benchmark datasets substantiate the superiority of the proposed method over existing data generation approaches for few-shot learning, and the effectiveness of the algorithm is demonstrated through ablation experiments.
The purpose of this paper is to describe the challenges of elite interviewing and identify key factors that ensure success for qualitative researchers. The authors draw on their own experiences of interviewing powerful and influential members of government and various professions, as well as tips from experienced researchers in the fields of social sciences and health. They identify five essential steps to successful interviewing: (1) identifying the key informants; (2) negotiating access; (3) background research and preparation; (4) site selection, presentation, questioning approach, and execution; and (5) follow-up. Each of these is discussed in detail. The authors argue that the most important quality for the elite interviewer is self-management, which involves developing an individual style of interviewing that is responsive to setbacks and unexpected opportunities.
Funding: funded by the China Scholarship Council (grant number: 202004910422).
Funding: supported via funding from Prince Sattam bin Abdulaziz University (PSAU/2025/R/1446), Princess Nourah bint Abdulrahman University (PNURSP2025R300), and Prince Sultan University.
Abstract: Deep neural networks provide accurate results for most applications. However, they need a big dataset to train properly, and providing one is a significant challenge in most applications. Image augmentation refers to techniques that increase the amount of image data. Common operations for image augmentation include changes in illumination, rotation, contrast, size, viewing angle, and others. Recently, Generative Adversarial Networks (GANs) have been employed for image generation. However, like image augmentation methods, GAN approaches can only generate images that are similar to the original images; therefore, they also cannot generate new classes of data. Texture images present more challenges than general images, and generating textures is more complex than creating other types of images. This study proposes a gradient-based deep neural network method that generates a new class of texture. It is possible to rapidly generate new classes of textures using different kernels from pre-trained deep networks. After generating new textures for each class, the number of textures increases through image augmentation. During this process, several techniques are proposed to automatically remove incomplete and similar textures that are created. The proposed method is faster than some well-known generative networks by around 4 to 10 times. In addition, the quality of the generated textures surpasses that of these networks, and the method can generate textures that surpass those of some GANs and parametric models in certain image quality metrics. It can provide a big texture dataset to train deep networks. A new big texture dataset, called BigTex, was created artificially using the proposed method. This dataset is approximately 2 GB in size and comprises 30,000 textures, each 150×150 pixels, organized into 600 classes; it has been uploaded to the Kaggle site and Google Drive. Compared to other texture datasets, the proposed dataset is the largest and can serve as a comprehensive texture dataset for training more powerful deep neural networks and mitigating overfitting.
Funding: funded by the "Management Model Innovation of Chinese Enterprises" Research Project, Institute of Industrial Economics, CASS (Grant No. 2019-gjs-06), and a Project under the Graduate Student Scientific and Research Innovation Support Program, University of Chinese Academy of Social Sciences (Graduate School) (Grant No. 2022-KY-118).
Abstract: This paper explores the data theory of value along the line of reasoning from epochal characteristics of data to theoretical innovation and paradigmatic transformation. Through a comparison of hard and soft factors and observation of data's peculiar features, it concludes that data have the epochal characteristics of non-competitiveness and non-exclusivity, decreasing marginal cost and increasing marginal return, non-physical and intangible form, and non-finiteness and non-scarcity. It is these epochal characteristics of data that undermine the traditional theory of value and innovate the "production-exchange" theory, including data value generation, data value realization, data value rights determination, and data value pricing. From the perspective of data value generation, the levels of data quality, processing, use and connectivity, data application scenarios, and data openness will influence data value. From the perspective of data value realization, data, as independent factors of production, show a value creation effect, create a value multiplier effect by empowering other factors of production, and substitute other factors of production to create a zero-price effect. From the perspective of data value rights determination, based on the theory of property, the tragedy of the private outweighs the comedy of the private with respect to data, and based on the theory of the sharing economy, the comedy of the commons outweighs the tragedy of the commons with respect to data. From the perspective of data pricing, standardized data products can be priced according to physical product attributes, and non-standardized data products can be priced according to virtual product attributes. Based on the epochal characteristics of data and theoretical innovation, the "production-exchange" paradigm has undergone a transformation from "using tangible factors to produce tangible products and exchanging tangible products for tangible products" to "using intangible factors to produce tangible products and exchanging intangible products for tangible products", and ultimately to "using intangible factors to produce intangible products and exchanging intangible products for intangible products".
Funding: financially supported by the National Natural Science Foundation of China under Grants 62372369, 52107229, and 62272383, the Key Research and Development Program of Shaanxi Province (2024GX-YBXM-442), and the Natural Science Basic Research Program of Shaanxi Province (2024JC-YBMS-477).
Abstract: Predicting the lithium-ion (Li-ion) battery degradation trajectory in the early phase is of great importance for arranging the maintenance of battery energy storage systems. However, under different operating conditions, Li-ion batteries present distinct degradation patterns, and it is challenging to capture negligible capacity fade in early cycles. Although data-driven methods show promising performance, insufficient data is still a big issue, since ageing experiments on batteries are slow and expensive. In this study, we propose twin autoencoders integrated into a two-stage method to predict degradation trajectories from early cycles. The two-stage method predicts the degradation from coarse to fine. The twin autoencoders serve as a feature extractor and a synthetic data generator, respectively. Ultimately, a learning procedure based on the long short-term memory (LSTM) network is designed to hybridize the learning process between the real and synthetic data. The performance of the proposed method is verified on three datasets, and the experimental results show that it achieves accurate predictions compared to its competitors.
Funding: The research was funded by Universiti Teknologi Malaysia (UTM) and the Malaysian Ministry of Higher Education (MOHE) under the Industry-International Incentive Grant Scheme (IIIGS) (Vote Numbers: Q.J130000.3651.02M67 and Q.J130000.3051.01M86) and the Academic Fellowship Scheme (SLAM).
Abstract: Testing is an integral part of software development. Current fast-paced system development has rendered traditional testing techniques obsolete; therefore, automated testing techniques are needed to keep up with such development speed. Model-based testing (MBT) is a technique that uses system models to generate and execute test cases automatically. It was identified that test data generation (TDG) in many existing model-based test case generation (MB-TCG) approaches was still manual. An automatic and effective TDG can further reduce testing cost while detecting more faults. This study proposes an automated TDG approach in MB-TCG using the extended finite state machine (EFSM) model. The proposed approach integrates MBT with combinatorial testing. The information available in an EFSM model and the boundary value analysis strategy are used to automate the domain input classifications, which were done manually in the existing approach. The results showed that the proposed approach was able to detect 6.62 percent more faults than the conventional MB-TCG, but at the same time generated 43 more tests. The proposed approach effectively detects faults, but a further treatment of the generated tests, such as test case prioritization, should be applied to increase the effectiveness and efficiency of testing.
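Boundary value analysis, which the approach above uses to automate domain input classification, picks test values at and just inside the edges of an input domain, where off-by-one faults cluster. A minimal sketch under the usual five-value convention; the guard range is a hypothetical example, not taken from the study:

```python
def boundary_values(lo, hi, nominal=None):
    """Classic boundary value analysis for an integer domain [lo, hi]:
    min, min+1, a nominal interior value, max-1, max."""
    nominal = nominal if nominal is not None else (lo + hi) // 2
    return [lo, lo + 1, nominal, hi - 1, hi]

# Hypothetical EFSM transition guard: fires when 0 <= balance <= 5000
tests = boundary_values(0, 5000)   # [0, 1, 2500, 4999, 5000]
```

In an EFSM setting, each guard predicate on a transition contributes one such domain, and the per-domain value sets are then combined combinatorially into test inputs.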
Funding: This work was supported by the National Natural Science Fund of China (Grant No. 31670554), the Natural Science Foundation of Jiangsu Province of China (Grant No. BK20161527), the China Postdoctoral Science Foundation (Grant Nos. 2018T110505, 2017M611828), and the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions. The authors wish to express their appreciation to the reviewers for their helpful suggestions, which greatly improved the presentation of this paper.
Abstract: For rechargeable wireless sensor networks, limited energy storage capacity, dynamic energy supply, and low and dynamic duty cycles make it impractical to maintain a fixed routing path for packet delivery from a source to a destination in a distributed scenario. Therefore, before data delivery, a sensor has to update its waking schedule continuously and share it with its neighbors, which leads to high energy expenditure for frequently re-establishing path links and low efficiency of energy utilization when collecting packets. In this work, we propose a maximum data generation rate routing protocol based on data flow control technology. A sensor neither shares its waking schedule with its neighbors nor caches the waking schedules of other sensors. Hence, the energy consumed for time synchronization, location information, and waking schedule sharing is reduced significantly, and the saved energy can be used to improve the data collection rate. Simulations show that our scheme efficiently improves the packet generation rate in rechargeable wireless sensor networks.
Abstract: By analyzing some existing test data generation methods, a new automated test data generation approach is presented. The linear predicate functions on a given path are used directly to construct a linear constraint system for the input variables; only when a predicate function is nonlinear does its linear arithmetic representation need to be computed. If all the predicate functions on the given path are linear, either the desired test data or a guarantee that the path is infeasible can be obtained from the solution of the constraint system. Otherwise, iterative refinement of the input is required to obtain the desired test data. Theoretical analysis and test results show that the approach is simple, effective, and computationally cheap. The scheme can also be used to generate path-based test data for programs with arrays and loops.
Abstract: The automatic generation of test data is a key step in realizing automated testing. Most automated testing tools for unit testing only provide test case execution drivers and cannot generate test data that meets coverage requirements. This paper presents an improved whale genetic algorithm for generating the test data required for MC/DC coverage in unit testing. The proposed algorithm introduces an elite retention strategy to keep the genetic algorithm from falling into iterative degradation. At the same time, the mutation threshold of the whale algorithm is introduced to balance the global exploration and local search capabilities of the genetic algorithm. The threshold is dynamically adjusted according to the diversity and evolution stage of the current population, which positively guides the evolution of the population. Finally, an improved crossover strategy is proposed to accelerate the convergence of the algorithm. The improved whale genetic algorithm is compared with the genetic algorithm, the whale algorithm, and the particle swarm algorithm on two benchmark programs. The results show that the proposed algorithm generates test data faster than the comparison methods, provides better coverage with fewer evaluations, and has great advantages in test data generation.
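The elite retention strategy mentioned above can be shown in isolation: the best individuals are copied unchanged into the next generation, so the best fitness can never degrade between iterations. A minimal sketch on the toy one-max problem; all names, rates, and sizes are illustrative, and the paper's full algorithm additionally uses the whale-style dynamic mutation threshold and an improved crossover:

```python
import random

def evolve(fitness, pop_size=30, genes=8, generations=60, elite_k=2, seed=1):
    """Minimal elitist GA on bit-strings: the top elite_k individuals survive
    unchanged each generation, preventing iterative degradation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = [ind[:] for ind in pop[:elite_k]]          # elite retention
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)    # truncation selection
            cut = rng.randrange(1, genes)                # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                       # bit-flip mutation
                i = rng.randrange(genes)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve(sum)   # "one-max": fitness = number of 1-bits
```

Without the elite copies, a bad crossover generation can lose the best solution found so far; with them, the best-so-far curve is monotone.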
Abstract: Many search-based algorithms have been successfully applied in several software engineering activities. Genetic algorithms (GAs), which imitate the theory of natural selection and evolution, are the most used by scholars in scientific domains to solve software testing problems. The harmony search algorithm (HSA), one of the most recent search algorithms, imitates the behavior of a musician searching for the best harmony. Scholars have estimated the similarities and differences between genetic algorithms and the harmony search algorithm in diverse research domains. The test data generation process represents a critical task in software validation; unfortunately, no prior work compares the performance of genetic algorithms and the harmony search algorithm in this process. This paper studies the similarities and differences between genetic algorithms and the harmony search algorithm based on the ability and speed of finding the required test data. The current research performs an empirical comparison of the HSA and the GAs, and then the significance of the results is estimated using the t-test. The study investigates the efficiency of the harmony search algorithm and the genetic algorithms according to (1) the time performance, (2) the significance of the generated test data, and (3) the adequacy of the generated test data to satisfy a given testing criterion. The results showed that the harmony search algorithm is significantly faster than the genetic algorithms, because the t-test showed that the p-value of the time values is 0.026 < α (where α is the significance level = 0.05 at the 95% confidence level). In contrast, there is no significant difference between the two algorithms in generating adequate test data, because the t-test showed that the p-value of the fitness values is 0.25 > α.
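The core loop of harmony search improvises each component of a new solution either from the harmony memory (with probability HMCR, optionally pitch-adjusted with probability PAR) or at random, then replaces the worst stored harmony if the new one is better. A minimal sketch on a toy sphere function; the parameter values are illustrative defaults, not the paper's settings:

```python
import random

def harmony_search(cost, dim, bounds, hm_size=10, hmcr=0.9, par=0.3,
                   bw=0.1, iters=500, seed=0):
    """Minimal harmony search minimizing cost over [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hm_size)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                # memory consideration
                v = rng.choice(hm)[d]
                if rng.random() < par:             # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                                  # random selection
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        worst = max(range(hm_size), key=lambda i: cost(hm[i]))
        if cost(new) < cost(hm[worst]):            # replace worst harmony
            hm[worst] = new
    return min(hm, key=cost)

best = harmony_search(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))
```

The per-component improvisation is the main structural difference from a GA, which recombines whole parents; this is one reason the two algorithms trade off speed and exploration differently.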
Abstract: Dynamic numerical simulation of water conditions is useful for reservoir management. In remote semi-arid areas, however, the meteorological and hydrological time-series data needed for computation are not frequently measured and must be obtained using other information. This paper presents a case study of data generation for the computation of thermal conditions in the Joumine Reservoir, Tunisia. Data from the Wind Finder web site and daily sunshine duration at the nearest weather stations were utilized to generate cloud cover and solar radiation data based on meteorological correlations obtained in Japan, which is located at the same latitude as Tunisia. A time series of inflow water temperature was estimated from air temperature using a numerical filter expressed as a linear second-order differential equation. A numerical simulation using a vertical 2-D (two-dimensional) turbulent flow model for a stratified water body with the generated data successfully reproduced seasonal thermal conditions in the reservoir, which were monitored using a thermistor chain.
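The inflow-temperature filter described above is a linear second-order differential equation driven by air temperature. One plausible discretization behaves like a lagged smoother; the coefficients and the explicit Euler scheme here are assumptions for illustration, not the paper's calibrated filter:

```python
def smooth_series(air_temp, dt=1.0, tau=5.0, zeta=1.0):
    """Explicit-Euler integration of T'' + 2*zeta*w*T' + w^2*T = w^2*T_air
    with w = 1/tau: the output lags and smooths the forcing series, mimicking
    how inflow water temperature responds to air temperature."""
    w = 1.0 / tau
    T, dT = air_temp[0], 0.0
    out = []
    for Ta in air_temp:
        ddT = w * w * (Ta - T) - 2.0 * zeta * w * dT
        dT += ddT * dt
        T += dT * dt
        out.append(T)
    return out

# Step in air temperature from 10 to 20 degC: water temperature follows gradually
water = smooth_series([10.0] * 5 + [20.0] * 60)
```

With zeta = 1 (critical damping) the response rises toward the forcing without overshoot, which is the qualitative behavior wanted from a water-temperature surrogate.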
Abstract: ZTE Corporation (ZTE) announced on February 16, 2009 that their complete line of mobile broadband data cards would support Windows 7 and be compliant with the Windows Network Driver Interface Specification 6.20 (NDIS 6.20).
Funding: funding and support provided by Australia's SBEnrc (Sustainable Built Environment National Research Centre) and its partners.
Abstract: This paper outlines research findings from an investigation into a range of options for generating vehicle data relevant to traffic management systems. Linking data from freight vehicles with traffic management systems stands to provide a number of benefits. These include reducing congestion, improving safety, reducing freight vehicle trip times, informing alternative routing for freight vehicles, and informing transport planning and investment decisions. This paper explores a number of different methods to detect, classify, and track vehicles, each having strengths and weaknesses, and each with different levels of accuracy and associated costs. In terms of freight management applications, the key feature is the capability to track the position of the vehicle in real time. This can be done using a range of technologies that either are located on the vehicle, such as GPS (Global Positioning System) trackers and RFID (Radio Frequency Identification) tags, or are part of the network infrastructure, such as CCTV (Closed Circuit Television) cameras, satellites, mobile phone towers, Wi-Fi receivers, and RFID readers. Technology in this space is advancing quickly, having started with a focus on infrastructure-based sensors and communications devices and more recently shifting to GPS and mobile devices. The paper concludes with an overview of considerations for how data from freight vehicles may interact with traffic management systems for mutual benefit. This new area of research and practice, which seeks to balance the needs of traffic management systems to better manage traffic and prevent bottlenecks and congestion while delivering tangible benefits to freight companies, stands to be of great interest in the coming decade.
Abstract: To solve emerging complex optimization problems, multi-objective optimization algorithms are needed. By introducing a surrogate model for approximate fitness calculation, the multi-objective firefly algorithm with surrogate model (MOFA-SM) is proposed in this paper. Firstly, the population was initialized according to a chaotic mapping. Secondly, the external archive was constructed based on preference sorting, with a lightweight clustering pruning strategy. In the process of evolution, elite solutions selected from the archive were used to guide the movement in search of optimal solutions. Simulation results show that the proposed algorithm can achieve better performance in terms of convergence iteration and stability.
Abstract: Discovering floating waste, especially bottles on water, is a crucial research problem in environmental hygiene. Nevertheless, real-world applications often face challenges such as interference from irrelevant objects and the high cost associated with data collection. Consequently, devising algorithms capable of accurately localizing specific objects within a scene when annotated data is limited remains a formidable challenge. To solve this problem, this paper proposes an object discovery by request problem setting and a corresponding algorithmic framework. The proposed problem setting aims to identify specified objects in scenes, and the associated algorithmic framework comprises pseudo-data generation and an object discovery by request network. Pseudo-data generation produces images resembling natural scenes through various data augmentation rules, using a small number of object samples and scene images. The network for object discovery by request uses the pre-trained Vision Transformer (ViT) model as the backbone, employs object-centric methods to learn the latent representations of foreground objects, and applies patch-level reconstruction constraints to the model. During the validation phase, we use the generated pseudo datasets as training sets and evaluate the performance of our model on the original test sets. Experiments have proved that our method achieves state-of-the-art performance on the Unmanned Aerial Vehicles-Bottle Detection (UAV-BD) dataset and the self-constructed dataset Bottle, especially in multi-object scenarios.
Funding: supported by the National Natural Science Foundation of China (No. 52377084) and the Zhishan Young Scholar Program of Southeast University, China (No. 2242024RCB0019).
Abstract: The phenomenon of sub-synchronous oscillation (SSO) poses significant threats to the stability of power systems. The advent of artificial intelligence (AI) has revolutionized SSO research through data-driven methodologies, which require a substantial collection of data for effective training, a requirement frequently unfulfilled in practical power systems due to limited data availability. To address the critical issue of data scarcity in training AI models, this paper proposes a novel transfer-learning-based (TL-based) Wasserstein generative adversarial network (WGAN) approach for synthetic data generation of SSO in wind farms. To improve the capability of the WGAN to capture the bidirectional temporal features inherent in oscillation data, a bidirectional long short-term memory (BiLSTM) layer is introduced. Additionally, to address the training instability caused by few-shot learning scenarios, the discriminator is augmented with mini-batch discrimination (MBD) layers and gradient penalty (GP) terms. Finally, TL is leveraged to fine-tune the model, effectively bridging the gap between the training data and real-world system data. To evaluate the quality of the synthetic data, two indexes are proposed based on dynamic time warping (DTW) and frequency-domain analysis, followed by a classification task. Case studies demonstrate the effectiveness of the proposed approach in swiftly generating a large volume of synthetic SSO data, thereby significantly mitigating the issue of data scarcity prevalent in SSO research.
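One of the proposed quality indexes is based on dynamic time warping (DTW), which scores the similarity between a synthetic and a real oscillation waveform while tolerating time shifts. A minimal textbook DTW sketch (the paper's exact index construction is not reproduced here):

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences,
    with absolute difference as the local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# A time-shifted copy of an oscillation stays close under DTW despite the lag
ref = [0, 1, 2, 1, 0, -1, -2, -1, 0]
shifted = [0, 0, 1, 2, 1, 0, -1, -2, -1]
```

A lock-step metric such as pointwise Euclidean distance would heavily penalize the one-sample lag above, whereas DTW aligns the two waveforms first, which is why it suits oscillation-shape comparison.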
Funding: The work described in this paper was fully supported by a grant from Hong Kong Metropolitan University (RIF/2021/05).
Abstract: Parkinson's disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. It is believed that using deep learning algorithms further enhances performance; nevertheless, this is challenging due to the small-scale and imbalanced nature of PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) to automate the feature extraction process using a CNN and extend the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of biased classification towards the majority class (healthy candidates in our consideration). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model's performance. In the performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison is evaluated from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. The results reveal the effectiveness of the IGAN algorithm, which improves the sensitivity and specificity by 4.05%–4.72% and 4.96%–5.86%, respectively, and the effectiveness of the CNN-DSVM algorithm, which improves the sensitivity by 1.24%–57.4%, improves the specificity by 1.04%–163%, and reduces biased detection towards the majority class. Ablation experiments confirm the effectiveness of the individual components. Two future research directions are also suggested.
Funding: support from the Deanship of Scientific Research, University of Hail, Saudi Arabia, through the project Ref. (RG-191315).
Abstract: Software testing has been attracting a lot of attention for effective software development. In the model-driven approach, the Unified Modelling Language (UML) is a conceptual modelling approach for obligations and other features of the system in a model-driven methodology. Specialized tools interpret these models into other software artifacts such as code, test data, and documentation. The generation of test cases permits the appropriate test data to be determined that have the aptitude to ascertain the requirements. This paper focuses on optimizing the test data obtained from UML activity and state chart diagrams by using a Basic Genetic Algorithm (BGA). For generating the test cases, both diagrams were converted into their corresponding intermediate graphical forms, namely the Activity Diagram Graph (ADG) and the State Chart Diagram Graph (SCDG). Both graphs were then joined to create a single graph known as the Activity State Chart Diagram Graph (ASCDG). Next, the ASCDG was optimized using the BGA to generate the test data. A case study involving a withdrawal from the automated teller machine (ATM) of a bank was employed to demonstrate the approach. The approach successfully identified defects in various ATM functions such as messaging and operation.
Funding: partly supported by the JSPS Grant-in-Aid for Scientific Research #17300032
Abstract: Full-parallax light-field is captured by a small-scale 3D image scanning system and applied to holographic display. A vertical camera array is scanned horizontally to capture full-parallax imagery, and the vertical views between cameras are interpolated by a depth image-based rendering technique. An improved technique for depth estimation reduces the estimation error, and a high-density light-field is obtained. The captured data are employed for the calculation of a computer hologram using the ray-sampling plane. This technique enables high-resolution display even in a deep 3D scene, although the hologram is calculated from ray information, and thus it makes use of an important advantage of holographic 3D display.
Funding: This work was supported by the National Natural Science Foundation of China (No. 61402537), the Sichuan Science and Technology Program (Nos. 2019ZDZX0006, 2020YFQ0056), the West Light Foundation of the Chinese Academy of Sciences (201899), the Talents Program of the Sichuan Provincial Party Committee Organization Department, and the Science and Technology Service Network Initiative (KFJ-STS-QYZD-2021-21-001).
Abstract: At present, deep learning has been well applied in many fields. However, due to the high complexity of the hypothesis space, numerous training samples are usually required to ensure the reliability of minimizing empirical risk. Therefore, training a classifier with a small number of training examples is a challenging task. From a biological point of view, based on the assumption that rich prior knowledge and analogical association enable human beings to quickly distinguish novel things from a few or even one example, we propose a dynamic analogical association algorithm that lets the model use only a few labeled samples for classification. Specifically, the algorithm searches prior knowledge for knowledge structures similar to the existing task based on manifold matching, and combines sampling distributions to generate offsets instead of two sample points, thereby ensuring high confidence and a significant contribution to classification. Comparative results on two common benchmark datasets substantiate the superiority of the proposed method over existing data generation approaches for few-shot learning, and the effectiveness of the algorithm is proved through ablation experiments.