Funding: the Key National Natural Science Foundation of China (No. U1864211); the National Natural Science Foundation of China (No. 11772191); the Natural Science Foundation of Shanghai (No. 21ZR1431500).
Abstract: Industrial data mining usually deals with data from different sources. These heterogeneous datasets describe the same object from different views. However, samples from some of the datasets may be lost, so the remaining samples no longer correspond one-to-one. Mismatched datasets caused by missing samples make the industrial data unavailable for further machine learning. To align the mismatched samples, this article presents a cooperative iteration matching method (CIMM) based on modified dynamic time warping (DTW). The proposed method regards the sequentially accumulated industrial data as time series, and mismatched samples are aligned by the DTW. In addition, dynamic constraints are applied to the warping distance of the DTW process to make the alignment more efficient. A series of models is then trained iteratively with the accumulated samples. Several groups of numerical experiments covering different missing patterns and missing locations are designed and analyzed to demonstrate the effectiveness and applicability of the proposed method.
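For a concrete picture of the alignment step, the sketch below implements DTW with a banded warping-path constraint in plain Python. The fixed band is only a stand-in for the paper's dynamic warping-distance constraints, and the cooperative iteration around the aligner is not reproduced.

```python
import numpy as np

def constrained_dtw(a, b, window=10):
    """Dynamic time warping with a Sakoe-Chiba band constraint.

    Aligns two 1-D series and returns the cumulative warping distance;
    the band limits |i - j| <= window, one simple way to restrict the
    warping path in the spirit of the constraints the abstract describes.
    """
    n, m = len(a), len(b)
    window = max(window, abs(n - m))          # band must cover the diagonal
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - window), min(m, i + window) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Example: align two mismatched process sequences of different lengths.
x = np.sin(np.linspace(0.0, 6.0, 80))
y = np.sin(np.linspace(0.3, 6.3, 70))         # shifted, shorter series
print(constrained_dtw(x, y, window=15))
```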
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 12272259 and 52005148).
Abstract: An intelligent diagnosis method based on self-adaptive Wasserstein dual generative adversarial networks and feature fusion is proposed to address problems commonly faced by rolling bearings, such as insufficient sample size and incomplete fault feature extraction, which lead to low diagnostic accuracy. Initially, dual models of the Wasserstein deep convolutional generative adversarial network incorporating a gradient penalty (1D-2DWDCGAN) are constructed to augment the original dataset. A self-adaptive loss-threshold control training strategy is introduced, establishing a self-adaptive balancing mechanism for stable model training. Subsequently, a diagnostic model based on multidimensional feature fusion is designed, wherein complex features from various dimensions are extracted, merging the original signal waveform features, structured features, and time-frequency features into a deep composite feature representation spanning multiple dimensions and scales; thus, efficient and accurate small-sample fault diagnosis is facilitated. Finally, experiments on the Case Western Reserve University bearing fault dataset and the fault simulation experimental platform dataset of this research group show that the method effectively supplements the dataset and remarkably improves diagnostic accuracy, which reached 99.94% and 99.87% after data augmentation in the two experimental environments, respectively. In addition, a robustness analysis of the diagnostic accuracy under different noise backgrounds verifies the method's good generalization performance.
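The gradient-penalty term that stabilizes Wasserstein GAN training is standard and can be written compactly; below is a generic PyTorch sketch. The paper's dual 1D/2D generator-critic setup is not reproduced, and the critic here is any callable scoring module.

```python
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    """WGAN-GP penalty: pushes the critic's gradient norm toward 1
    on random interpolates between real and generated samples.
    """
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()

# Critic objective: E[D(fake)] - E[D(real)] + lambda_gp * gradient_penalty(...)
```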
Funding: supported by grants from the National Natural Science Foundation of China (grant numbers: 82370195, 82270203, 81770211); the Fundamental Research Funds for the Central Universities (grant number: 2022CDJYGRH-001); the Chongqing Technology Innovation and Application Development Special Key Project (grant number: CSTB2024TIAD-KPX0031).
Abstract: Flow cytometry (FCM), characterized by its simplicity, rapid processing, multiparameter analysis, and high sensitivity, is widely used in the diagnosis, treatment, and prognosis of hematological malignancies. FCM testing of tissue samples not only aids in diagnosing and classifying hematological cancers but also enables the detection of solid tumors. Its ability to detect numerous marker parameters from small samples is particularly useful when dealing with limited cell quantities, such as in fine-needle biopsy samples. This attribute not only addresses the challenge posed by small sample sizes but also boosts the sensitivity of tumor cell detection. The significance of FCM in clinical and pathological applications continues to grow. To standardize the use of FCM in detecting hematological malignant cells in tissue samples and to improve quality control during the detection process, experts from the Cell Analysis Professional Committee of the Chinese Society of Biotechnology jointly drafted and agreed upon this consensus. The consensus was formulated based on the current literature and the clinical practices of experts across clinical, laboratory, and pathological fields in China. It outlines a comprehensive workflow for FCM-based detection of hematological malignancies in tissue samples, including report content, interpretation, quality control, and key considerations. Additionally, it provides recommendations on antibody panel design and analytical approaches for enhancing FCM tests, particularly in cases with limited sample sizes.
Abstract: In the task of Facial Expression Recognition (FER), data uncertainty has been a critical factor affecting performance, typically arising from the ambiguity of facial expressions, low-quality images, and the subjectivity of annotators. Tracking the training history reveals that misclassified samples often exhibit high confidence and excessive uncertainty in the early stages of training. To address this issue, we propose an uncertainty-based robust sample selection strategy that combines confidence error with RandAugment to improve image diversity, effectively reducing the overfitting caused by uncertain samples during deep learning model training. To validate the effectiveness of the proposed method, extensive experiments were conducted on public FER benchmarks. The accuracies obtained were 89.08% on RAF-DB, 63.12% on AffectNet, and 88.73% on FERPlus.
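As an illustration of confidence-error-based selection, here is a hypothetical scoring rule: a sample is flagged as uncertain when its predicted-class confidence disagrees with whether the prediction was actually correct. Both the margin and the rule itself are assumptions made for illustration; the paper's exact criterion may differ.

```python
import numpy as np

def select_reliable(confidences, correct, margin=0.25):
    """Flag samples whose confidence disagrees with their training history.

    `confidences` holds the softmax probability of the predicted class,
    `correct` whether that prediction matched the label. Samples with a
    large confidence error are treated as uncertain and excluded.
    """
    conf_error = np.abs(confidences - correct.astype(float))
    return conf_error < margin            # True = keep, False = uncertain

conf = np.array([0.95, 0.91, 0.42, 0.88])
hit = np.array([True, False, True, True])
print(select_reliable(conf, hit))  # over-confident misses and low-confidence hits are flagged
```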
Funding: supported by the National Natural Science Foundation of China under Grants 62476138 and 42375016.
Abstract: Continuous control protocols are extensively utilized in traditional multi-agent systems (MASs), in which information must be transmitted among agents consecutively, resulting in excessive consumption of limited resources. To decrease the control cost, several leader-following consensus (LFC) problems based on intermittent sampled control (ISC) are investigated for second-order MASs without and with time delay, respectively. First, an intermittent sampled controller is designed, and a sufficient and necessary condition is derived under which the state errors between the leader and all followers approach zero asymptotically. Considering that time delay is inevitable, a new protocol is proposed to deal with the time-delay situation. The error system's stability is analyzed using the Schur stability theorem, and sufficient and necessary conditions for LFC are obtained, which are closely associated with the coupling gain, the system parameters, and the network structure. Furthermore, for the case where current position and velocity information are not available, a distributed protocol is designed that depends only on sampled position information, and the sufficient and necessary conditions for LFC are also given. The results show that second-order MASs can achieve LFC if and only if the system parameters satisfy the inequalities proposed in the paper. Finally, the correctness of the obtained results is verified by numerical simulations.
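A toy simulation conveys the intermittent sampled control idea: the tracking error is sampled every h seconds and the resulting control is applied only during part of each period. All gains and timing parameters below are hypothetical choices for illustration, not the necessary-and-sufficient conditions derived in the paper.

```python
import numpy as np

# One double-integrator follower tracks a constant-velocity leader.
h, duty, k1, k2, dt = 0.1, 0.7, 2.0, 3.0, 1e-3
period = round(h / dt)                    # simulation steps per sampling period
x, v = 1.0, -2.0                          # follower position / velocity
x0, v0 = 0.0, 1.0                         # leader moves at constant velocity
u = 0.0
for step in range(int(20 / dt)):
    if step % period == 0:                # sampling instant t_k: refresh control
        u = -k1 * (x - x0) - k2 * (v - v0)
    active = (step % period) < duty * period   # control window, then rest interval
    x, v = x + v * dt, v + (u if active else 0.0) * dt
    x0 += v0 * dt
print(abs(x - x0), abs(v - v0))           # tracking errors decay toward zero
```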
Funding: supported by the ERC Advanced Grant QU-BOSS (QUantum advantage via nonlinear BOSon Sampling, Grant No. 884676) and by ICSC-Centro Nazionale di Ricerca in High Performance Computing, Big Data, and Quantum Computing, funded by the European Union-NextGenerationEU. D.S. acknowledges Thales Alenia Space Italia for supporting the PhD fellowship. N.S. acknowledges funding from Sapienza Università di Roma via Bando Ricerca 2020: Progetti di Ricerca Piccoli, Project No. RP120172B8A36B37.
Abstract: Quantum photonic processors are emerging as promising platforms for establishing preliminary evidence of quantum computational advantage on the way toward universal quantum computers. In the context of nonuniversal noisy intermediate-scale quantum devices, photonic sampling machines solving the Gaussian boson sampling (GBS) problem currently play a central role in experimental demonstrations of quantum computational advantage. A relevant issue is the validation of the sampling process in the presence of experimental noise, such as photon losses, which could undermine the hardness of simulating the experiment. We test the capability of a validation protocol that exploits the connection between GBS and the counting of perfect matchings in graphs to perform such an assessment in a noisy scenario. In particular, we use as a test bench the recently developed machine Borealis, a large-scale sampling machine that has been made available online for external users, and address its operation in the presence of noise. The employed approach to validation is also shown to provide connections with the open question of the effective advantage of using noisy GBS devices for graph similarity and isomorphism problems, and it thus provides an effective method for the certification of quantum hardware.
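The graph quantity underlying this validation approach is the number of perfect matchings, which for a 0/1 adjacency matrix equals its hafnian, the object GBS output probabilities are built on. A brute-force counter for small graphs, for intuition only:

```python
from functools import lru_cache

def count_perfect_matchings(adj):
    """Count perfect matchings of a small undirected graph given as a
    0/1 adjacency matrix (list of lists). Bitmask recursion, fine for
    roughly 20 vertices; purely illustrative.
    """
    n = len(adj)

    @lru_cache(maxsize=None)
    def rec(mask):
        if mask == 0:
            return 1
        i = (mask & -mask).bit_length() - 1        # lowest unmatched vertex
        rest = mask & ~(1 << i)
        total = 0
        for j in range(i + 1, n):
            if (rest >> j) & 1 and adj[i][j]:
                total += rec(rest & ~(1 << j))     # match i with j, recurse
        return total

    return rec((1 << n) - 1)

K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
print(count_perfect_matchings(K4))   # the complete graph K4 has 3 perfect matchings
```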
Abstract: In this paper, an image processing algorithm that can synthesize material textures of arbitrary shapes is proposed. The presented approach uses an arbitrary image to construct a structure layer of the material, and the resulting structure layer is then used to constrain the material texture synthesis. The structure layer is represented by a field of second-moment matrices. Many tests with various constraint images are conducted to verify that the proposed approach accurately reproduces the visual aspects of the input material sample. The results demonstrate that the proposed algorithm can accurately synthesize arbitrary-shaped material textures while respecting the local characteristics of the exemplar. This paves the way toward the synthesis of 3D material textures of arbitrary shapes from 2D material samples, which has a wide range of applications in virtual material design and materials characterization.
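The "field of second-moment matrices" is the classical structure tensor of image gradients; a minimal sketch of its computation is below. This is the standard construction; how the paper turns it into a structure layer that constrains synthesis is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(img, sigma=2.0):
    """Per-pixel second-moment (structure tensor) field of an image:
    Gaussian-smoothed outer products of the intensity gradient.
    """
    gy, gx = np.gradient(img.astype(float))   # gradients along rows / columns
    Jxx = gaussian_filter(gx * gx, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    return Jxx, Jxy, Jyy   # per-pixel 2x2 matrix [[Jxx, Jxy], [Jxy, Jyy]]
```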
Funding: supported by the National Natural Science Foundation of China (Grant No. 12071175).
Abstract: In this paper, we use sample average approximation with adaptive multiple importance sampling to explore moderate deviations for optimal values. Utilizing the moderate deviation principle for martingale differences and an appropriate delta method, we establish a moderate deviation principle for the optimal value. Moreover, for a functional form of stochastic programming, we obtain a functional moderate deviation principle for its optimal value.
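For orientation, the classical moderate deviation regime for a centered i.i.d. sample mean reads as follows; the paper's result for SAA optimal values under adaptive multiple importance sampling refines this with its own assumptions and rate function.

```latex
% Let \bar{X}_n be the mean of centered i.i.d. variables with
% Var(X_1) = \sigma^2, under a suitable exponential moment condition,
% and let b_n \to \infty with b_n/\sqrt{n} \to 0. Then, for
% I-continuity sets B,
\lim_{n\to\infty} \frac{1}{b_n^{2}}
  \log \mathbb{P}\!\left( \frac{\sqrt{n}}{b_n}\,\bar{X}_n \in B \right)
  = -\inf_{x\in B} \frac{x^{2}}{2\sigma^{2}} .
```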
Funding: sponsored by the National Natural Science Foundation of China (Grant No. 62271302) and the Shanghai Municipal Natural Science Foundation (Grant 20ZR1423500).
Abstract: Large amounts of labeled data are usually needed for training deep neural networks in medical image studies, particularly in medical image classification. However, in the field of semi-supervised medical image analysis, labeled data are very scarce due to patient privacy concerns: obtaining high-quality labeled images is exceedingly challenging for researchers because it requires manual annotation and clinical understanding. In addition, skin datasets are highly suitable for medical image classification studies due to the inter-class relationships and inter-class similarities of skin lesions. In this paper, we propose Coalition Sample Relation Consistency (CSRC), a consistency-based method that leverages canonical correlation analysis (CCA) to capture the intrinsic relationships between samples. Whereas traditional consistency-based models focus only on the consistency of predictions, we additionally explore the similarity between features using CCA. We enforce feature relation consistency on top of traditional models, encouraging the model to learn more meaningful information from unlabeled data. Finally, because cross-entropy loss is not well suited as the supervised loss when studying imbalanced datasets (i.e., ISIC 2017 and ISIC 2018), we improve the supervised loss to achieve better classification accuracy. Our study shows that the model performs better than many semi-supervised methods.
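A simplified stand-in for the feature-relation consistency idea: build each view's sample-similarity matrix on an unlabeled batch and penalize their disagreement. The sketch below replaces the paper's CCA coupling with plain cosine-similarity relation matrices, an assumption made for brevity.

```python
import torch
import torch.nn.functional as F

def relation_consistency(feat_student, feat_teacher):
    """Feature-relation consistency between two views of the same batch:
    each view's batch-by-batch similarity matrix should agree.
    """
    zs = F.normalize(feat_student, dim=1)
    zt = F.normalize(feat_teacher, dim=1)
    rel_s = zs @ zs.t()                    # batch x batch relation matrix
    rel_t = zt @ zt.t()
    return F.mse_loss(rel_s, rel_t.detach())

# total_loss = supervised_loss + w(t) * relation_consistency(f_student, f_teacher)
```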
Abstract: Geological samples often contain significant amounts of iron, which, although not typically the target element, can substantially interfere with the analysis of other elements of interest. To mitigate these interferences, amidoxime-based radiation-grafted adsorbents have been identified as effective for iron removal. In this study, an amidoxime-functionalized, radiation-grafted adsorbent synthesized from polypropylene waste (PPw-g-AO-10) was employed to remove iron from leached geological samples. The adsorption process was systematically optimized by investigating the effects of pH, contact time, adsorbent dosage, and initial ferric ion concentration. Under optimal conditions (pH 1.4, a contact time of 90 min, and an initial ferric ion concentration of 4500 mg/L), the adsorbent exhibited a maximum iron adsorption capacity of 269.02 mg/g. After the critical adsorption parameters were optimized, the adsorbent was applied to the leached geological samples, achieving 91% removal of the iron content. The adsorbent was regenerated through two consecutive cycles using 0.2 N HNO₃, achieving a regeneration efficiency of 65%. These findings confirm the efficacy of the synthesized PPw-g-AO-10 as a cost-effective and eco-friendly adsorbent for removing iron from leached geological matrices while maintaining a reasonable degree of reusability.
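The capacity and removal figures follow from standard batch-adsorption bookkeeping; the sketch below uses hypothetical volume, mass, and equilibrium values (not given in the abstract) chosen only so the arithmetic lands near the reported 269 mg/g.

```python
# Standard batch adsorption definitions with made-up numbers.
C0, Ce = 4500.0, 1810.0   # initial / equilibrium Fe(III), mg/L (Ce hypothetical)
V, m = 0.05, 0.5          # solution volume (L) and adsorbent mass (g), hypothetical
q = (C0 - Ce) * V / m     # adsorption capacity, mg/g
removal = 100 * (C0 - Ce) / C0
# Note: removal at this loading differs from the 91% reported on real leachates.
print(f"q = {q:.1f} mg/g, removal = {removal:.1f}%")
```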
Abstract: In real industrial scenarios, equipment cannot be operated in a faulty state for long, so the number of available fault samples is very limited, and data augmentation using generative adversarial networks has consequently found wide application for small-sample data. However, the generative adversarial networks currently applied in industrial processes do not impose realistic physical constraints on data generation, so the generated data lack physical consistency. To address this problem, this paper proposes a physical-consistency-based WGAN, designs a loss function containing physical constraints for industrial processes, and validates the effectiveness of the method on a common dataset in the field of industrial process fault diagnosis. The experimental results show that the proposed method not only makes the generated data consistent with the physical constraints of the industrial process but also achieves better fault diagnosis performance than existing GAN-based methods.
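One way to realize a physics-constrained WGAN objective is to add a squared constraint-residual penalty to the generator loss. The sketch below is generic, and the example balance residual is hypothetical rather than the paper's constraint set.

```python
import torch

def generator_loss(critic, fake, physics_residual, lam=10.0):
    """Physics-consistency WGAN generator objective (sketch): the usual
    Wasserstein term plus a penalty on violated process constraints.
    `physics_residual` maps a generated batch to per-sample violations
    (e.g. mass/energy balance residuals); its form is process-specific.
    """
    wasserstein = -critic(fake).mean()
    phys = physics_residual(fake).pow(2).mean()
    return wasserstein + lam * phys

# Hypothetical residual: generated variables x = [flow_in, flow_out, holdup_rate]
# should satisfy flow_in - flow_out - holdup_rate = 0.
residual = lambda x: x[:, 0] - x[:, 1] - x[:, 2]
```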
Funding: supported in part by the National Natural Science Foundation of China under Grants 61972267 and 61772070, and in part by the Natural Science Foundation of Hebei Province under Grant F2024210005.
Abstract: Face Presentation Attack Detection (fPAD) plays a vital role in securing face recognition systems against various presentation attacks. While supervised learning-based methods demonstrate effectiveness, they are prone to overfitting to known attack types and struggle to generalize to novel attack scenarios. Recent studies have explored formulating fPAD as an anomaly detection problem or a one-class classification task, enabling the training of generalized models for unknown attack detection. However, conventional anomaly detection approaches have difficulty precisely delineating the boundary between bonafide samples and unknown attacks. To address this challenge, we propose a novel framework for unknown attack detection that uses exclusively bonafide facial data during training. The core innovation lies in our pseudo-negative sample synthesis (PNSS) strategy, which facilitates the learning of compact decision boundaries between bonafide faces and potential attack variations. Specifically, PNSS generates synthetic negative samples within low-likelihood regions of the bonafide feature space to represent diverse unknown attack patterns. To overcome the inherent imbalance between positive and synthetic negative samples during iterative training, we implement a dual-loss mechanism combining focal loss for classification optimization with a pairwise confusion loss as a regularizer. This architecture effectively mitigates model bias toward bonafide samples while maintaining discriminative power. Comprehensive evaluations across three benchmark datasets validate the framework's superior performance. Notably, our PNSS achieves an 8%–18% reduction in average classification error rate (ACER) compared with state-of-the-art one-class fPAD methods in cross-dataset evaluations on the Idiap Replay-Attack and MSU-MFSD datasets.
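Both halves of the dual-loss mechanism are standard and easy to state; below is a textbook focal loss plus a half-batch pairwise confusion regularizer. The half-batch pairing is one common recipe and an assumption here, not necessarily the paper's scheme.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Standard focal loss: down-weights well-classified samples."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                       # probability of the true class
    return ((1 - pt) ** gamma * ce).mean()

def pairwise_confusion(probs):
    """Pairwise confusion regularizer: pulls the predicted distributions
    of random sample pairs toward each other to damp over-confidence.
    Pairs are formed by splitting the (shuffled) batch in half.
    """
    half = probs.size(0) // 2
    return (probs[:half] - probs[half:2 * half]).pow(2).sum(dim=1).mean()

# total = focal_loss(logits, y) + mu * pairwise_confusion(logits.softmax(dim=1))
```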
Abstract: Policy training against diverse opponents remains a challenge when using Multi-Agent Reinforcement Learning (MARL) in multiple Unmanned Combat Aerial Vehicle (UCAV) air combat scenarios. In view of this, this paper proposes a novel Dominant and Non-dominant strategy sample selection (DoNot) mechanism and a Local Observation Enhanced Multi-Agent Proximal Policy Optimization (LOE-MAPPO) algorithm to train the multi-UCAV air combat policy and improve its generalization. Specifically, the LOE-MAPPO algorithm adopts a mixed state that concatenates the global state with an individual agent's local observation to enable efficient value function learning in multi-UCAV air combat. The DoNot mechanism classifies opponents into dominant-strategy and non-dominant-strategy opponents and samples from easier to more challenging opponents to form an adaptive training curriculum. Empirical results demonstrate that the proposed LOE-MAPPO algorithm outperforms baseline MARL algorithms in multi-UCAV air combat scenarios, and the DoNot mechanism yields stronger policy generalization when facing diverse opponents. The results pave the way for the fast generation of cooperative strategies for air combat agents with MARL algorithms.
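The mixed critic input can be illustrated in a few lines: concatenate the global state with one agent's local observation. The agent one-hot identifier added below is a common MAPPO convention, assumed here rather than taken from the paper.

```python
import numpy as np

def mixed_state(global_state, local_obs, agent_id, n_agents):
    """Sketch of an LOE-MAPPO-style critic input: shared global state
    concatenated with one agent's local observation and identity.
    """
    one_hot = np.eye(n_agents)[agent_id]
    return np.concatenate([global_state, local_obs, one_hot])

s = np.random.rand(24)        # global battlefield state (dimension hypothetical)
o = np.random.rand(10)        # one UCAV's local observation
print(mixed_state(s, o, agent_id=1, n_agents=4).shape)   # (38,)
```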
Funding: supported by the National Natural Science Foundation of China (No. 62306281); the Natural Science Foundation of Zhejiang Province (Nos. LQ23E060006 and LTGG24E050005); the Key Research Plan of Jiaxing City (No. 2024BZ20016).
Abstract: In the era of big data, data-driven technologies are increasingly leveraged by industry to facilitate autonomous learning and intelligent decision-making. However, the challenge of "small samples in big data" emerges when datasets lack the comprehensive information necessary for addressing complex scenarios, which hampers adaptability; enhancing data completeness is therefore essential. Knowledge-guided virtual sample generation transforms domain knowledge into extensive virtual datasets, thereby reducing dependence on limited real samples and enabling zero-sample fault diagnosis. This study used building air conditioning systems as a case study. We innovatively used a large language model (LLM) to acquire domain knowledge for sample generation, significantly lowering knowledge acquisition costs and establishing a generalized framework for knowledge acquisition in engineering applications. The acquired knowledge guided the design of diffusion boundaries in mega-trend diffusion (MTD), while the Monte Carlo method was used to sample within the diffusion function to create information-rich virtual samples. Additionally, a noise-adding technique was introduced to increase the information entropy of these samples, thereby improving the robustness of neural networks trained on them. Experimental results showed that training the diagnostic model exclusively with virtual samples achieved an accuracy of 72.80%, significantly surpassing traditional small-sample supervised learning in terms of generalization, which underscores the quality and completeness of the generated virtual samples.
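A sketch of the virtual-sample pipeline: compute MTD diffusion bounds from a handful of real readings, draw Monte Carlo samples within them, and add small noise. The MTD formula below is a commonly published variant (assumed, not necessarily the paper's), and uniform sampling within the bounds is a simplification of sampling the diffusion (membership) function itself.

```python
import numpy as np

def mtd_bounds(x, eps=1e-20):
    """Mega-trend diffusion bounds in a common formulation: widen the
    small-sample range asymmetrically according to the skew of the data
    around the set center.
    """
    x = np.asarray(x, float)
    cl = (x.min() + x.max()) / 2                  # set center
    n_l, n_u = (x < cl).sum(), (x > cl).sum()
    skew_l, skew_u = n_l / (n_l + n_u), n_u / (n_l + n_u)
    var = x.var(ddof=1)
    lo = cl - skew_l * np.sqrt(-2 * (var / max(n_l, 1)) * np.log(eps))
    hi = cl + skew_u * np.sqrt(-2 * (var / max(n_u, 1)) * np.log(eps))
    return lo, hi

rng = np.random.default_rng(0)
real = rng.normal(21.5, 0.8, size=8)              # e.g. 8 real sensor readings
lo, hi = mtd_bounds(real)
virtual = rng.uniform(lo, hi, size=500)           # Monte Carlo virtual samples
virtual += rng.normal(0.0, 0.05, size=500)        # small noise to raise entropy
```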
Funding: supported by the Innovation Driven Development Foundation of Guangxi (No. AD22080035); the Open Project Funding of the Key Laboratory of Tropical Marine Ecosystem and Bioresource, Ministry of Natural Resources (No. 2023-QN04); the Guangdong Provincial Ordinary University Youth Innovative Talent Project in 2024 (No. 2024KQNCX134); the Guangdong Provincial Special Fund Project for Talent Development Strategy in 2024 (No. 2024R3005).
Abstract: Investigating whether sediment samples contain representative grain size distribution information is important for the accurate extraction of sediment characteristics and for related sedimentary record studies. This study comparatively analyzed the numerical and qualitative differences and the degree of correlation of 36 sets of characteristic grain size distribution parameters of surface sediment parallel samples from three sampling profiles at Jinsha Bay Beach in Zhanjiang, western Guangdong. At each sampling point, five parallel subsamples were established at intervals of 0, 10, 20, 50, and 100 cm along the coastline. The findings indicate the following: 1) relatively large differences in the mean values of the different parallel samples (0.19–0.34 Φ), with smaller differences in the other characteristic grain sizes (D₁₀, D₅₀, and D₉₀); 2) small differences in characteristic values among the various parallel-sample grain size parameters, with at least 33% of the combinations of qualitative results showing inconsistency; 3) 50% of the regression equations between the skewness of different parallel samples displaying no significant correlation; and 4) relative deviations of −47.91% to 27.63% and −49.20% to 2.08% between the particle size parameters of a single sample and the averaged parallel samples at intervals of 10 and 50 cm, respectively. Thus, small spatial differences, even within 100 cm, can considerably affect grain size parameters. Given the uncertainty over how representative a sample is, which may only cover the area immediately surrounding the sampling station, researchers are advised to design parallel sample collection strategies based on the spatiotemporal distribution characteristics of the parameters of interest during sediment sample collection. This study provides a typical case of the comparative analysis of parallel-sample grain size parameters, focusing on small-scale spatial variation in beach sediment, and contributes to an improved understanding of the accuracy and reliability of sediment sampling strategies and grain size information extraction.
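The characteristic parameters compared here follow standard graphic formulas; for example, the Folk-Ward graphic mean Mz = (φ16 + φ50 + φ84)/3 can be computed from a cumulative size curve as below. The two subsample distributions are made up for illustration.

```python
import numpy as np

def folk_ward_mean(diam_phi, weights):
    """Folk-Ward graphic mean grain size, Mz = (phi16 + phi50 + phi84) / 3,
    with percentiles read off the cumulative weight curve of the
    sieve/laser size distribution.
    """
    order = np.argsort(diam_phi)
    phi = np.asarray(diam_phi, float)[order]
    w = np.asarray(weights, float)[order]
    cum = np.cumsum(w) / w.sum() * 100.0
    p16, p50, p84 = (np.interp(p, cum, phi) for p in (16, 50, 84))
    return (p16 + p50 + p84) / 3.0

# Two parallel beach-sand subsamples (hypothetical weight distributions):
bins = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]       # size classes in phi units
a = folk_ward_mean(bins, [2, 8, 25, 35, 20, 8, 2])
b = folk_ward_mean(bins, [1, 6, 20, 34, 25, 11, 3])
print(abs(a - b))    # parallel-sample difference of the graphic mean, in phi
```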
Funding: supported in part by the National Natural Science Foundation of China (62373337, 62373333), the 111 Project (B17040), and the State Key Laboratory of Advanced Electromagnetic Technology (2024KF002).
Abstract: Dear Editor, this letter is concerned with stability analysis and stabilization design for sampled-data-based load frequency control (LFC) systems via a data-driven method. By describing the dynamic behavior of LFC systems with a data-based representation, a stability criterion is derived to obtain the admissible maximum sampling interval (MSI) for a given controller, and a design condition for the PI-type controller is further developed to meet the required MSI. Finally, the effectiveness of the proposed methods is verified by a case study.
Funding: supported partially by the National Natural Science Foundation of China (No. U19A2063) and the Jilin Provincial Science & Technology Development Program of China (No. 20230201080GX).
Abstract: Currently, the main idea of iterative rendering methods is to allocate a fixed number of samples to pixels that have not been fully rendered, based on a computed completion rate. This strategy clearly ignores the changes in pixel values during the preceding rendering passes, which may result in additional iterative operations.
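A change-aware alternative is easy to sketch: weight each pixel's share of the per-iteration sample budget by how much its running estimate moved in the previous pass. The weighting below is a hypothetical choice illustrating the abstract's point, not a published allocator.

```python
import numpy as np

def allocate(prev_frame, curr_frame, budget, min_samples=1):
    """Distribute an (approximate) per-iteration sample budget in
    proportion to each pixel's change between successive estimates;
    flooring and the per-pixel minimum mean the total is near, not
    exactly, `budget`.
    """
    change = np.abs(curr_frame - prev_frame).reshape(-1)
    w = change + 1e-8                            # avoid an all-zero weight map
    alloc = np.maximum(min_samples,
                       np.floor(budget * w / w.sum()).astype(int))
    return alloc.reshape(curr_frame.shape)

prev = np.random.rand(4, 4)
curr = prev + np.random.rand(4, 4) * 0.1         # pixels still converging
print(allocate(prev, curr, budget=256))
```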
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 42241106 and 42388101).
Abstract: The exploration of asteroids has received increasing attention since the 1990s because of the unique information these objects contain about the history of the early solar system. Quasi-satellites are a population of asteroids that co-orbit closely with a planet while remaining outside its gravitational control. So far, only five Earth quasi-satellites have been recognized, among which (469219) Kamo’oalewa (provisionally designated 2016 HO3) is currently considered the most stable and the closest. However, little is known about this particular asteroid, or about this class of near-Earth asteroids, because of the difficulties of observing them. China has announced that Tianwen-2, the asteroid sample-return mission to Kamo’oalewa, will be launched in 2025. Here, we review the current knowledge of Kamo’oalewa in terms of its physical characteristics, dynamical evolution, surface environment, and origin, and we propose possible breakthroughs that the samples could bring concerning Kamo’oalewa as an Earth quasi-satellite. Confirming the origin of Kamo’oalewa, whose prevailing candidate provenance is debris of the Moon, could be a promising start to inferring the evolutionary history of the Moon, probably including a more comprehensive view of the lunar farside and the origin of the asymmetry between the two sides of the Moon. Comparing samples from the Moon and Kamo’oalewa would also provide new insights into the Earth wind.
Funding: supported by grants from the Human Resources Development Program (Grant No. 20204010600250) and the Training Program of CCUS for the Green Growth (Grant No. 20214000000500) of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), funded by the Ministry of Trade, Industry, and Energy of the Korean Government (MOTIE).
Abstract: Three-dimensional printing (3DP) offers valuable insight into the characterization of natural rocks and the verification of theoretical models due to its high reproducibility and accurate replication of complex defects such as cracks and pores. In this study, 3DP gypsum samples with different printing directions were subjected to a series of uniaxial compression tests with in situ micro-computed tomography (micro-CT) scanning to quantitatively investigate their mechanical anisotropy and damage evolution characteristics. Based on the two-dimensional (2D) CT images obtained at different scanning steps, a novel void ratio variable was derived using the mean value and variance of the CT intensity. Additionally, a constitutive model incorporating the proposed damage variable was formulated, utilizing the void ratio variable. The crack evolution and crack morphology of the 3DP gypsum samples were obtained and analyzed using 3D models reconstructed from the CT images. The results indicate that 3DP gypsum samples exhibit mechanical anisotropy similar to that found in naturally sedimentary rocks. The anisotropy is attributed to the bedding planes formed between adjacent layers and to pillar-like structures along the printing direction formed by CaSO₄·2H₂O crystals of needle-like morphology. The mean gray intensity of the voids has a positive linear relationship with the threshold value, while the CT variance and the void ratio have concave and convex relationships with it, respectively. The constitutive model effectively matches the stress–strain curves obtained from the uniaxial compression experiments. This study provides comprehensive explanations of the failure modes and anisotropy mechanisms of 3DP gypsum samples, which is important for characterizing and understanding the failure mechanism and microstructural evolution of 3DP rocks when modeling natural rock behavior.
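Per-slice statistics of the kind the void ratio variable is built from are straightforward to extract; the sketch below computes the mean and variance of CT intensity and a threshold-based void fraction. The paper's specific derivation of its void ratio variable from these statistics is not reproduced here.

```python
import numpy as np

def ct_void_stats(slice_gray, threshold):
    """Generic per-slice CT statistics: intensity mean and variance,
    plus the area fraction of pixels below a gray threshold (treating
    low intensity as pore space).
    """
    g = np.asarray(slice_gray, float)
    voids = g < threshold
    return {
        "mean_intensity": g.mean(),
        "var_intensity": g.var(),
        "void_ratio": voids.mean(),        # area fraction of voids
        "mean_void_gray": g[voids].mean() if voids.any() else np.nan,
    }

slice_img = np.random.randint(0, 255, size=(128, 128))   # stand-in CT slice
print(ct_void_stats(slice_img, threshold=60))
```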