Abstract: A maximum test is proposed in lieu of forcing a choice between the dependent-samples t-test and the Wilcoxon signed-ranks test. The maximum test, which requires a new table of critical values, maintains nominal α while guaranteeing the maximum power of the two constituent tests. Critical values, obtained via Monte Carlo methods, are uniformly smaller than the Bonferroni-Dunn adjustment, giving the maximum test superior power when testing for treatment alternatives of shift in location parameter with data sampled from non-normal distributions.
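A minimal sketch of the idea, assuming standard SciPy routines: run both constituent tests, standardize each statistic, take the larger absolute value, and compare it against a Monte Carlo critical value rather than a Bonferroni-Dunn-adjusted one. The normal approximation for the signed-rank statistic and the α level are illustrative, not the paper's tabulated procedure.

```python
import numpy as np
from scipy import stats

def max_test_stat(x, y):
    """Larger absolute standardized statistic of the t and Wilcoxon tests."""
    t = stats.ttest_rel(x, y).statistic
    w = stats.wilcoxon(x, y).statistic
    n = len(x)
    mu_w = n * (n + 1) / 4                        # null mean of the signed-rank sum
    sd_w = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return max(abs(t), abs((w - mu_w) / sd_w))    # normal approximation for W

def mc_critical_value(n, alpha=0.05, reps=20_000, seed=0):
    """Monte Carlo critical value of the maximum statistic under a normal null."""
    rng = np.random.default_rng(seed)
    null = [max_test_stat(rng.standard_normal(n), rng.standard_normal(n))
            for _ in range(reps)]
    return float(np.quantile(null, 1 - alpha))

# Reject H0 when max_test_stat(x, y) exceeds mc_critical_value(len(x)).
```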
Funding: Supported by the Key National Natural Science Foundation of China (No. U1864211), the National Natural Science Foundation of China (No. 11772191), and the Natural Science Foundation of Shanghai (No. 21ZR1431500).
Abstract: Industrial data mining usually deals with data from different sources. These heterogeneous datasets describe the same object from different views. However, samples from some of the datasets may be lost, so the remaining samples no longer correspond one-to-one. Mismatched datasets caused by missing samples make the industrial data unavailable for further machine learning. In order to align the mismatched samples, this article presents a cooperative iteration matching method (CIMM) based on modified dynamic time warping (DTW). The proposed method regards the sequentially accumulated industrial data as time series, and mismatched samples are aligned by the DTW. In addition, dynamic constraints are applied to the warping distance of the DTW process to make the alignment more efficient. A series of models is then trained iteratively with the accumulated samples. Several groups of numerical experiments on different missing patterns and missing locations are designed and analyzed to demonstrate the effectiveness and applicability of the proposed method.
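A minimal DTW alignment sketch of the kind CIMM builds on: two sample sequences are aligned by minimizing cumulative distance over a banded search region. The Sakoe-Chiba-style `window` stands in for the paper's dynamic constraints on the warping distance, whose exact form is not given here.

```python
import numpy as np

def dtw_align(a, b, window=None):
    """Return (distance, matched index pairs) for 1-D sequences a and b."""
    n, m = len(a), len(b)
    w = window if window is not None else max(n, m)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack to recover the alignment path (pairs of matched indices).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]
```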
Funding: Financially supported by the Shenzhen Science and Technology Program (Nos. JSGG20220831105002005 and KJZD20231025152759002), the National Natural Science Foundation of China (Nos. 52374357 and 523B2101), and the Shared Voyages Project for Deep-sea and Abyss Scientific Research and Equipment Sea Trials of the Hainan Deep-Sea Technology Innovation Center (No. DSTIC-GXHC-2022002).
Abstract: Marine gas hydrates are highly sensitive to temperature and pressure fluctuations, and deviations from in-situ conditions may cause irreversible changes in phase state, microstructure, and mechanical properties. However, conventional samplers often fail to maintain sealing and thermal stability, resulting in low sampling success rates. To address these challenges, an in-situ temperature- and pressure-preserved sampler for marine applications has been developed. The experimental results indicate that the self-developed magnetically controlled pressure-preserving controller reliably achieves autonomous triggering and self-sealing, provides an initial sealing force of 83 N, and is capable of maintaining pressures up to 40 MPa. Additionally, a custom-designed intelligent temperature-control chip and high-precision sensors were integrated into the sampler. Through an optimized heat-transfer structure, a temperature-preserving system was developed, limiting the temperature rise to no more than 0.3 °C within 2 h. Performance evaluation and sampling operations were conducted at the Haima Cold Seep in the South China Sea, resulting in the successful recovery of hydrate maintained at an in-situ pressure of 13.8 MPa and a temperature of 6.5 °C. This advancement enables the acquisition of high-fidelity hydrate samples, providing critical support for the safe exploitation and scientific analysis of marine gas hydrate resources.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12304359, 12304398, 12404382, 12234009, 12274215, and 12427808), the China Postdoctoral Science Foundation (Grant No. 2023M731611), the Jiangsu Funding Program for Excellent Postdoctoral Talent (Grant No. 2023ZB717), the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301400), the Key R&D Program of Jiangsu Province (Grant No. BE2023002), the Natural Science Foundation of Jiangsu Province (Grant Nos. BK20220759 and BK20233001), the Program for Innovative Talents and Entrepreneurs in Jiangsu, and the Key R&D Program of Guangdong Province (Grant No. 2020B0303010001).
Abstract: As an emerging microscopic detection tool, quantum microscopes based on the principle of quantum precision measurement have attracted widespread attention in recent years. Compared with imaging using classical light, quantum-enhanced imaging can achieve ultra-high resolution, ultra-sensitive detection, and anti-interference imaging. Here, we introduce a quantum-enhanced scanning microscope illuminated by a polarization-entangled NOON state. For phase imaging with NOON states, we propose a simple four-basis projection method to replace the four-step phase-shifting method. We have achieved phase imaging of micrometer-sized birefringent samples and biological cell specimens, with sensitivity close to the Heisenberg limit. The visibility of transmittance-based imaging shows a great enhancement for NOON states. Moreover, we demonstrate that scanning imaging with NOON states enables a spatial resolution enhancement of √N compared with classical measurement. Our imaging method may provide a reference for the practical application of quantum imaging and is expected to promote the development of microscopic detection.
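For context, the standard NOON-state phase relations behind these claims (textbook background rather than the paper's derivation): an N-photon NOON state acquires phase N times faster than a single photon, which yields both the super-resolved fringes and the 1/N Heisenberg-limited phase uncertainty.

```latex
\[
  \frac{1}{\sqrt{2}}\bigl(|N,0\rangle + |0,N\rangle\bigr)
  \;\longrightarrow\;
  \frac{1}{\sqrt{2}}\bigl(|N,0\rangle + e^{iN\varphi}|0,N\rangle\bigr),
  \qquad
  P(\varphi) \propto 1 + V\cos(N\varphi),
\]
\[
  \Delta\varphi_{\mathrm{HL}} = \frac{1}{N}
  \quad\text{vs.}\quad
  \Delta\varphi_{\mathrm{SNL}} = \frac{1}{\sqrt{N}}
  \quad\text{(shot-noise limit for } N \text{ unentangled photons).}
\]
```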
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12272259 and 52005148).
Abstract: An intelligent diagnosis method based on self-adaptive Wasserstein dual generative adversarial networks and feature fusion is proposed to address problems commonly faced by rolling bearings, such as insufficient sample size and incomplete fault feature extraction, which lead to low diagnostic accuracy. Initially, dual models of the Wasserstein deep convolutional generative adversarial network incorporating gradient penalty (1D-2DWDCGAN) are constructed to augment the original dataset. A self-adaptive loss-threshold control training strategy is introduced, establishing a self-adaptive balancing mechanism for stable model training. Subsequently, a diagnostic model based on multidimensional feature fusion is designed, wherein complex features from various dimensions are extracted, merging the original signal waveform features, structured features, and time-frequency features into a deep composite feature representation spanning multiple dimensions and scales; thus, efficient and accurate small-sample fault diagnosis is facilitated. Finally, experiments on the Case Western Reserve University bearing fault dataset and the fault simulation experimental platform dataset of this research group show that this method effectively supplements the dataset and markedly improves diagnostic accuracy. The diagnostic accuracy after data augmentation reached 99.94% and 99.87% in the two experimental environments, respectively. In addition, robustness analysis of the diagnostic accuracy under different noise backgrounds verifies the method's good generalization performance.
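As a point of reference, a minimal critic loss for a WGAN with gradient penalty, the construction the 1D-2DWDCGAN name implies (standard formulation; the network architecture and the penalty weight λ are illustrative, and the paper's self-adaptive threshold control is not reproduced):

```python
import torch

def critic_loss(critic, real, fake, lambda_gp=10.0):
    # Wasserstein estimate: the critic should score real high and fake low.
    w_loss = critic(fake).mean() - critic(real).mean()
    # Gradient penalty on random interpolates between real and fake samples.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    gp = ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
    return w_loss + lambda_gp * gp
```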
Funding: Supported by grants from the National Natural Science Foundation of China (grant numbers 82370195, 82270203, and 81770211), the Fundamental Research Funds for the Central Universities (grant number 2022CDJYGRH-001), and the Chongqing Technology Innovation and Application Development Special Key Project (grant number CSTB2024TIAD-KPX0031).
Abstract: Flow cytometry (FCM), characterized by its simplicity, rapid processing, multiparameter analysis, and high sensitivity, is widely used in the diagnosis, treatment, and prognosis of hematological malignancies. FCM testing of tissue samples not only aids in diagnosing and classifying hematological cancers but also enables the detection of solid tumors. Its ability to detect numerous marker parameters from small samples is particularly useful when dealing with limited cell quantities, such as fine-needle biopsy samples. This attribute not only addresses the challenge posed by small sample sizes but also boosts the sensitivity of tumor cell detection. The significance of FCM in clinical and pathological applications continues to grow. To standardize the use of FCM in detecting hematological malignant cells in tissue samples and to improve quality control during the detection process, experts from the Cell Analysis Professional Committee of the Chinese Society of Biotechnology jointly drafted and agreed upon this consensus. The consensus was formulated based on the current literature and the clinical practices of experts across clinical, laboratory, and pathological fields in China. It outlines a comprehensive workflow for FCM-based detection of hematological malignancies in tissue samples, including report content, interpretation, quality control, and key considerations. Additionally, it provides recommendations on antibody panel designs and analytical approaches for enhancing FCM tests, particularly in cases with limited sample sizes.
Abstract: In the task of Facial Expression Recognition (FER), data uncertainty has been a critical factor affecting performance, typically arising from the ambiguity of facial expressions, low-quality images, and the subjectivity of annotators. Tracking the training history reveals that misclassified samples often exhibit high confidence and excessive uncertainty in the early stages of training. To address this issue, we propose an uncertainty-based robust sample selection strategy, which combines confidence error with RandAugment to improve image diversity, effectively reducing overfitting caused by uncertain samples during deep learning model training. To validate the effectiveness of the proposed method, extensive experiments were conducted on public FER benchmarks. The accuracies obtained were 89.08% on RAF-DB, 63.12% on AffectNet, and 88.73% on FERPlus.
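A hedged sketch of how such a selection rule might look, assuming images arrive as uint8 tensors: a per-sample confidence error flags uncertain samples, which are then re-augmented with torchvision's RandAugment. The threshold τ and the selection rule are illustrative, not the paper's exact criterion.

```python
import torch
from torchvision import transforms

randaug = transforms.RandAugment(num_ops=2, magnitude=9)

def select_and_augment(logits, labels, images, tau=0.3):
    """images: batch of uint8 tensors (B, C, H, W), as RandAugment expects."""
    probs = torch.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    correct = (pred == labels).float()
    conf_error = (conf - correct).abs()      # large gap => unreliable sample
    uncertain = conf_error > tau
    augmented = images.clone()
    for i in torch.nonzero(uncertain).flatten():
        augmented[i] = randaug(images[i])    # diversify the uncertain samples
    return augmented, uncertain
```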
Funding: Supported by the National Natural Science Foundation of China under Grants 62476138 and 42375016.
Abstract: Continuous control protocols are extensively utilized in traditional multi-agent systems (MASs), in which information must be transmitted among agents consecutively, resulting in excessive consumption of limited resources. To decrease the control cost, several leader-following consensus (LFC) problems are investigated for second-order MASs without and with time delay, based on intermittent sampled control (ISC). Firstly, an intermittent sampled controller is designed, and a sufficient and necessary condition is derived under which the state errors between the leader and all followers approach zero asymptotically. Considering that time delay is inevitable, a new protocol is proposed to handle the time-delay case. The error system's stability is analyzed using the Schur stability theorem, and sufficient and necessary conditions for LFC are obtained, which are closely associated with the coupling gain, the system parameters, and the network structure. Furthermore, for the case where current position and velocity information are not available, a distributed protocol is designed that depends only on sampled position information; the sufficient and necessary conditions for LFC are also given. The results show that second-order MASs achieve LFC if and only if the system parameters satisfy the inequalities proposed in the paper. Finally, the correctness of the obtained results is verified by numerical simulations.
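To make the position-only idea concrete, here is a representative intermittent sampled protocol for a second-order follower, in a generic form from the sampled-consensus literature rather than the paper's exact controller: velocities are replaced by differences of two consecutive position samples taken h apart, and the control is switched off during rest intervals, which is what makes it intermittent.

```latex
\[
  u_i(t) =
  \begin{cases}
    \alpha \displaystyle\sum_{j \in \mathcal{N}_i} a_{ij}\bigl(x_j(t_k)-x_i(t_k)\bigr)
    + \beta \displaystyle\sum_{j \in \mathcal{N}_i} a_{ij}\,
      \frac{\bigl(x_j(t_k)-x_i(t_k)\bigr)-\bigl(x_j(t_{k-1})-x_i(t_{k-1})\bigr)}{h},
      & t \in [t_k,\, t_k+\delta), \\[2mm]
    0, & t \in [t_k+\delta,\, t_{k+1}),
  \end{cases}
\]
```

Here a_{ij} are adjacency weights of the communication graph, α and β are coupling gains, and δ is the width of the control-on interval; all symbols are placeholders.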
Funding: Supported by the ERC Advanced Grant QU-BOSS (QUantum advantage via nonlinear BOSon Sampling, Grant No. 884676) and by ICSC-Centro Nazionale di Ricerca in High Performance Computing, Big Data, and Quantum Computing, funded by the European Union-NextGenerationEU. D.S. acknowledges Thales Alenia Space Italia for supporting the PhD fellowship. N.S. acknowledges funding from Sapienza Università di Roma via Bando Ricerca 2020: Progetti di Ricerca Piccoli, Project No. RP120172B8A36B37.
Abstract: Quantum photonic processors are emerging as promising platforms for demonstrating preliminary evidence of quantum computational advantage on the way toward universal quantum computers. In the context of nonuniversal noisy intermediate-scale quantum devices, photonic sampling machines solving the Gaussian boson sampling (GBS) problem currently play a central role in experimental demonstrations of quantum computational advantage. A relevant issue is the validation of the sampling process in the presence of experimental noise, such as photon losses, which could undermine the hardness of simulating the experiment. We test the capability of a validation protocol that exploits the connection between GBS and counting perfect matchings in graphs to perform such an assessment in a noisy scenario. In particular, we use as a test bench the recently developed machine Borealis, a large-scale sampling machine that has been made available online to external users, and address its operation in the presence of noise. The employed approach to validation is also shown to provide connections with the open question of the effective advantage of using noisy GBS devices for graph similarity and isomorphism problems, and thus provides an effective method for the certification of quantum hardware.
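The connection the protocol exploits rests on a standard GBS relation (background, with normalization omitted): the probability of a photon-number pattern S is governed by the hafnian of the corresponding submatrix of the encoded symmetric matrix A, and for a 0/1 adjacency matrix the hafnian counts the perfect matchings of the induced subgraph.

```latex
\[
  \Pr(S) \;\propto\; \frac{\left|\operatorname{Haf}(A_S)\right|^{2}}{\prod_i s_i!},
  \qquad
  \operatorname{Haf}(A) \;=\; \sum_{M \in \mathrm{PM}(G)} \;\prod_{(i,j) \in M} A_{ij},
\]
```

where PM(G) is the set of perfect matchings of the graph G whose adjacency matrix is A, and s_i are the photon counts in pattern S.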
Abstract: In this paper, an image processing algorithm that can synthesize material textures of arbitrary shapes is proposed. The presented approach uses an arbitrary image to construct a structure layer of the material, and the resulting structure layer is then used to constrain the material texture synthesis. A field of second-moment matrices is used to represent the structure layer. Tests with various constraint images confirm that the proposed approach accurately reproduces the visual aspects of the input material sample. The results demonstrate that the proposed algorithm can accurately synthesize arbitrary-shaped material textures while respecting the local characteristics of the exemplar. This paves the way toward the synthesis of 3D material textures of arbitrary shapes from 2D material samples, which has a wide range of applications in virtual material design and materials characterization.
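A minimal sketch of computing a field of second-moment matrices (structure tensors) from a grayscale image, the representation used here for the structure layer; the Sobel gradients and the smoothing scale σ are conventional choices, not the paper's specific settings.

```python
import numpy as np
from scipy import ndimage

def structure_tensor_field(img, sigma=2.0):
    """Return an (H, W, 2, 2) field of smoothed second-moment matrices."""
    img = img.astype(float)
    gx = ndimage.sobel(img, axis=1)   # horizontal gradient
    gy = ndimage.sobel(img, axis=0)   # vertical gradient
    # Second-moment components, averaged over a local Gaussian neighborhood.
    jxx = ndimage.gaussian_filter(gx * gx, sigma)
    jxy = ndimage.gaussian_filter(gx * gy, sigma)
    jyy = ndimage.gaussian_filter(gy * gy, sigma)
    # Stack into a per-pixel symmetric 2x2 matrix.
    return np.stack([np.stack([jxx, jxy], -1),
                     np.stack([jxy, jyy], -1)], -2)
```

The eigenvectors of each matrix give the dominant local orientation and its anisotropy, which is what makes this field a usable constraint for texture synthesis.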
Funding: Supported by the National Natural Science Foundation of China (Grant No. 12071175).
Abstract: In this paper, we use sample average approximation with adaptive multiple importance sampling to explore moderate deviations for the optimal value. Utilizing the moderate deviation principle for martingale differences and an appropriate Delta method, we establish a moderate deviation principle for the optimal value. Moreover, for a functional form of stochastic programming, we obtain a functional moderate deviation principle for its optimal value.
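For orientation, the sample average approximation (SAA) setup behind the result, in standard notation (the weights w_i stand for the adaptive multiple importance-sampling weights; this is the generic definition, not the paper's precise scheme):

```latex
\[
  z^{*} \;=\; \min_{x \in X}\; \mathbb{E}\left[F(x,\xi)\right],
  \qquad
  \hat{z}_n \;=\; \min_{x \in X}\; \frac{1}{n} \sum_{i=1}^{n} w_i\, F(x,\xi_i).
\]
```

A moderate deviation principle then controls probabilities of the form P(|ẑ_n − z*| ≥ ε b_n/√n) for scales b_n with b_n → ∞ and b_n/√n → 0, the regime between the central limit theorem and large deviations.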
Funding: Sponsored by the National Natural Science Foundation of China Grant No. 62271302 and the Shanghai Municipal Natural Science Foundation Grant 20ZR1423500.
Abstract: Large amounts of labeled data are usually needed for training deep neural networks in medical image studies, particularly in medical image classification. However, in the field of semi-supervised medical image analysis, labeled data are very scarce due to patient privacy concerns. For researchers, obtaining high-quality labeled images is exceedingly challenging because it involves manual annotation and clinical understanding. In addition, skin datasets are highly suitable for medical image classification studies due to the inter-class relationships and similarities of skin lesions. In this paper, we propose a model called Coalition Sample Relation Consistency (CSRC), a consistency-based method that leverages Canonical Correlation Analysis (CCA) to capture the intrinsic relationships between samples. Whereas traditional consistency-based models focus only on the consistency of predictions, we additionally explore the similarity between features using CCA. We enforce feature relation consistency on top of traditional models, encouraging the model to learn more meaningful information from unlabeled data. Finally, considering that cross-entropy is not well suited as the supervised loss when studying imbalanced datasets (i.e., ISIC 2017 and ISIC 2018), we improve the supervised loss to achieve better classification accuracy. Our study shows that this model performs better than many semi-supervised methods.
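A hedged sketch of feature relation consistency in the spirit of CSRC: build each branch's sample-relation matrix from per-sample features and penalize their disagreement. Cosine similarity stands in for the paper's CCA-based coalition formulation, which is not reproduced here.

```python
import torch
import torch.nn.functional as F

def relation_consistency_loss(feat_student, feat_teacher):
    """Match the student's sample-relation matrix to the teacher's."""
    zs = F.normalize(feat_student, dim=1)
    zt = F.normalize(feat_teacher, dim=1)
    rel_s = zs @ zs.t()                  # (B, B) pairwise similarities
    rel_t = zt @ zt.t()
    return F.mse_loss(rel_s, rel_t.detach())
```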
Abstract: Geological samples often contain significant amounts of iron, which, although not typically the target element, can substantially interfere with the analysis of other elements of interest. To mitigate these interferences, amidoxime-based radiation-grafted adsorbents have been identified as effective for iron removal. In this study, an amidoxime-functionalized, radiation-grafted adsorbent synthesized from polypropylene waste (PPw-g-AO-10) was employed to remove iron from leached geological samples. The adsorption process was systematically optimized by investigating the effects of pH, contact time, adsorbent dosage, and initial ferric ion concentration. Under optimal conditions (pH 1.4, a contact time of 90 min, and an initial ferric ion concentration of 4500 mg/L), the adsorbent exhibited a maximum iron adsorption capacity of 269.02 mg/g. After optimizing the critical adsorption parameters, the adsorbent was applied to the leached geological samples, achieving 91% removal of the iron content. The adsorbent was regenerated through two consecutive cycles using 0.2 N HNO₃, achieving a regeneration efficiency of 65%. These findings confirm the efficacy of the synthesized PPw-g-AO-10 as a cost-effective and eco-friendly adsorbent for removing iron from leached geological matrices while maintaining a reasonable degree of reusability.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51675040).
Abstract: The electromagnetic pulse valve, as a key component in baghouse dust removal systems, plays a crucial role in system performance. However, despite the promising results of data-intensive intelligent fault diagnosis methods for electromagnetic valves, real-world diagnostic scenarios still face numerous challenges. Collecting fault data for electromagnetic pulse valves is both time-consuming and costly, making it difficult to obtain sufficient fault data in advance and posing challenges for small-sample fault diagnosis. To address this issue, this paper proposes a fault diagnosis method for electromagnetic pulse valves based on deep transfer learning and simulated data. The method achieves effective transfer from simulated data to real data through four parameter-transfer strategies that combine parameter freezing and fine-tuning. Furthermore, the paper identifies a parameter-transfer strategy that fine-tunes the feature extractor and classifier simultaneously, and introduces an attention mechanism to integrate fault features, thereby enhancing the correlation and information complementarity among multi-sensor data. The effectiveness of the proposed method is evaluated through two fault diagnosis cases under different operating conditions. In this study, small-sample data accounted for 7.9% and 8.2% of the total dataset, and the experiments showed transfer accuracies of 93.5% and 94.2%, respectively, validating the reliability and effectiveness of the method under small-sample conditions.
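A minimal PyTorch sketch of the parameter-transfer idea, assuming a hypothetical model with a `feature` extractor and a `classifier` head and a hypothetical checkpoint path; the attention-based fusion is omitted.

```python
import torch
import torch.nn as nn

class ValveNet(nn.Module):
    """Illustrative model: 1-D conv feature extractor + linear classifier."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.feature = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten())
        self.classifier = nn.Linear(16 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.feature(x))

model = ValveNet()
# model.load_state_dict(torch.load("pretrained_on_simulated.pt"))  # hypothetical path

# One strategy freezes the transferred feature extractor and tunes only the head:
#   for p in model.feature.parameters(): p.requires_grad = False
# The strategy the paper identifies as best instead fine-tunes both parts,
# e.g. with a smaller learning rate on the transferred layers:
optimizer = torch.optim.Adam([
    {"params": model.feature.parameters(), "lr": 1e-4},
    {"params": model.classifier.parameters(), "lr": 1e-3},
])
```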
Funding: Supported by the Hubei Provincial Department of Education Science and Technology Plan Project (Grant No. B2022165).
Abstract: To analyze the complexity of interval-valued time series (ITSs), a novel interval multiscale sample entropy (IMSE) methodology is proposed in this paper. To validate the effectiveness and feasibility of IMSE in characterizing ITS complexity, the method is first applied to simulated time series. The experimental results demonstrate that IMSE not only successfully identifies series complexity and long-range autocorrelation patterns but also effectively captures the intrinsic relationships between interval boundaries. Furthermore, the test results show that IMSE can also be applied to measure the complexity of multivariate time series of equal length. Subsequently, IMSE is applied to interval temperature series (2000–2023) from four Chinese cities: Shanghai, Kunming, Chongqing, and Nagqu. The results show that IMSE not only distinctly differentiates temperature patterns across cities but also effectively quantifies complexity and long-term autocorrelation in ITSs. All the results indicate that IMSE is an alternative and effective method for studying the complexity of ITSs.
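For reference, a minimal scalar multiscale sample entropy, the building block IMSE extends to interval-valued series (standard definition; the interval-specific distance the paper introduces is not reproduced):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B): A, B count template matches of length m+1 and m."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count_matches(dim):
        emb = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)  # Chebyshev
        n = len(emb)
        return ((d <= tol).sum() - n) / 2      # exclude self-matches
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_sample_entropy(x, scales=(1, 2, 3), m=2, r=0.2):
    """Coarse-grain by non-overlapping means, then apply SampEn per scale."""
    x = np.asarray(x, dtype=float)
    return [sample_entropy(x[:len(x) // s * s].reshape(-1, s).mean(axis=1), m, r)
            for s in scales]
```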
Abstract: In real industrial scenarios, equipment cannot be operated in a faulty state for long, so the number of available fault samples is very limited, and data augmentation using generative adversarial networks has been widely applied to such small-sample data. However, current generative adversarial networks applied in industrial processes do not impose realistic physical constraints on data generation, so the generated data lack physical consistency. To address this problem, this paper proposes a physical-consistency-based WGAN and designs a loss function containing physical constraints for industrial processes, validating the effectiveness of the method on a common dataset in the field of industrial process fault diagnosis. The experimental results show that the proposed method not only makes the generated data consistent with the physical constraints of the industrial process but also achieves better fault diagnosis performance than existing GAN-based methods.
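A hedged sketch of a physics-consistency penalty added to a WGAN generator loss: `physics_residual` is a placeholder for whatever algebraic constraint the process imposes (e.g., a mass or energy balance that real samples satisfy); the paper's actual constraints are not given here.

```python
import torch

def generator_loss(critic, fake, physics_residual, lambda_phys=1.0):
    adv = -critic(fake).mean()                    # standard WGAN generator term
    phys = physics_residual(fake).pow(2).mean()   # penalize constraint violation
    return adv + lambda_phys * phys

# Example placeholder constraint: generated channels 0 and 1 should sum to
# channel 2 (an illustrative balance equation, not from the paper).
def physics_residual(x):
    return x[:, 0] + x[:, 1] - x[:, 2]
```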
Funding: Supported in part by the National Natural Science Foundation of China under Grants 61972267 and 61772070, and in part by the Natural Science Foundation of Hebei Province under Grant F2024210005.
Abstract: Face Presentation Attack Detection (fPAD) plays a vital role in securing face recognition systems against various presentation attacks. While supervised learning-based methods demonstrate effectiveness, they are prone to overfitting to known attack types and struggle to generalize to novel attack scenarios. Recent studies have explored formulating fPAD as an anomaly detection problem or a one-class classification task, enabling the training of generalized models for unknown attack detection. However, conventional anomaly detection approaches have difficulty precisely delineating the boundary between bonafide samples and unknown attacks. To address this challenge, we propose a novel framework focusing on unknown attack detection that uses exclusively bonafide facial data during training. The core innovation lies in our pseudo-negative sample synthesis (PNSS) strategy, which facilitates the learning of compact decision boundaries between bonafide faces and potential attack variations. Specifically, PNSS generates synthetic negative samples within low-likelihood regions of the bonafide feature space to represent diverse unknown attack patterns. To overcome the inherent imbalance between positive and synthetic negative samples during iterative training, we implement a dual-loss mechanism combining focal loss for classification optimization with a pairwise confusion loss as a regularizer. This architecture effectively mitigates model bias towards bonafide samples while maintaining discriminative power. Comprehensive evaluations across three benchmark datasets validate the framework's superior performance. Notably, our PNSS achieves an 8%–18% reduction in average classification error rate (ACER) compared with state-of-the-art one-class fPAD methods in cross-dataset evaluations on the Idiap Replay-Attack and MSU-MFSD datasets.
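A minimal sketch of the dual-loss idea: focal loss for the imbalanced bonafide-vs-synthetic-negative classification, plus a pairwise confusion regularizer that pulls the predictions of in-batch pairs together. The pairing scheme and the weight μ are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Down-weight easy examples: (1 - p_t)^gamma scaling of cross-entropy."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                      # probability of the true class
    return ((1 - pt) ** gamma * ce).mean()

def pairwise_confusion(probs):
    """Penalize divergence between prediction vectors of in-batch pairs."""
    half = probs.size(0) // 2
    return (probs[:half] - probs[half:2 * half]).pow(2).sum(dim=1).mean()

def dual_loss(logits, targets, mu=0.1):
    return focal_loss(logits, targets) + mu * pairwise_confusion(
        torch.softmax(logits, dim=1))
```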
Abstract: Policy training against diverse opponents remains a challenge when using Multi-Agent Reinforcement Learning (MARL) in multiple Unmanned Combat Aerial Vehicle (UCAV) air combat scenarios. In view of this, this paper proposes a novel Dominant and Non-dominant strategy sample selection (DoNot) mechanism and a Local Observation Enhanced Multi-Agent Proximal Policy Optimization (LOE-MAPPO) algorithm to train multi-UCAV air combat policies and improve their generalization. Specifically, the LOE-MAPPO algorithm adopts a mixed state that concatenates the global state with each agent's local observation to enable efficient value function learning in multi-UCAV air combat. The DoNot mechanism classifies opponents as dominant-strategy or non-dominant-strategy opponents and samples from easier to more challenging opponents to form an adaptive training curriculum. Empirical results demonstrate that the proposed LOE-MAPPO algorithm outperforms baseline MARL algorithms in multi-UCAV air combat scenarios, and the DoNot mechanism yields stronger policy generalization against diverse opponents. These results pave the way for the fast generation of cooperative strategies for air combat agents with MARL algorithms.
Abstract: In the era of big data, traditional statistical inference methods face great challenges. Taking the two-sample distribution test for big data as an example, this paper proposes the BB-KS test, based on the m-out-of-n bootstrap, to address single-machine memory and computation constraints. The feasibility and effectiveness of the proposed test are verified through theoretical analysis and numerical simulation. The results show that the BB-KS test can improve the computational efficiency of the test to a certain extent in the single-machine scenario.
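A hedged sketch of an m-out-of-n bootstrap two-sample Kolmogorov-Smirnov test in the spirit of BB-KS: the KS statistic is computed on subsamples of size m ≪ n and calibrated against a bootstrap null built from the pooled data, so no single pass needs the full samples in memory. The subsample rule m = n^0.7 and the replication count are illustrative.

```python
import numpy as np
from scipy import stats

def bb_ks_test(x, y, reps=500, seed=0):
    """Return (subsampled KS statistic, bootstrap p-value)."""
    rng = np.random.default_rng(seed)
    n = min(len(x), len(y))
    m = int(n ** 0.7)                     # m out of n: m -> inf, m/n -> 0
    obs = stats.ks_2samp(rng.choice(x, m, replace=False),
                         rng.choice(y, m, replace=False)).statistic
    pooled = np.concatenate([x, y])       # null: both samples share one law
    null = [stats.ks_2samp(rng.choice(pooled, m), rng.choice(pooled, m)).statistic
            for _ in range(reps)]
    return obs, float(np.mean(np.array(null) >= obs))
```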