Abstract: With the rapid advancement of medical artificial intelligence (AI) technology, particularly the widespread adoption of AI diagnostic systems, ethical challenges in medical decision-making have garnered increasing attention. This paper analyzes the limitations of algorithmic ethics in medical decision-making and explores accountability mechanisms, aiming to provide theoretical support for ethically informed medical practices. The study highlights how the opacity of AI algorithms complicates the attribution of decision-making responsibility, undermines doctor-patient trust, and affects informed consent. By thoroughly investigating issues such as the algorithmic “black box” problem and data privacy protection, we develop accountability assessment models to address ethical concerns related to medical resource allocation. Furthermore, this research examines the effective implementation of AI diagnostic systems through case studies of both successful and unsuccessful applications, extracting lessons on accountability mechanisms and response strategies. Finally, we emphasize that establishing a transparent accountability framework is crucial for enhancing the ethical standards of medical AI systems and protecting patients’ rights and interests.
Abstract: Considering the problems of existing Thevenin equivalent algorithms, a tracing algorithm for the Thevenin equivalent that is applicable to power systems under large disturbances is presented. First, the voltage amplitude of the Thevenin equivalent potential at the moment of the fault is calculated from the pre-fault parameters. Then the pre-fault resistance and the potential voltage amplitude at the moment of the fault are used to calculate the remaining parameters at that moment. The main steps of the algorithm are as follows: 1) The resistance and reactance of the Thevenin equivalent before the fault are used as the initial parameters.
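The identification step this abstract describes can be illustrated with a minimal sketch. This is not the paper's tracing algorithm; it is the textbook two-measurement identification, assuming the equivalent EMF E and impedance Z stay constant between two phasor samples, so that V = E - Z*I holds at both instants:

```python
import numpy as np

def thevenin_from_two_measurements(v1, i1, v2, i2):
    """Identify the Thevenin EMF E and impedance Z from two phasor
    measurements (V, I) at a bus, assuming E and Z are constant
    between the two samples:  V = E - Z*I  at both instants."""
    z = (v1 - v2) / (i2 - i1)   # eliminate E between the two equations
    e = v1 + z * i1
    return e, z

# Synthetic check: recover known parameters from generated measurements.
E_true, Z_true = 1.05 * np.exp(1j * 0.1), 0.02 + 0.3j
i1, i2 = 1.0 + 0.2j, 1.4 + 0.1j
v1, v2 = E_true - Z_true * i1, E_true - Z_true * i2
e, z = thevenin_from_two_measurements(v1, i1, v2, i2)
print(abs(e - E_true) < 1e-9, abs(z - Z_true) < 1e-9)
```

The division by (i2 - i1) is why two samples with distinct operating points are needed; the paper's tracing formulation exists precisely because this naive scheme degrades under large disturbances.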
Abstract: The trace norm of matrices plays an important role in quantum information and quantum computing. How to quantify it on today’s noisy intermediate-scale quantum (NISQ) devices is a crucial task for information processing. In this paper, we present three variational quantum algorithms for NISQ devices to estimate the trace norms corresponding to different situations. Compared with previous methods, our approaches greatly reduce the required quantum resources. Numerical experiments are provided to illustrate the effectiveness of our algorithms.
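As a classical point of reference for the quantity being estimated: the trace (nuclear) norm is the sum of a matrix's singular values, which on small instances can be computed directly with an SVD. This is a classical baseline, not the variational NISQ procedure of the paper:

```python
import numpy as np

def trace_norm(a):
    """Trace (nuclear) norm of `a`: the sum of its singular values."""
    return np.linalg.svd(a, compute_uv=False).sum()

# For a Hermitian matrix the trace norm equals the sum of |eigenvalues|.
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
print(np.isclose(trace_norm(rho), np.abs(np.linalg.eigvalsh(rho)).sum()))
```

The classical SVD costs O(n^3) in the matrix dimension, which is exactly what motivates quantum estimation schemes for large matrices.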
Abstract: Parametric curves such as Bézier and B-splines, originally developed for the design of automobile bodies, are now also used in image processing and computer vision. For example, reconstructing an object shape in an image, including different translations, scales, and orientations, can be performed using these parametric curves. For this, Bézier and B-spline curves can be generated using a point set that belongs to the outer boundary of the object. The resulting object shape can be used in computer vision fields, such as searching and segmentation methods and training machine learning algorithms. The prerequisite for reconstructing the shape with parametric curves is to obtain the points in the point set sequentially. In this study, a novel algorithm has been developed that sequentially obtains the pixel locations constituting the outer boundary of the object. The proposed algorithm, unlike the methods in the literature, is implemented using a filter containing weights and an outer circle surrounding the object. In a binary image, the starting point of the tracing is determined using the outer circle, and the next tracing movement and the pixel to be labeled as a boundary point are found by the filter weights. Then, control points that define the curve shape are selected by reducing the number of sequential points. Thus, the Bézier and B-spline curve equations describing the shape are obtained using these points. In addition, different translations, scales, and rotations of the object shape are easily produced by changing the positions of the control points. It has also been shown that a missing part of the object can be completed thanks to the parametric curves.
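Once the ordered control points have been selected, evaluating the Bézier curve they define is straightforward. A minimal sketch using De Casteljau's algorithm, in its standard textbook form (this is the curve-evaluation step only, not the paper's boundary-tracing filter):

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bézier curve at parameter t in [0, 1] by
    repeated linear interpolation of the control points."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# A Bézier curve interpolates its first and last control points,
# so tracing t from 0 to 1 sweeps out the reconstructed shape segment.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]
print(de_casteljau(ctrl, 0.0), de_casteljau(ctrl, 1.0))
```

Translating, scaling, or rotating the control points transforms the whole curve identically, which is the property the abstract exploits for shape manipulation.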
Abstract: Automated Program Repair (APR) techniques have shown significant potential in mitigating the cost and complexity associated with debugging by automatically generating corrective patches for software defects. Despite considerable progress in APR methodologies, existing approaches frequently lack contextual awareness of the runtime behaviors and structural intricacies inherent in buggy source code. In this paper, we propose a novel APR approach that integrates attention mechanisms within an autoencoder-based framework, explicitly utilizing structural code affinity and execution context correlation derived from stack trace analysis. Our approach begins with an innovative preprocessing pipeline in which code segments and stack traces are transformed into tokenized representations. Subsequently, the BM25 ranking algorithm is employed to quantitatively measure structural code affinity and execution context correlation, identifying syntactically and semantically analogous buggy code snippets and relevant runtime error contexts from extensive repositories. These extracted features are then encoded via an attention-enhanced autoencoder model, specifically designed to capture the significant patterns and correlations essential for effective patch generation. To assess the efficacy and generalizability of the proposed method, we conducted rigorous experimental comparisons against DeepFix, a state-of-the-art APR system, using a substantial dataset comprising 53,478 student-developed C programs. Experimental outcomes indicate that our model achieves a notable bug repair success rate of approximately 62.36%, a statistically significant improvement of over 6% compared to the baseline. Furthermore, thorough K-fold cross-validation confirmed the consistency, robustness, and reliability of our method across diverse subsets of the dataset. Our findings demonstrate the critical advantage of integrating attention-based learning with code structural and execution context features in APR tasks, leading to improved accuracy and practical applicability. Future work aims to extend the model’s applicability across different programming languages, systematically optimize hyperparameters, and explore alternative feature representation methods to further enhance debugging efficiency and effectiveness.
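The BM25 retrieval step described above can be sketched generically. The scorer below is the standard Okapi BM25 formula over tokenized documents; the token lists, and the default k1/b values, are conventional illustrative choices, not the paper's actual settings or corpus:

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs, k1=1.5, b=0.75):
    """Score each tokenized document in `docs` against the query with BM25."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for q in query_tokens:
            if tf[q] == 0:
                continue
            idf = math.log(1 + (n - df[q] + 0.5) / (df[q] + 0.5))
            norm = tf[q] * (k1 + 1) / (tf[q] + k1 * (1 - b + b * len(d) / avgdl))
            score += idf * norm
        scores.append(score)
    return scores

# Hypothetical token lists standing in for tokenized buggy code / stack traces.
docs = [["null", "pointer", "check"],
        ["index", "out", "of", "range"],
        ["null", "check"]]
print(bm25_scores(["null", "check"], docs))
```

The length normalization (the b term) is what lets BM25 prefer short, tightly matching snippets, which is useful when ranking candidate buggy-code analogues.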
Funding: We would like to acknowledge the financial support of the Ministry of Science and Technology of China (Grant No. 2021YFC2900300) and the National Natural Science Foundation of China (Grant Nos. 41772074 and 42172103).
Abstract: Due to combined influences such as ore-forming temperature and fluid and metal sources, sphalerite tends to incorporate diverse contents of trace elements during the formation of different types of lead-zinc (Pb-Zn) deposits. Therefore, trace elements in sphalerite have long been used to distinguish Pb-Zn deposit types. However, previous discriminant diagrams usually contain only two or three dimensions, which limits their ability to reveal the complicated interrelations between the trace elements of sphalerite and the types of Pb-Zn deposits. In this study, we aim to show that sphalerite trace elements can be used to classify Pb-Zn deposit types, and to extract, using machine learning algorithms, the key factors among those elements that discriminate deposit types. A dataset of nearly 3600 sphalerite spot analyses from 95 Pb-Zn deposits worldwide, determined by LA-ICP-MS, was compiled from peer-reviewed publications, containing 12 elements (Mn, Fe, Co, Cu, Ga, Ge, Ag, Cd, In, Sn, Sb, and Pb) from 5 deposit types: Sedimentary Exhalative (SEDEX), Mississippi Valley Type (MVT), Volcanic Massive Sulfide (VMS), skarn, and epithermal. Random Forests (RF) was applied to the data, and the results show that the trace elements of sphalerite can successfully discriminate the different types of Pb-Zn deposits except for VMS deposits, most of which are falsely classified as skarn or epithermal. To further discriminate VMS deposits, future studies could focus on enlarging the number of VMS deposits in the dataset and incorporating other geological factors alongside sphalerite trace elements when constructing the classification model. RF’s feature importance and permutation feature importance were adopted to evaluate the significance of each element for classification. In addition, a visualization tool, t-distributed stochastic neighbor embedding (t-SNE), was used to verify the results of both classification and evaluation. The results show that Mn, Co, and Ge have significant impacts on the classification of Pb-Zn deposits, and that In, Ga, Sn, Cd, and Fe also have relatively important effects compared with the remaining elements, confirming that Pb-Zn deposit discrimination is controlled mainly by multiple elements in sphalerite. Our study hence shows that machine learning algorithms can provide new insights into conventional geochemical analyses, inspiring future research on constructing classification models of mineral deposits using mineral geochemistry data.
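The Random Forests workflow the study describes can be sketched with scikit-learn. The data below is a synthetic placeholder in which the class labels depend only on the Mn and Ge columns; the real inputs would be the compiled LA-ICP-MS spot analyses, and the element list is taken from the abstract:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

elements = ["Mn", "Fe", "Co", "Cu", "Ga", "Ge", "Ag", "Cd", "In", "Sn", "Sb", "Pb"]
rng = np.random.default_rng(0)

# Synthetic placeholder: two fake deposit "types" whose labels are driven
# only by the Mn and Ge concentrations.
X = rng.lognormal(size=(200, len(elements)))
y = (X[:, elements.index("Mn")] + X[:, elements.index("Ge")] > 2.5).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranked = sorted(zip(elements, clf.feature_importances_), key=lambda t: -t[1])
print([name for name, _ in ranked[:3]])
```

With real data the same `feature_importances_` ranking (ideally cross-checked with permutation importance, as the study does) indicates which elements carry the discriminating signal.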
Abstract: Amplitude-integrated EEG (aEEG) is a popular method for monitoring cerebral function. Although various commercial aEEG recorders have been produced, a detailed aEEG algorithm is not currently available. The upper and lower margins of the aEEG tracing are the discriminating features for data inspection and tracing classification. However, most aEEG devices require that these margins be measured semi-subjectively. This paper proposes a step-by-step signal-processing method to calculate a compact aEEG tracing and its upper/lower margins from raw EEG data. The high accuracy of the algorithm was verified by comparison with a recognized commercial aEEG device on a representative testing dataset of 72 aEEG recordings. The introduced digital algorithm achieved a compact aEEG tracing with a small data size. Moreover, the algorithm precisely represented the upper and lower margins of the tracing for objective data interpretation. The described method should facilitate aEEG signal processing and further the clinical and experimental application of aEEG methods.
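A heavily simplified sketch of the upper/lower-margin idea: rectify the signal, segment it into fixed epochs, and take each epoch's peak and trough. This deliberately omits the band-pass filtering and semi-logarithmic amplitude compression that a real aEEG pipeline (and the paper's algorithm) would apply; the epoch length and test signal are illustrative choices only:

```python
import numpy as np

def aeeg_margins(eeg, fs, epoch_s=15):
    """Simplified aEEG-style envelope: rectify the signal, split it into
    fixed-length epochs, and report the peak (upper margin) and trough
    (lower margin) of the rectified amplitude in each epoch."""
    rect = np.abs(eeg)
    n = int(fs * epoch_s)
    epochs = rect[: len(rect) // n * n].reshape(-1, n)
    return epochs.max(axis=1), epochs.min(axis=1)

# 60 s of a 5 Hz tone with a slow amplitude modulation standing in for EEG.
fs = 100
t = np.arange(0, 60, 1 / fs)
sig = np.sin(2 * np.pi * 5 * t) * (1 + 0.5 * np.sin(2 * np.pi * t / 30))
upper, lower = aeeg_margins(sig, fs)
print(len(upper))
```

The compactness the abstract mentions comes from exactly this reduction: each 15-second epoch collapses to two numbers, so the stored tracing is orders of magnitude smaller than the raw EEG.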
Abstract: Making use of the theory of continuous homotopy and the relation between symmetric polynomials and polynomials in one variable, the authors devote this article to constructing a regularly homotopic curve with probability one. Discrete tracing along this homotopic curve leads to a class of Durand-Kerner algorithms with step parameters. The convergence of this class of algorithms is given, which settles the conjecture about the global property of the Durand-Kerner algorithm. The problem of step-length selection is discussed thoroughly. Finally, sufficient numerical examples are used to verify the theory.
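The underlying Durand-Kerner iteration, which the article generalizes with step parameters, can be sketched in its standard fixed-step form (the homotopy-derived step-length selection studied in the article is not reproduced here):

```python
import numpy as np

def durand_kerner(coeffs, iters=100):
    """Simultaneously approximate all roots of a monic polynomial whose
    coefficients `coeffs` are given highest-degree first."""
    n = len(coeffs) - 1
    # Conventional non-trivial starting values spread in the complex plane.
    roots = (0.4 + 0.9j) ** np.arange(n)
    for _ in range(iters):
        for i in range(n):
            p = np.polyval(coeffs, roots[i])
            q = np.prod([roots[i] - roots[j] for j in range(n) if j != i])
            roots[i] -= p / q   # Weierstrass correction for the i-th root
    return roots

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
roots = durand_kerner([1, -6, 11, -6])
print(sorted(round(r.real) for r in roots))
```

Each update divides the polynomial value by the product of differences to the other iterates, which is what lets all roots be refined simultaneously; the article's contribution concerns choosing the step along the homotopy path so that this process converges globally.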
Abstract: The recursive least-squares (RLS) algorithm has been extensively used in adaptive identification, prediction, filtering, and many other fields. This paper proposes adding a second-difference term to the standard recursive formula to create a novel method with improved tracking capability. Test results show that this can greatly improve the convergence of the RLS algorithm.
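For context, the standard RLS recursion that the paper modifies looks as follows; the proposed second-difference term is not included, and the forgetting factor and initialization are conventional choices:

```python
import numpy as np

def rls_identify(phi, d, lam=0.99, delta=100.0):
    """Standard RLS: estimate weights w so that phi[k] @ w tracks d[k].
    `lam` is the forgetting factor, `delta` scales the initial covariance."""
    n = phi.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)
    for x, y in zip(phi, d):
        k = P @ x / (lam + x @ P @ x)       # gain vector
        w = w + k * (y - x @ w)             # update with the a priori error
        P = (P - np.outer(k, x @ P)) / lam  # covariance (inverse-correlation) update
    return w

# Identify a known 2-tap system from noisy regressor/response pairs.
rng = np.random.default_rng(1)
w_true = np.array([0.5, -0.3])
phi = rng.normal(size=(500, 2))
d = phi @ w_true + 0.01 * rng.normal(size=500)
print(np.round(rls_identify(phi, d), 2))
```

A forgetting factor below 1 discounts old samples and is the standard knob for tracking time-varying systems; the paper's second-difference term targets the same tracking limitation by a different route.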
Funding: This work was supported by the Large-Scale Security SoC Project of the Wuhan Science and Technology Bureau of China under Grant No. 20061005119.
Abstract: A traceability scheme is a broadcast encryption technique by which content suppliers can trace malicious authorized users who leak the decryption key to unauthorized users. To protect the data from eavesdropping, the content supplier encrypts the data and broadcasts the ciphertext, which only its subscribers can decrypt. However, a traitor may clone his decoder and sell the pirate decoders for profit. The traitor can modify the private key and the decryption program inside the pirate decoder to avoid divulging his identity. Furthermore, several traitors may together fabricate a new legal private key that cannot be traced to its creators. In this paper, a renewed scheme is therefore proposed that achieves both revocation at a different level of capacity in each distribution and black-box tracing against self-protective pirate decoders. A rigorous mathematical deduction shows that our algorithm possesses the claimed security properties.