This paper explores the data theory of value along the line of reasoning "epochal characteristics of data - theoretical innovation - paradigmatic transformation". Through a comparison of hard and soft factors and observation of the peculiar features of data, it concludes that data have the epochal characteristics of non-competitiveness and non-exclusivity, decreasing marginal cost and increasing marginal return, non-physical and intangible form, and non-finiteness and non-scarcity. It is these epochal characteristics of data that undermine the traditional theory of value and innovate the "production-exchange" theory, covering data value generation, data value realization, data value rights determination, and data value pricing. From the perspective of data value generation, the levels of data quality, processing, use, and connectivity, data application scenarios, and data openness influence data value. From the perspective of data value realization, data, as independent factors of production, show a value creation effect, create a value multiplier effect by empowering other factors of production, and substitute for other factors of production to create a zero-price effect. From the perspective of data value rights determination, based on the theory of property, the tragedy of the private outweighs the comedy of the private with respect to data, and based on the theory of the sharing economy, the comedy of the commons outweighs the tragedy of the commons with respect to data. From the perspective of data pricing, standardized data products can be priced according to physical product attributes, and non-standardized data products can be priced according to virtual product attributes. Based on the epochal characteristics of data and theoretical innovation, the "production-exchange" paradigm has undergone a transformation from "using tangible factors to produce tangible products and exchanging tangible products for tangible products" to "using intangible factors to produce tangible products and exchanging intangible products for tangible products" and ultimately to "using intangible factors to produce intangible products and exchanging intangible products for intangible products".
Although existing legal norms and judicial practices can provide basic guidance for the right to personal data portability, empirical research on the privacy policies of 66 mobile apps shows that there are obstacles to the realization of this right, such as whether they have stipulations on the right to personal data portability, whether they can derive copies of personal information automatically, whether there are textual examples, whether ID verification is required, whether the copied documents are encrypted, and whether the scope of personal information involved is consistent. This gap in practice, on the one hand, reflects a misunderstanding of the right to personal data portability and, on the other hand, results from the negative externalities, practical costs, and technical limitations of the right. Based on a rethinking of the right to data portability, practical problems concerning it can be addressed through multiple measures, such as promoting the fulfillment of this right by legislation, optimizing technology-oriented operations, refining response process mechanisms, and enhancing system interoperability.
In the digital era, the free cross-border flow of data and the development of digital trade are complementary. Consequently, trade liberalization, with data privacy as an inherent demand, is closely linked to the right to data privacy, and data privacy protection is increasingly becoming a trade issue. However, conflicting rule settings between the two create discrepancies and result in differing rule-making approaches. The concept of the right to data privacy provides guidance and evaluative functions for the development of trade liberalization, facilitating the healthy development of digital trade. It is appropriate at this stage to treat the interaction between trade liberalization and data privacy protection in a rational way and to place them within independent systems. Data localization measures are an effective way to balance digital trade liberalization with the right to data privacy. As a data privacy protection measure, data localization has legitimacy within the trade law framework. Looking ahead, to achieve a harmonious advancement of digital trade liberalization and protection of the right to data privacy, all parties should uphold the premise of national regulatory autonomy and respect the data localization measures adopted by countries based on their own national conditions and personal data protection considerations.
That the world is a global village is no longer news, thanks to tremendous advancement in Information and Communication Technology (ICT). The metamorphosis of human data storage and analysis from analogue methods and the Jacquard loom through mainframe computers to the present high-powered processing computers with sextillion-byte storage capacity has prompted discussion of the Big Data concept as a tool for managing hitherto intractable challenges of complex human systems and their multiplier effects. Supply chain management (SCM), which deals with spatial service delivery that must be safe, efficient, reliable, cheap, transparent, and foreseeable to meet customers' needs, cannot but employ big data tools in its operation. This study employs secondary data collected online to review the importance of big data in supply chain management and the levels of adoption in Nigeria. The study revealed that the application of big data tools in SCM and other industrial sectors is synonymous with human and national development. It is therefore recommended that both private and governmental bodies embrace e-transactions for easy data assemblage and analysis for profitable forecasting and policy formation.
The right to the protection of personal data is an important human right in the era of big data and a constitutional right based on the national protection obligation and the theory of human dignity, making it of special significance for the realization of citizenship in a digital society. An examination of the constitutional texts of various countries shows that the right to the protection of personal data as a constitutional right has rich normative connotations, and the key legal link in realizing this right lies in the national legislature actively fulfilling its obligation to shape and specify the protection of personal data in accordance with the entrustment of the constitutional norms. Given the constitutional principles of fundamental rights protection, i.e., realizing the constitutional status of the right to the protection of personal data as a basic right by means of institutional guarantees, the legislature should first adhere to the constitutionality principle of data protection legislation. Second, a multi-level data protection legal system centered on the right to the protection of personal data should be established. Finally, the institutional guarantee mechanism for the protection of personal data should be continuously improved through constitutional interpretation.
The right to data portability is an essential part of personal data protection in the booming era of big data and is closely related to our work and lives, as it may play a crucial role in safeguarding the data subject's right to self-determination and fostering a favorable environment for players in a fair, competitive market. However, the implementation of the right to data portability in China is still in its infancy, fraught with complexities and uncertainties. This paper studies the right to data portability in China based on the Personal Information Protection Law and explores its development, current status, and potential impact. Moreover, it conducts a comparative analysis of the EU and US experience, mostly from the legislative perspective, to better understand practices around the world. In addition, this paper puts forward specific suggestions on implementing the right to data portability, in the hope that it can be fully guaranteed in real life.
This paper presents a hierarchical Bayesian approach to the estimation of components' reliability (survival) using a Weibull model for each component. The proposed method can be used for estimation with general survival censored data, because the estimation of a component's reliability in a series (parallel) system is equivalent to the estimation of its survival function with right- (left-) censored data. Besides the Weibull parametric model for reliability data, independent gamma distributions are considered at the first hierarchical level for the Weibull parameters, and independent uniform distributions over the real line are taken as priors for the parameters of the gammas. In order to evaluate the model, an example and a simulation study are discussed.
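The Bayesian Weibull setup in the abstract above can be illustrated in miniature. The sketch below is not the paper's full hierarchy: it uses a single prior level with fixed Gamma(1, 1) priors on the Weibull shape and scale (instead of uniform hyperpriors on the gamma parameters) and a plain random-walk Metropolis sampler, all of which are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate right-censored Weibull lifetimes (true shape k=2, scale lam=3).
n = 200
t = rng.weibull(2.0, n) * 3.0
c = rng.uniform(0, 6, n)          # independent censoring times
x = np.minimum(t, c)              # observed times
d = (t <= c).astype(float)        # 1 = event observed, 0 = right-censored

def log_post(k, lam, a=1.0, b=1.0):
    """Weibull log-likelihood for right-censored data plus Gamma(a, b) priors."""
    if k <= 0 or lam <= 0:
        return -np.inf
    z = (x / lam) ** k            # cumulative hazard at each observed time
    loglik = np.sum(d * (np.log(k / lam) + (k - 1) * np.log(x / lam)) - z)
    logprior = (a - 1) * np.log(k) - b * k + (a - 1) * np.log(lam) - b * lam
    return loglik + logprior

# Random-walk Metropolis on the log scale for (k, lam).
k, lam = 1.0, 1.0
lp = log_post(k, lam)
samples = []
for i in range(4000):
    k_new = k * np.exp(0.1 * rng.standard_normal())
    lam_new = lam * np.exp(0.1 * rng.standard_normal())
    lp_new = log_post(k_new, lam_new)
    # Jacobian correction for the multiplicative (log-scale) proposal.
    if np.log(rng.uniform()) < lp_new - lp + np.log(k_new * lam_new / (k * lam)):
        k, lam, lp = k_new, lam_new, lp_new
    if i >= 1000:                 # discard burn-in
        samples.append((k, lam))

post = np.array(samples)
k_hat, lam_hat = post.mean(axis=0)
```

With 200 observations the posterior means should land near the generating values (k about 2, lam about 3), showing how censored observations contribute only their survival term to the likelihood.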
The problem of data right confirmation is a long-term bottleneck in data sharing. Existing methods for confirming data rights lack credibility owing to poor supervision, and work only with specific data types because of their technical limitations. The emergence of blockchain has been followed by new data-sharing models that may provide improved data security. However, few of these models perform well enough in confirming data rights because data access cannot be fully placed under the control of the blockchain facility. In view of this, we propose a right-confirmable data-sharing model named RCDS that features symbol mapping coding (SMC) and blockchain. With SMC, each party encodes its digital identity into the byte sequence of the shared data by generating a unique symbol mapping table, whereby the declaration of data rights can be content-independent for any type and any volume of data. With blockchain, all data-sharing participants jointly supervise the delivery of and access to shared data, so that the granting of data rights can be openly verified. The evaluation results show that RCDS is effective and practical in data-sharing applications that are conscientious about data right confirmation.
We assume T1, ..., Tn are i.i.d. data sampled from a distribution function F with density function f, and C1, ..., Cn are i.i.d. data sampled from a distribution function G. The observed data consist of pairs (Xi, δi), i = 1, ..., n, where Xi = min{Ti, Ci}, δi = I(Ti ≤ Ci), and I(A) denotes the indicator function of the set A. Based on the right censored data {Xi, δi}, i = 1, ..., n, we consider the problem of estimating the level set {f ≥ c} of an unknown one-dimensional density function f and study the asymptotic behavior of plug-in level set estimators. Under some regularity conditions, we establish the asymptotic normality and the exact convergence rate of the λg-measure of the symmetric difference between the level set {f ≥ c} and its plug-in estimator {fn ≥ c}, where f is the density function of F and fn is a kernel-type density estimator of f. Simulation studies demonstrate that the proposed method is feasible. Illustration with a real data example is also provided.
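The plug-in idea described above (estimate f by a kernel density estimator fn, then report {fn ≥ c}) can be sketched directly. For brevity this toy uses a complete (uncensored) standard normal sample; the paper's setting would replace fn with a censored-data kernel estimator. The bandwidth rule and grid are assumptions, not taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)        # complete N(0, 1) sample for this toy

# Gaussian kernel density estimator fn evaluated on a grid.
grid = np.linspace(-4.0, 4.0, 801)
h = 1.06 * x.std() * x.size ** (-1 / 5)   # Silverman's rule of thumb
u = (grid[:, None] - x[None, :]) / h
fn = np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

# Plug-in level set {fn >= c} versus the true level set {f >= c}.
c = 0.2
est_set = fn >= c
true_set = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi) >= c

# Lebesgue measure of the symmetric difference, approximated on the grid.
dx = grid[1] - grid[0]
sym_diff = np.sum(est_set ^ true_set) * dx
```

For the standard normal density, {f ≥ 0.2} is an interval of length roughly 2.35, and with 2000 observations the symmetric-difference measure of the plug-in set is small, which is the quantity whose convergence rate the paper studies.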
It is of great interest to estimate quantile residual lifetime in medical science and many other fields. In survival analysis, the Kaplan-Meier (K-M) estimator has been widely used to estimate the survival distribution. However, it is well known that the K-M estimator is not continuous, so it cannot always be used to calculate quantile residual lifetime. In this paper, the authors propose a kernel smoothing method to estimate quantile residual lifetime. Using modern empirical process techniques, the consistency and asymptotic normality of the proposed estimator are established. The authors also present the empirical small-sample performance of the estimator. Deficiency is introduced to compare the performance of the proposed estimator with that of the naive unsmoothed estimator of the quantile residual lifetime. Further simulation studies indicate that the proposed estimator performs very well.
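The naive unsmoothed estimator that the authors use as a baseline can be sketched from scratch: build the K-M step function, then read off the q-quantile residual lifetime at a time point t0. The kernel-smoothed version in the paper replaces this step function with a smoothed curve; the code below shows only the baseline, with simulated data as an assumption for illustration.

```python
import numpy as np

def kaplan_meier(x, d):
    """Kaplan-Meier survival curve: x = observed times, d = event indicators."""
    order = np.argsort(x)
    x, d = x[order], d[order]
    times = np.unique(x[d == 1])          # distinct event times
    surv, s = [], 1.0
    for t in times:
        at_risk = np.sum(x >= t)
        events = np.sum((x == t) & (d == 1))
        s *= 1.0 - events / at_risk       # product-limit update
        surv.append(s)
    return times, np.array(surv)

def quantile_residual_life(times, surv, t0, q=0.5):
    """Naive (unsmoothed) q-quantile residual lifetime at t0 from a K-M curve."""
    s0 = surv[times <= t0][-1] if np.any(times <= t0) else 1.0
    idx = np.where((times > t0) & (surv <= (1 - q) * s0))[0]
    return times[idx[0]] - t0 if idx.size else np.inf

rng = np.random.default_rng(2)
n = 3000
t = rng.exponential(1.0, n)               # true lifetimes, Exp(1)
c = rng.exponential(2.0, n)               # independent censoring times
x, d = np.minimum(t, c), (t <= c).astype(int)
times, surv = kaplan_meier(x, d)

# By memorylessness of Exp(1), the median residual life is log(2) at any t0.
mrl = quantile_residual_life(times, surv, t0=0.5, q=0.5)
```

The discontinuity problem the abstract mentions is visible here: `quantile_residual_life` can only return jump locations of the step function, which is what motivates kernel smoothing.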
The maximum entropy method has been widely used in many fields, such as statistical mechanics and economics. Its crucial idea is that when we make inference based on partial information, we must use the distribution with maximum entropy subject to whatever is known. In this paper, we investigate the empirical entropy method for right censored data and use simulation to compare the empirical entropy method with the empirical likelihood method. Simulations indicate that the empirical entropy method gives better coverage probability than the empirical likelihood method for contaminated and censored lifetime data.
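The core maximum entropy idea stated above ("use the distribution with maximum entropy subject to whatever is known") has a compact illustration: on a finite support with a fixed mean, the maxent distribution is exponential in form, p_i proportional to exp(λ x_i), and the multiplier λ can be found by bisection. This is only the generic idea, not the paper's empirical entropy method for censored data; the support and target mean are assumptions.

```python
import numpy as np

def maxent_given_mean(support, target_mean, lo=-5.0, hi=5.0, iters=200):
    """Max-entropy pmf on `support` with a fixed mean: p_i ∝ exp(lam * x_i)."""
    support = np.asarray(support, float)

    def mean(lam):
        w = np.exp(lam * support)
        return (support * w).sum() / w.sum()

    # mean(lam) is strictly increasing in lam, so bisect on the multiplier.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    w = np.exp(lo * support)
    return w / w.sum()

# Max-entropy pmf on {0, 1, ..., 10} constrained to have mean 3.
p = maxent_given_mean(np.arange(11), 3.0)
```

Because the constrained mean (3) is below the uniform mean (5), the solved multiplier is negative and the pmf decays geometrically across the support, which is the hallmark of a maxent solution under a mean constraint.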
Increasingly, algorithms challenge legal regulation, as well as the right to explanation, personal privacy and freedom, and individuals' equal protection. As decision-making mechanisms for human-machine interaction, algorithms are not value-neutral and should be legally regulated. Algorithm disclosure, personal data empowerment, and anti-algorithmic discrimination are traditional regulatory methods relating to algorithms, but applying them mechanically presents difficulties in feasibility and desirability. Algorithm disclosure faces difficulties such as technical infeasibility, meaningless disclosure, user gaming, and intellectual property infringement. Personal data empowerment faces difficulties such as individuals' difficulty in exercising data rights and excessive personal data empowerment, which makes it difficult for big data and algorithms to operate effectively. Anti-algorithmic discrimination faces difficulties such as non-machine algorithmic discrimination, the impossibility of status neutrality, and the difficult realization of social equality. Taking the scenarios in which algorithms operate lightly is the root cause of the dilemma facing traditional algorithm regulation paths. Algorithms may differ in attributes depending on the specific algorithmic subjects, objects, and domains involved. Therefore, algorithm regulation should be developed and employed on a case-by-case basis toward the development of accountable algorithms. Following these principles, specific rules can be enacted to regulate algorithm disclosure, data empowerment, and anti-algorithmic discrimination.
In clinical studies, medical treatments often take a period of time before having an effect on patients, and the delayed time may vary from person to person. Even though there exists a rich literature developing methods to estimate the time-lag period and the treatment effect after the lag time, most existing studies assume a fixed lag time. In this paper, we propose a hazard model incorporating a random treatment time-lag effect to describe the heterogeneous treatment effect among subjects. The EM algorithm is used to obtain the maximum likelihood estimator. We give the asymptotic properties of the proposed estimator and evaluate its performance via simulation studies. An application of the proposed method to real data is provided.
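The paper embeds the EM algorithm in a hazard model with a random lag, which is too involved to reproduce here. As a stand-in, the sketch below shows the generic E-step/M-step mechanics on a deliberately simple latent-variable problem, a two-component exponential mixture; the model, data, and initial values are all assumptions for illustration and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: mixture of Exp(rate=5) (weight 0.4) and Exp(rate=1) (weight 0.6).
n = 2000
z = rng.uniform(size=n) < 0.4
x = np.where(z, rng.exponential(1 / 5.0, n), rng.exponential(1.0, n))

# EM for a two-component exponential mixture.
pi, r1, r2 = 0.5, 2.0, 0.5            # initial guesses
for _ in range(300):
    # E-step: posterior responsibility of component 1 for each observation.
    a = pi * r1 * np.exp(-r1 * x)
    b = (1 - pi) * r2 * np.exp(-r2 * x)
    w = a / (a + b)
    # M-step: responsibility-weighted maximum likelihood updates.
    pi = w.mean()
    r1 = w.sum() / (w * x).sum()
    r2 = (1 - w).sum() / ((1 - w) * x).sum()
```

The same alternation (impute the unobserved quantity in expectation, then maximize the completed likelihood) is what the paper applies with the random lag time playing the role of the latent variable.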
The random weighting method is an emerging computing method in statistics. In this paper, we propose a novel estimation of the survival function for right censored data based on the random weighting method. Under some regularity conditions, we prove the strong consistency of this estimation.
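The random weighting device itself is easy to demonstrate: replace the equal 1/n weights of an empirical estimator with Dirichlet(1, ..., 1) random weights and repeat. The paper applies this to a survival estimator for right-censored data; the sketch below, as an assumed simplification, shows the weighting scheme on a complete (uncensored) sample.

```python
import numpy as np

rng = np.random.default_rng(4)
n, B = 500, 2000
x = rng.exponential(1.0, n)          # complete lifetimes for this toy sketch
t0 = 1.0

# Dirichlet(1, ..., 1) random weights, generated as normalized Exp(1) draws.
e = rng.exponential(1.0, (B, n))
w = e / e.sum(axis=1, keepdims=True)

# B randomly weighted survival estimates at t0, versus the plain empirical one.
reps = (w * (x > t0)).sum(axis=1)
emp = np.mean(x > t0)
```

Since each weight has expectation 1/n, the random-weighting replicates average out to the empirical survival estimate, and their spread provides a resampling-based measure of its uncertainty.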
Funding (data theory of value paper): funded by the "Management Model Innovation of Chinese Enterprises" Research Project, Institute of Industrial Economics, CASS (Grant No. 2019-gjs-06), and a project under the Graduate Student Scientific and Research Innovation Support Program, University of Chinese Academy of Social Sciences (Graduate School) (Grant No. 2022-KY-118).
Funding (personal data portability paper): a current result of "Research on the Basic Category System of Contemporary Chinese Digital Law" (23&ZD154), a major project of the National Social Science Fund of China.
Funding (data privacy and trade liberalization paper): a phased outcome of the project "Research on China's Rule of Law Path for Maintaining the Security and Stability of Global Supply Chain" (Approval Number 2024M751358), funded by the 75th general grant of the China Postdoctoral Science Foundation.
Funding (personal data protection as a constitutional right paper): the provincial key academic project "Research of the Grassroots Negotiation and Governance Modernization Viewing from the Angle of State Governance" (2019-GDXK-0005).
Funding (RCDS data-sharing paper): supported by the Natural Science Foundation of Hebei Province, China (No. F2023201032) and the S&T Program of Hebei Province, China (No. 20310105D).
Funding (density level set estimation paper): supported by the National Natural Science Foundation of China (Grant Nos. 11071137 and 11371215) and the Tsinghua Yue-Yuen Medical Science Fund.
Funding (quantile residual lifetime paper): supported by the National Natural Science Foundation of China under Grant No. 71271128, the State Key Program of the National Natural Science Foundation of China under Grant No. 71331006, NCMIS, the Key Laboratory of RCSDS, CAS, IRTSHUFE, PCSIRT (IRT13077), and the Graduate Innovation Fund of Shanghai University of Finance and Economics under Grant No. CXJJ-2011-429.
Funding (empirical entropy paper): supported by the National Natural Science Foundation of China (Nos. 11171230 and 11231010).
Funding (treatment time-lag paper): supported by the National Natural Science Foundation of China (Grant No. 11971362).