October 26-27, 2016
Faculty of Economic and Business Sciences | Seminar 7
Regression models for the restricted residual mean life for right-censored and left-truncated data
2016/10/26 – 16:00 h | Thomas Scheike, Dept. Biostatistics, University of Copenhagen
Abstract
Hazard ratios from a Cox proportional hazards regression model are hard to interpret and difficult to translate into gains in survival time. Since the main goal is often to study survival functions, there is increasing interest in summary measures based on the survival function that are easier to interpret than the hazard ratio. The residual mean life is an important example of such measures. However, in the presence of right censoring the tail of the survival distribution is often difficult to estimate correctly. We therefore consider the restricted residual mean life, which corresponds to a partial area under the survival function up to a given time horizon τ and is interpreted as the residual life expectancy up to τ of a subject who has survived up to time t. This is joint work with Giuliana Cortese and Stine Holmboe.
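For orientation (not part of the abstract): with S the survival function of the lifetime T, the quantity described above is e(t | τ) = E[min(T, τ) − t | T > t] = ∫_t^τ S(u) du / S(t). The Python sketch below, assuming only right censoring and a plain Kaplan–Meier estimate (the talk additionally handles left truncation and covariates through regression), illustrates how it can be computed; the function names are illustrative.

import numpy as np

def kaplan_meier(time, event):
    # Kaplan-Meier estimate of S at the distinct observed event times
    # (right censoring only; left truncation is not handled in this sketch).
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    event_times = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return event_times, np.array(surv)

def restricted_mean_residual_life(event_times, surv, t, tau, n_grid=2000):
    # e(t | tau) = integral_t^tau S(u) du / S(t), with S the KM step function;
    # the integral is approximated on a fine grid.
    def S(u):
        idx = np.searchsorted(event_times, u, side="right") - 1
        return 1.0 if idx < 0 else float(surv[idx])
    grid = np.linspace(t, tau, n_grid)
    values = np.array([S(u) for u in grid])
    return np.trapz(values, grid) / S(t)

# Toy example: follow-up times with event indicator 1 = event, 0 = censored.
times = [2.0, 3.5, 4.0, 6.1, 7.3, 9.0]
status = [1, 0, 1, 1, 0, 1]
et, s = kaplan_meier(times, status)
print(restricted_mean_residual_life(et, s, t=2.0, tau=8.0))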
Causal interpretation in multistate models via a counting process representation
2016/10/26 – 16:45 h | Daniel Commenges, Bordeaux Population Health Research Center
Abstract
There is a duality between states and events. It follows that multistate models can be formulated as counting process models, often in a more concise way, which allows the likelihood to be derived rigorously. It also allows applying a definition of the influence of one process on another, which is the basis of what I call the stochastic system approach to causality. Death has a special meaning in this approach, in that all the other processes are defined only for living subjects; this should be stated clearly in the definition of the model. The case of treatments raises special problems for causal interpretation in observational studies. A study of the effect of institutionalization of elderly subjects on dementia and death will be briefly revisited.
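For orientation, a standard textbook sketch of the representation referred to here (not taken from the talk): if X(t) denotes the state occupied at time t, the multistate model can be encoded by transition counting processes and intensities

\[
N_{hj}(t) = \#\{ s \le t : X(s^-) = h,\; X(s) = j \}, \qquad
\lambda_{hj}(t) = \mathbf{1}\{X(t^-) = h\}\,\alpha_{hj}(t), \qquad h \neq j,
\]

and the likelihood then factorizes over transition types (Jacod's formula),

\[
\mathcal{L} \propto \prod_{h \neq j} \Bigg[ \prod_{s \le \tau} \lambda_{hj}(s)^{\Delta N_{hj}(s)} \Bigg]
\exp\!\Bigg( - \int_0^{\tau} \lambda_{hj}(s)\, \mathrm{d}s \Bigg),
\]

which is the concise counting-process formulation the abstract alludes to.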
Significance testing on nonparametric mixture cure models
2016/10/27 – 10:00 h | M. Amalia Jácome Pumar, Dept. Mathematics, University of A Coruña
Abstract
Cure models are special survival models that account for the possibility that some subjects will never experience the event of interest, so that their lifetime is considered infinite. In such situations, standard survival models are inappropriate. Specifically, cure models are used to estimate the probability of cure (incidence) and the survival function of the uncured population (latency). A completely nonparametric method for the estimation of mixture cure models was proposed by López-Cheda et al. (2016, 2017). In this work, a nonparametric covariate significance test on the incidence is proposed for selecting explanatory variables. It is based on the test by Delgado and González-Manteiga (2001), which does not require estimating the conditional expectation function given all the variables, but only given those that are significant under the null hypothesis. Its performance is evaluated in a Monte Carlo simulation study, in which the distribution of the test statistic is approximated by bootstrap. Finally, the proposed methods are applied to a colorectal cancer database from the University Hospital of A Coruña (CHUAC).
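As background (not part of the abstract), the mixture cure model referred to above can be written as

\[
S(t \mid x) = 1 - p(x) + p(x)\, S_0(t \mid x),
\]

where 1 − p(x) is the conditional probability of cure and S_0(· | x) is the latency, i.e. the conditional survival function of the uncured; the proposed test then asks whether a covariate genuinely enters the incidence component.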
Delgado, M.A. and González-Manteiga, W. (2001). Significance testing in nonparametric regression based on the bootstrap. The Annals of Statistics 29, 1469-1507.
López-Cheda, A., Cao, R., Jácome, M.A. and Van Keilegom, I. (2017). Nonparametric incidence estimation and bootstrap bandwidth selection in mixture cure models. Computational Statistics and Data Analysis 105, 144-165.
López-Cheda, A., Jácome, M.A. and Cao, R. (2016). Nonparametric latency estimation for mixture cure models. Paper in second revision.