Survival analysis is an interesting problem in machine learning, but it doesn’t get nearly as much attention as the usual classification and regression tasks, so there aren’t as many tools for it. Here I describe a nifty reduction that allows us to bring more traditional machine-learning tools to bear on the problem. Combined with the view of decision trees as greedy piecewise-constant loss-minimizing classifiers, it enables a number of powerful and flexible algorithms for large-scale discrete survival analysis.
Setup: discrete survival analysis
Discrete survival analysis is motivated by problems like disease prognosis. Suppose we’re trying to predict survival times for a rare disease. We enroll a bunch of subjects who were just diagnosed, and we check up on them monthly until they die. Then we try to predict the time until death, \(Y\), from data about the patient, \(X\).
We could use standard regression algorithms, but there are a couple wrinkles:

We only know the survival times to the nearest month, because that’s how often we check up on the subjects, so \(Y\) isn’t actually continuous. This is where the “discrete” part of the name comes from. (We could probably get the exact death date if we tried, but that’s more trouble than it’s worth. Plus, continuous survival times would make the analysis harder later, as we’ll see.)

\(Y\) is known to be greater than zero. Most regression algorithms don’t have a natural way to take this into account.

Most importantly, we won’t be able to observe a death for everyone. Some people will be lost to follow-up. Some people will be cured. Even some of those who do eventually die will outlast the study period. This leads to a problem called right-censored data, where for certain patients we know only that, say, \(Y > 60\) (months), but we don’t know the exact value of \(Y\).
We could get around the last problem by discarding every censored data point, but this would introduce severe selection bias. Plus, it would throw away a lot of information about healthy people. So instead we’ll do something more clever.
The hazard function
If we were doing ordinary regression, where we had observed the \(Y\) for each \(X\), we would try to learn a function \(f(X) = E(Y \mid X)\) (or sometimes some other property of the conditional distribution of \(Y\), like the median). But we don’t have enough info about \(Y\) for that to work.
So instead of thinking in terms of \(E(Y \mid X)\), we’ll think about something more granular called the hazard function. The hazard function is $$h(X, t) = P(Y = t \mid Y \ge t, X),$$ the probability that, if someone has survived up until month \(t\), they will die in that month.
In fact, learning \(h\) will tell us more about the disease than learning \(f\), because it will give us not only the expected survival time but the entire distribution of survival times! You can verify for yourself (it’s not hard) that $$P(Y = t \mid X) = h(X, t)\prod_{i=1}^{t-1} (1 - h(X, i)),$$ so we can recover the entire probability mass function.
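To see the recovery in action, here’s a minimal Python sketch; the constant hazard of 0.2 per month is a made-up example, under which the recovered pmf is just a geometric distribution:

```python
def pmf_from_hazard(h, max_t):
    # P(Y = t) = h(t) * prod_{i=1}^{t-1} (1 - h(i)),
    # computed by accumulating the survival probability as we go.
    probs = []
    surv = 1.0  # P(Y >= t): probability of surviving months 1..t-1
    for t in range(1, max_t + 1):
        probs.append(surv * h(t))
        surv *= 1 - h(t)
    return probs

# A made-up constant hazard of 0.2 per month yields a geometric pmf:
probs = pmf_from_hazard(lambda t: 0.2, 5)
# probs ≈ [0.2, 0.16, 0.128, 0.1024, 0.08192]
```

The total mass over the first five months is \(1 - 0.8^5\), as you’d expect for a geometric distribution.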
A loss function
So we want to learn some approximation \(\hat h(X, t) \approx h(X, t)\). But how do we decide what a good approximation is? The obvious thing to try is maximizing the expected log-likelihood.
There’s one more wrinkle: we don’t actually see \(Y\). Instead we see two related variables: how long each subject was in the study (\(T\)), and whether they exited by dying or by being censored (\(D\)). (\(D\) is a Boolean variable: \(D=1\) if \(T=Y\), and \(D=0\) if \(T < Y\).)
We could try to evaluate the log-likelihood of the exact data we see, that is, \(P(T, D \mid X)\). But that would require us to build a model not only of the time of death but also of the time of censorship (because we’d need the likelihood of getting censored at time \(T\)). Usually the censorship distribution is a property of the data-collection mechanism, not the underlying reality; for instance, in clinical studies it’s a property of the follow-up time. So a model of it probably won’t generalize well to other settings.
For that reason, instead of evaluating the likelihood of the entire data point \((T, D)\), we pretend that censorship just exogenously removes some information about \(Y\). So we’ll split our likelihood into two pieces: \(P(Y=T)\) for uncensored data, and \(P(Y > T)\) for censored data. Given this, we can write down the log-likelihood of seeing a particular datum if \(\hat h\) were correct:
$$L(T, D \mid X, h = \hat h) = \log \left\{ \left[ \prod_{i=1}^{T-1}(1 - \hat h(X, i)) \right]\left[\hat h(X, T)^D (1 - \hat h(X, T))^{1-D}\right] \right\}$$
where the first factor inside the log is the probability that the patient survives until month \(T\), and the second is \(\hat h(X, T)\) if \(D=1\) (the probability under \(\hat h\) that the patient dies in month \(T\), given survival until then) or \(1 - \hat h(X, T)\) if \(D=0\) (the probability under \(\hat h\) that the patient survives month \(T\), i.e., they exited by censoring).
Now that we know what the log-likelihood of a particular datum looks like, we can use our dataset to calculate the empirical expected log-likelihood of \(\hat h\) by averaging the log-likelihood over all the data points. With some simplification, this becomes:
$$L(\hat h) = \frac1N \sum_{i=1}^N \left[ D_i \log \hat h(X_i, T_i) + (1 - D_i) \log (1 - \hat h(X_i, T_i)) + \sum_{t=1}^{T_i-1} \log (1 - \hat h(X_i, t)) \right]$$
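Transcribing that sum directly into code is a useful sanity check. Here’s a plain-Python sketch (the name `avg_log_likelihood` and the calling convention are mine, not from any library):

```python
import math

def avg_log_likelihood(h_hat, X, T, D):
    # h_hat(x, t) is the estimated hazard P(Y = t | Y >= t, X = x);
    # X, T, D are parallel lists of covariates, exit months, and
    # death indicators (D = 1 for a death, D = 0 for censoring).
    total = 0.0
    for x, t_exit, d in zip(X, T, D):
        # contribution of the exit month: death or censoring
        p = h_hat(x, t_exit)
        total += math.log(p) if d else math.log(1 - p)
        # contribution of surviving months 1 .. t_exit - 1
        for t in range(1, t_exit):
            total += math.log(1 - h_hat(x, t))
    return total / len(X)
```

For example, under a constant hazard guess of 0.5, a subject who dies in month 2 contributes \(\log 0.5\) for surviving month 1 plus \(\log 0.5\) for dying in month 2.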
The trick
Of course, finding the \(\hat h\) that maximizes this log-likelihood loss function is still a challenge. But we can make one key observation, which is that if you treat \(T\) as another covariate, \(\hat h\) and \(L(\hat h)\) have the exact same structure as the classifier function and loss, \(\hat f\) and \(L(\hat f)\), of a maximum-likelihood estimator for a classifier. In other words, this problem is just classification in disguise! Each data point in the hidden classification problem is the combination of an \(X_i\) in our original dataset plus some month \(t\), and the classification question is “did subject \(X_i\) die in month \(t\)?” We’ll call this new target variable \(D_{it}\), and rewrite our loss function in terms of it:
$$L(\hat h) = \frac1N \sum_{i=1}^N \sum_{t=1}^{T_i} \left[ D_{it} \log \hat h(X_i, t) + (1 - D_{it}) \log (1 - \hat h(X_i, t)) \right]$$
It’s now obvious that this is proportional to the classifier likelihood function I wrote down earlier. This means that we can transform our original data set into a new one, with one row for each month that each \(X_i\) is in the sample; train a standard maximumlikelihood classifier on this new dataset with \(D_{it}\) as the target; and extract a perfectly good survival model for our original dataset!
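Concretely, the expansion step can be sketched in a few lines of numpy (the function and variable names here are mine, not from any library):

```python
import numpy as np

def expand(X, T, D):
    # X: (n, p) covariate matrix; T: (n,) exit months; D: (n,) death
    # indicators. Returns one row per subject-month, with the month t
    # appended as an extra covariate column, and the label D_it.
    X, T, D = np.asarray(X), np.asarray(T), np.asarray(D)
    idx = np.repeat(np.arange(len(T)), T)                 # subject index of each row
    t = np.concatenate([np.arange(1, m + 1) for m in T])  # month column
    X_new = np.column_stack([X[idx], t])
    y = np.where(t == T[idx], D[idx], 0)                  # 1 only at an uncensored exit
    return X_new, y
```

Training then reduces to fitting any likelihood-based classifier on `(X_new, y)`, with the month column treated as just another feature.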
The punchline
This reduction will work with any classifier-training algorithm that tries to maximize the likelihood of the data. Combined with my earlier reframing of decision trees as greedy piecewise-constant maximum-likelihood approximations, this means that decision trees will work with it, and by extension so will any algorithm that uses decision trees as a building block, like random forests or gradient boosting machines. But you could also use the same transformation with logistic regression or any other maximum-likelihood method.
Some caveats need to be applied, though. With bagging-based methods like random forests or subsampled gradient trees, the default behaviour will be to bag the transformed dataset, but you should make sure that each bag contains either every month’s row for a given \(X_i\) or no rows at all for that \(X_i\). With logistic regression, you should make sure to include interaction terms between \(t\) and the other covariates; otherwise the hazard function will be forced to have the same general shape for every data point.
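The first caveat amounts to bootstrapping subjects rather than transformed rows. A rough sketch of subject-level bagging (all names here are mine; in practice you’d construct the bags yourself and fit one model per bag):

```python
import random

def subject_bootstrap(subject_of_row, seed=0):
    # subject_of_row[j] = original subject index of transformed row j.
    # Sample subjects with replacement, then take every person-month
    # row of each sampled subject, so no subject is split across bags.
    rng = random.Random(seed)
    subjects = sorted(set(subject_of_row))
    rows = []
    for s in (rng.choice(subjects) for _ in subjects):
        rows.extend(j for j, subj in enumerate(subject_of_row) if subj == s)
    return rows
```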
Once those caveats are taken care of, though, this reduction is extremely helpful for developing survival models quickly. Survival analysis doesn’t get as much attention as standard classification or regression tasks, so it’s convenient that so many more frequently used models can essentially be reused wholesale.
Appendix: pseudocode for the reduction
```python
def train(X, T, D, fit_classifier):
    # X, T, D are the original dataset: covariates, exit months, and
    # death indicators. fit_classifier is any maximum-likelihood
    # classifier trainer, e.g. a decision tree learner.
    X_new = []
    D_new = []
    # the transformation: one row per month each subject is in the sample
    for i in range(len(X)):
        for t in range(1, T[i] + 1):
            D_new.append(D[i] if t == T[i] else 0)
            X_new.append((X[i], t))
    return fit_classifier(X_new, D_new)
```
```python
def pmf(h, x, max_t):
    # x is a single data point; h(x, t) is the trained model's estimate
    # of P(Y = t | Y >= t, X = x). Returns a list A where A[t - 1] is
    # P(Y = t | X = x) for t = 1..max_t, the last month h has data for.
    A = []
    p_so_far = 1.0  # running value of P(Y >= t | X = x)
    for t in range(1, max_t + 1):
        p_cur = h(x, t)
        A.append(p_so_far * p_cur)
        p_so_far *= 1 - p_cur
    return A
```