I am on the road giving a guest lecture at UBC today. One of the topics I was going to cover in today's discussion was propensity score calibration (by the ever brilliant Til Sturmer). But I wonder -- if you have a true random subset of the overall population, why not just use it directly? Or, if, as Til assumes, the sample is too small, why not use multiple imputation instead? Wouldn't that be an equivalent technique that is more flexible for things like subgroup analysis?
Or was the issue the complexity of imputation in data sets of the size Til worked with? It's certainly a point to ponder.
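For concreteness, here is a toy sketch of the multiple-imputation alternative I have in mind. This is my own simulation, not Stürmer's setup: the gold-standard confounder `u` is measured only in a random validation subsample, a noisy proxy `x` is measured for everyone, and we impute `u` outside the subsample (conditioning on treatment and outcome, as imputation models should) and pool the adjusted effect estimates across imputations. All variable names and model choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data: confounder u drives both treatment a and outcome y;
# the true treatment effect is 1.0
u = rng.normal(size=n)
a = (rng.random(n) < 1 / (1 + np.exp(-u))).astype(float)
y = 1.0 * a + 2.0 * u + rng.normal(size=n)
x = u + rng.normal(scale=0.5, size=n)   # proxy for u, observed for everyone
val = rng.random(n) < 0.2               # random 20% validation subsample has u

def ols(X, y):
    """Least-squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Imputation model for u, fit in the validation subsample only,
# conditioning on the proxy, treatment, and outcome
Xv = np.column_stack([np.ones(val.sum()), x[val], a[val], y[val]])
beta = ols(Xv, u[val])
resid_sd = np.std(u[val] - Xv @ beta)

M = 20                                  # number of imputations
Xall = np.column_stack([np.ones(n), x, a, y])
effects = []
for m in range(M):
    u_imp = u.copy()
    miss = ~val
    # Draw imputations with noise (a fuller MI would also draw beta
    # from its posterior rather than reusing the point estimate)
    u_imp[miss] = Xall[miss] @ beta + rng.normal(scale=resid_sd,
                                                 size=miss.sum())
    # Outcome regression adjusting for the imputed confounder
    Xo = np.column_stack([np.ones(n), a, u_imp])
    effects.append(ols(Xo, y)[1])

# Rubin's-rules point estimate: average across imputations
print(round(float(np.mean(effects)), 2))
```

The pooled estimate lands near the true effect of 1.0, and because the imputed `u` is available row by row, nothing stops you from re-running the outcome regression within subgroups, which is the flexibility argument above.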