Part of my daily routine is to check the weather in my neighborhood and various nearby destinations. (Between its size and, more to the point, its topography, LA has an extraordinary range of conditions. It's the only place I've ever lived where people call up other residents of the same city and ask, how's the weather where you are?)
Last Saturday, I checked Google and saw that the forecast for a week from that day was a 75% chance of rain. That would have been very good news (it's been dry in Southern California this winter), perhaps too good to be true. I checked a couple of competing sites and saw no indication of rain in the next seven days anywhere in the vicinity. A couple of hours later, I checked back and Google was now in line with all the other forecasts, predicting 5% or less.
As of Thursday, Google is down to 0% for Saturday while the Weather Channel has 18%.
We've talked a lot about what it means for continuously updated predictions, such as election outcomes, navigation app travel time estimates, and weather forecasts, to be accurate. It's a complicated question without an objectively true answer. There are many valid metrics, none of which gives us the definitive answer.
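To give a sense of what one of those metrics looks like, here's a minimal sketch (not from the post) of the Brier score, a standard way of scoring probabilistic forecasts once the outcomes are known: the mean squared difference between the stated probability and what actually happened. The numbers below are made up, loosely echoing the forecasts above.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes.

    Lower is better; 0.0 is a perfect forecast.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical rain probabilities and observed outcomes (1 = rain, 0 = no rain).
forecasts = [0.75, 0.05, 0.00, 0.18]
outcomes = [0, 0, 0, 0]  # suppose it never rained

print(brier_score(forecasts, outcomes))  # ~0.15; the 75% call is heavily penalized
```

Even this simple metric involves choices (how to weight big misses, which forecasts to pool), which is part of why no single number settles the question.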
Obviously, accuracy is the main objective, but there are other indicators of model quality we can and should keep an eye on. Barring big new data (a major shift in the polls, a recently reported accident on your route), we don't expect to see huge swings between updates, and if a number of competing models are largely running off the same data, we expect a certain amount of consistency among them. A model that is inaccurate, displays sudden swings, and makes forecasts wildly divergent from its competitors raises some questions.
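Those last two checks are easy to make concrete. Here's a rough sketch, with hypothetical thresholds I've picked purely for illustration, that flags a forecast for swinging sharply between updates or for sitting far from the consensus of competing models.

```python
# Thresholds are assumptions for this sketch, not values from any real system.
SWING_THRESHOLD = 0.30       # largest plausible jump between updates, absent big news
DIVERGENCE_THRESHOLD = 0.25  # largest plausible gap from the competitors' average

def check_forecast(previous, current, competitors):
    """Return warning flags for a probability forecast (all values in [0, 1])."""
    flags = []
    if abs(current - previous) > SWING_THRESHOLD:
        flags.append("sudden swing between updates")
    consensus = sum(competitors) / len(competitors)
    if abs(current - consensus) > DIVERGENCE_THRESHOLD:
        flags.append("wildly divergent from competitors")
    return flags

# Roughly the Saturday story above: 75% dropping to 5%, with competitors near zero.
print(check_forecast(previous=0.75, current=0.05, competitors=[0.05, 0.02, 0.00]))
# -> ['sudden swing between updates']
```

The original 75% number would have tripped the divergence check too; by the time Google fell in line, only the swing remained as evidence that something odd had happened.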