For example, suppose you're a sociologist interested in studying sex ratios. A quick review of the literature will tell you that the differences in %girl births, comparing race of mother, age of mother, birth order, etc., are less than 1%. So if you want to study, say, the correlation between parental beauty and sex ratio, you're gonna be expecting very small effects, which you'll need very large sample sizes to find. Statistical significance has nothing to do with it: unless you have huge samples and good measurements, you can pretty much forget about it, whether your p-value is 0.02 or 0.05 or 0.01.
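To make the sample-size point concrete, here's a minimal sketch using the standard normal-approximation formula for comparing two proportions. The baseline rate (48.5% girls) and the half-percentage-point shift are illustrative assumptions, not figures from the post:

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group to detect p1 vs. p2 with a two-sided
    test, via the standard normal-approximation formula."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the test
    z_beta = norm.ppf(power)            # quantile for desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Assumed baseline: 48.5% girls; hypothesized shift of 0.5 percentage points.
print(n_per_group(0.485, 0.490))  # roughly 157,000 births per group
```

At 80% power you'd need on the order of 150,000 births in each group to detect a half-percentage-point difference, which is exactly why realistic sample sizes are the binding constraint here.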
I think that this is correct and ties into a larger narrative: there are a lot of small effects of great theoretical interest that are really hard to show in actual data due to the limitations of realistic sample sizes. Observational epidemiologists often try to handle this by looking at prescription claims databases (for example). But these studies create a new problem: a serious concern about unmeasured confounding, because important risk factors (e.g., smoking, alcohol use, diet, exercise) are simply not recorded in these data sources. Trading a lack of power for potential confounding bias is not much of a step forward.
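One standard way to quantify how fragile small estimates are under unmeasured confounding is the E-value of VanderWeele and Ding (2017); the sketch below uses an assumed risk ratio of 1.05 for illustration, not a figure from the post:

```python
import math

def e_value(rr):
    """E-value for a risk ratio point estimate: the minimum strength of
    association an unmeasured confounder would need with both exposure
    and outcome to fully explain away the observed estimate."""
    rr = max(rr, 1 / rr)  # the formula is symmetric for protective effects
    return rr + math.sqrt(rr * (rr - 1))

# An effect of the small size at issue here (assumed RR = 1.05):
print(round(e_value(1.05), 2))  # ~1.28
```

An E-value of about 1.28 means that even a fairly weak unmeasured confounder, something on the scale of modest differences in smoking or diet, could account for the entire observed effect.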
I think the real issue is simply that small effects are difficult to study, no matter how interesting they are. So I think that Andrew Gelman is right to call for these studies to be interpreted in the context of the larger science.