When you reject a data point as an outlier, you’re saying that the point is unlikely to occur again, despite the fact that you’ve already seen it. This puts you in the curious position of believing that some values you have not seen are more likely than a value you have in fact seen.

This is especially problematic in the case of rare but important outcomes, and it can be very hard to decide what to do in these cases. Imagine a randomized controlled trial for the effectiveness of a new medication for a rare condition (maybe something like memory impairment in older adults). One of the treated participants experiences sudden cardiac death, whereas nobody in the placebo group does.
On the one hand, if the sudden cardiac death had occurred in the placebo group, we would be extremely reluctant to advance this as evidence that the medication in question prevents death. On the other hand, rare but serious adverse drug events do exist and can do a great deal of damage. The true but trivial answer is "get more data points"; obviously, if that is a feasible option, it should be pursued.
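To see how little the data alone settle this, here is a minimal sketch of what a standard frequentist comparison says about one event versus zero. The arm sizes (50 participants each) are hypothetical numbers I am inventing for illustration, not from any real trial:

```python
# Hypothetical counts: 1 sudden cardiac death among 50 treated participants,
# 0 among 50 placebo participants.
from scipy.stats import fisher_exact

table = [[1, 49],   # treated: 1 event, 49 without
         [0, 50]]   # placebo: 0 events, 50 without

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher's exact test p-value: {p_value:.3f}")
# ~1.0: with the margins fixed, the single event is equally likely to land in
# either arm, so the test sees essentially nothing, however alarming the event is.
```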
But these questions get really tricky when there is simply a dearth of data. Under those circumstances, I do not think any statistical approach (frequentist, Bayesian, or otherwise) will give consistently useful answers, because we don't know whether the outlier is a mistake (a recording error, for example) or the most important feature of the data.
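A quick Bayesian sketch makes the same point from the other direction. Using the same hypothetical 50-per-arm counts and a uniform Beta(1, 1) prior of my choosing, the posteriors stay wide and the comparison stays ambiguous:

```python
# Beta-Binomial posteriors for the event rate in each arm, with the
# hypothetical counts from above and a uniform Beta(1, 1) prior.
import numpy as np
from scipy.stats import beta

treated_events, treated_n = 1, 50
placebo_events, placebo_n = 0, 50

# Conjugate update: posterior = Beta(1 + events, 1 + non-events)
post_treated = beta(1 + treated_events, 1 + treated_n - treated_events)
post_placebo = beta(1 + placebo_events, 1 + placebo_n - placebo_events)

print("treated 95% interval:", post_treated.ppf([0.025, 0.975]))
print("placebo 95% interval:", post_placebo.ppf([0.025, 0.975]))

# Probability that the treated event rate exceeds the placebo rate (Monte Carlo)
rng = np.random.default_rng(0)
p_worse = np.mean(post_treated.rvs(100_000, random_state=rng)
                  > post_placebo.rvs(100_000, random_state=rng))
print(f"P(treated rate > placebo rate) ~ {p_worse:.2f}")
# Roughly 0.75: suggestive, but far from decisive, and sensitive to the prior.
```

The intervals overlap heavily and the posterior comparison is driven as much by the prior as by the lone event, which is exactly the problem: the machinery runs, but it cannot tell us whether that one data point is noise or the whole story.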
It's not a fun problem.