We live with mathematical models all around us, whether we realize it or not. At work we often apply them as if they were deterministic formulae. One such case is the normal distribution, a common choice for modeling the aggregate of a large number of independent random variables.
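
A minimal sketch (in Python, with illustrative numbers of my own choosing) of why this default is so tempting: when an outcome really is the sum of many independent random contributions, the Central Limit Theorem pushes it toward a bell curve, even if the individual contributions look nothing like one.

```python
import numpy as np

# Illustrative only: 10,000 "outcomes", each the sum of 50 independent
# uniform contributions. None of these numbers come from real work data.
rng = np.random.default_rng(seed=42)
samples = rng.uniform(0, 1, size=(10_000, 50)).sum(axis=1)

print(f"mean   ~ {samples.mean():.2f} (expected 25.00)")
print(f"stddev ~ {samples.std():.2f} (expected {np.sqrt(50 / 12):.2f})")
# A histogram of `samples` would look close to a normal curve, even though
# each individual contribution is uniform, not normal.
```

The trouble starts when we reuse this picture where its assumptions do not hold.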

Models based on the normal distribution are treated almost as universal truth, and we apply them even when the variables are not really independent or random, or when the sample size is clearly small: complexities of work products, productivity figures, appraisal ratings, all taken at the micro level.

Luckily, I see another normal distribution governing all of this: the random errors themselves also follow a normal distribution. So even when we apply such models erroneously, as long as there is no obvious bias, the errors cluster around zero, and the chances of going grossly wrong in either direction are far smaller than the chances of somehow muddling through. We then use our will, effort and a host of other factors beyond the scope of the original model to get the results established, seemingly validating the model's behavior and passing it on as knowledge.
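
A hedged illustration of that second, forgiving normal distribution, with hypothetical error sizes and task counts: if individual estimation errors are unbiased, they largely cancel in aggregate, so the overall result looks fine even when every single estimate is off. Introduce a systematic bias and the luck runs out.

```python
import numpy as np

# Hypothetical scenario: 100 tasks of 10 effort units each, estimated with
# random, unbiased errors. The totals come out close despite noisy estimates.
rng = np.random.default_rng(seed=7)

true_effort = np.full(100, 10.0)
unbiased_errors = rng.normal(loc=0.0, scale=3.0, size=100)
estimates = true_effort + unbiased_errors

print(f"true total      : {true_effort.sum():.1f}")
print(f"estimated total : {estimates.sum():.1f}")  # usually within a few percent

# With a systematic bias (mean error of +2 per task) the errors no longer
# cancel, and the aggregate consistently overshoots.
biased = true_effort + rng.normal(loc=2.0, scale=3.0, size=100)
print(f"biased total    : {biased.sum():.1f}")
```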