Sorry, that's a load of crap. You sounded like you had some knowledge of statistics and modeling, but maybe your expertise is just on the coding side. If you have enough actual data, you fit your model to it, and the result falls outside your desired interval, you have reason to believe the model is not an accurate representation. You don't know this for a 100% fact, because there is always some chance the model is right anyway. If I generate 1 million numbers from 0 to 1 and all 1 million come out equal to 1, can you state with 100% certainty that the distribution isn't normal? Nope.
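To put a toy number on that point (this is my own illustration, assuming a standard normal and an arbitrary small tolerance of 1e-6 around the value 1): the probability of such a sample is astronomically small, but it is strictly positive, which is exactly why you get "reason to believe" rather than certainty.

```python
import math

def normal_interval_prob(lo, hi, mu=0.0, sigma=1.0):
    """P(lo <= X <= hi) for X ~ Normal(mu, sigma), via the error function."""
    def z(x):
        return (x - mu) / (sigma * math.sqrt(2))
    return 0.5 * (math.erf(z(hi)) - math.erf(z(lo)))

# Probability that ONE standard-normal draw lands within 1e-6 of 1.0.
eps = 1e-6
p_one = normal_interval_prob(1 - eps, 1 + eps)

# Log-probability that all 1,000,000 independent draws do so.
# Working in log space because the raw product underflows to 0.0.
log_p_all = 1_000_000 * math.log(p_one)

print(p_one)      # tiny, but strictly greater than zero
print(log_p_all)  # astronomically negative, yet still finite
```

Since the log-probability is finite, the event is possible under a normal distribution; you can only say it is overwhelmingly unlikely, not impossible.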
That's why I said I have reason to believe. The data produced, even allowing a lenient margin for mechanics beyond what the other specs are showing, was not close to simc's initial expected values. I don't need to see the source code to say that. Now, yes, if I were arguing that it was flat-out wrong, I would have to be able to point to an error in the source code, or my argument would be meaningless.
You do have a good point about using multiple tools. However, if I had multiple tools and reason to believe one of them wasn't accurately reflecting real data in a specific instance, I certainly wouldn't recommend that tool for that specific instance.