n vs. n-1
In calculating standard deviations, and in particular RSDs of data sets, when is it proper to use n rather than n-1? I seem to remember being taught that "n" was used for small populations; I believe the number given was 20 sample points. At the time, this was explained in terms of 'degrees of freedom', which I now believe may have been explained to me incorrectly. For reference's sake, the RSD being measured is of a known and finite set of measurements (area counts from chromatographic injections). Any help here or via private email will be appreciated. Thanks, Rich
READERS RESPOND: Re: n vs. n-1 You will notice that as "n" increases, the difference between using n or n-1 becomes almost negligible. Many people teach that after about n=25 or n=30, you can use n instead of n-1 without a problem. For the t-statistic, n-1 is called the "degrees of freedom." Unless you are a statistics major, don't worry about why.
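A quick way to see how small the n vs. n-1 difference gets is to compute both on the same sample at a few sizes. This is a minimal sketch using Python's standard library, where `statistics.pstdev` divides by n and `statistics.stdev` divides by n-1 (the data here are simulated, not from the original chromatography question):

```python
import random
import statistics

# Simulated measurements standing in for, e.g., chromatographic area counts
random.seed(0)
data = [random.gauss(100, 15) for _ in range(1000)]

for n in (5, 25, 100):
    sample = data[:n]
    s_n = statistics.pstdev(sample)   # divides the sum of squares by n
    s_n1 = statistics.stdev(sample)   # divides the sum of squares by n-1
    # The two estimates differ by a fixed factor sqrt(n / (n-1)),
    # which shrinks toward 1 as n grows.
    print(f"n={n:4d}  /n: {s_n:.3f}  /(n-1): {s_n1:.3f}  ratio: {s_n1 / s_n:.4f}")
```

At n=25 the ratio sqrt(25/24) is about 1.02, i.e. roughly a 2% difference, which is why the n=25 to n=30 rule of thumb circulates.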
Re: n vs. n-1 If the mean happens to be known, then dividing the sum of squares about this known mean by n yields an unbiased estimate of sigma^2. Thus, you have several choices, each one having some desirable property.
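The point above can be checked by simulation: averaging squared deviations from the *known* mean and dividing by n is unbiased for sigma^2, while dividing squared deviations from the *sample* mean by n underestimates it by a factor of (n-1)/n. A sketch, assuming normally distributed data with mu=0 and sigma=1:

```python
import random

random.seed(1)
mu, sigma = 0.0, 1.0
trials, n = 20000, 5

avg_known_mean = 0.0   # sum of squares about the known mean mu, divided by n
avg_sample_mean = 0.0  # sum of squares about the sample mean, divided by n
for _ in range(trials):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(x) / n
    avg_known_mean += sum((v - mu) ** 2 for v in x) / n
    avg_sample_mean += sum((v - xbar) ** 2 for v in x) / n

avg_known_mean /= trials
avg_sample_mean /= trials
print(avg_known_mean)   # close to sigma^2 = 1.0 (unbiased)
print(avg_sample_mean)  # biased low, close to (n-1)/n = 0.8
```

Dividing the second estimator by n-1 instead of n is exactly what removes that (n-1)/n bias, which is the usual motivation for the sample standard deviation formula.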