In the literal meaning of the terms, a parametric statistical test is one that makes assumptions about the parameters (defining properties) of the population distribution(s) from which one’s data are drawn, while a non-parametric test is one that makes no such assumptions. In this strict sense, “non-parametric” is essentially a null category, since virtually all statistical tests assume one thing or another about the properties of the source population(s).

For practical purposes, you can think of “parametric” as referring to tests, such as t-tests and the analysis of variance, that assume the underlying source population(s) to be normally distributed; they generally also assume that one’s measures derive from an equal-interval scale. And you can think of “non-parametric” as referring to tests that do not make these particular assumptions. Examples of non-parametric tests include

- the various forms of chi-square tests (Chapter 8),
- the Fisher Exact Probability test (Subchapter 8a),
- the Mann-Whitney Test (Subchapter 11a),
- the Wilcoxon Signed-Rank Test (Subchapter 12a),
- the Kruskal-Wallis Test (Subchapter 14a),
- and the Friedman Test (Subchapter 15a).

Non-parametric tests are sometimes spoken of as “distribution-free” tests, although this too is something of a misnomer, since such tests still make some assumptions about the source distributions; the Mann-Whitney test, for example, is typically taken to assume that the two distributions have the same general shape.
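To make the “distribution-free” idea concrete, here is a minimal sketch (not the textbook’s own computation) of the Mann-Whitney U statistic by direct pair counting. Because U depends only on the ordering of the observations, any strictly increasing transformation of the data leaves it unchanged, which is one sense in which such rank-based tests are loosely called distribution-free.

```python
import math

def mann_whitney_u(xs, ys):
    """Mann-Whitney U by brute-force pair counting.

    Fine for small samples; statistical software normally uses rank sums
    instead. Ties between the two samples count one half.
    """
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

a = [1.1, 2.3, 3.8, 4.0]
b = [0.9, 1.7, 2.5]

print(mann_whitney_u(a, b))  # → 9.0

# U is unchanged under a strictly increasing transformation of the data,
# here exp(), because only the relative ordering matters.
print(mann_whitney_u([math.exp(x) for x in a],
                     [math.exp(y) for y in b]))  # → 9.0
```

A t-test, by contrast, would give different results on the raw and transformed data, since its statistic depends on the actual means and variances, not just the ranks.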

Source = http://vassarstats.net/textbook/parametric.html