
Guide: Non-parametric Tests (Mann-Whitney, Kruskal-Wallis)


Daniel Croft

Daniel Croft is an experienced continuous improvement manager with a Lean Six Sigma Black Belt and a Bachelor's degree in Business Management. With more than ten years of experience applying his skills across various industries, Daniel specializes in optimizing processes and improving efficiency. His approach combines practical experience with a deep understanding of business fundamentals to drive meaningful change.

Non-parametric tests represent a significant aspect of statistical analysis, providing a flexible alternative to traditional parametric methods. These tests are particularly useful when dealing with data that does not conform to standard distribution assumptions, like normality, or when handling small sample sizes.

They are adaptable to various data types, including ordinal and nominal, and are less affected by violations of typical parametric assumptions such as homogeneity of variance. This text explores the intricacies of non-parametric tests, delving into examples like the Mann-Whitney U Test and Kruskal-Wallis H Test, highlighting their applications, advantages, and procedures.


What Are Non-Parametric Tests?

Non-parametric tests are a class of statistical tests that offer a more flexible approach to data analysis compared to parametric tests. Let’s break down what makes them different and why they might be a go-to choice in certain scenarios.

No Assumption of Distribution

In statistical testing, it’s common to assume that the data follows a specific distribution, like a normal distribution. These assumptions can be problematic if your data doesn’t fit the mold. Non-parametric tests sidestep this issue by not requiring you to assume any specific distribution for your data.

For example, imagine you’re analyzing customer wait times at a service center. If the data doesn’t form a nice bell-shaped curve (normal distribution), a parametric test could give misleading results. Non-parametric tests, however, are more robust in this situation.

Small Sample Size

Parametric tests generally require a large sample size to produce reliable results. This is because a bigger sample size helps the data to approximate a specific distribution better. Non-parametric tests, on the other hand, are less reliant on sample size, making them useful for “small data” situations where gathering a large sample is impractical or costly.

Imagine you’re testing a new procedure in a small clinic and can only collect data from 10 patients. A non-parametric test would be more appropriate here because it doesn’t require a large sample size for validity.

Not sure whether your sample size is too small? Check with our Sample Size Calculator.

Types of Data

Parametric tests usually require interval or ratio data, where the distances between data points are uniform and meaningful. Non-parametric tests are more flexible, allowing for ordinal or nominal data.

  • Ordinal Data: Data that can be ordered but the intervals between data points are not uniform (e.g., customer satisfaction ratings like “poor,” “average,” “good”).

  • Nominal Data: Data that can be categorized but not ordered (e.g., types of fruits, gender).

Violation of Assumptions

In real-world applications, data often violate the assumptions required for parametric tests, such as homogeneity of variance (equality of variances across groups) or independence. Non-parametric tests are less sensitive to these violations, making them more robust and widely applicable.

Non-parametric tests are valuable tools for data analysis, particularly when your data is not well-suited for parametric tests. They are more flexible with the type of data, distribution, and sample size, making them extremely useful in practical, real-world scenarios where data is often messy and imperfect.

Mann-Whitney U Test

The Mann-Whitney U Test, often referred to as the Wilcoxon Rank-Sum Test, is a non-parametric statistical test that provides a robust way to compare two sets of data. Below, we delve into what the test is, when to use it, and why it’s beneficial in certain situations.

The primary goal of the Mann-Whitney U Test is to determine if two independent samples come from populations with the same distribution. Unlike parametric tests like the t-test, the Mann-Whitney U Test doesn’t assume that the data follows any specific distribution.

The test works by ranking all the values from both samples together from smallest to largest. It then calculates a U statistic based on these ranks, and this U statistic is used to test the null hypothesis that the two samples come from the same distribution.

For example, suppose you are a manager at a manufacturing plant, and you want to know if two assembly lines produce products with different levels of defects. The Mann-Whitney U Test can help you determine if there’s a significant difference in the quality levels between the two lines.

When to Use?

Here are the key scenarios where the Mann-Whitney U Test is appropriate:

Two Independent Samples

The test is designed for comparing two separate groups that do not affect each other. For instance, you might compare customer satisfaction ratings between two different branches of a retail store.

At Least Ordinal Data

The data should be at least ordinal, meaning it can be ordered in a meaningful way. For example, exam scores or customer ratings are ordinal; you can definitively say which is higher, lower, or the same, even if the intervals between the scores are not consistent.

Violation of Parametric Assumptions

If your data violates the assumptions required for parametric tests like the t-test—such as non-normal distribution or unequal variances—the Mann-Whitney U Test serves as an excellent alternative.

Why Use Mann-Whitney U Test?

The Mann-Whitney U Test is particularly useful for small sample sizes and when you suspect that your data may not meet the strict assumptions of parametric tests. It allows for greater flexibility in the type of data you can analyze while still providing a robust way to determine differences between groups.

The Mann-Whitney U Test is a valuable tool when you’re dealing with two independent sets of data and you want to know if they’re significantly different, particularly when the data is not normally distributed or when you have a small sample size. It’s a more versatile test that can handle a wider variety of data types, making it a handy tool for your statistical toolbox.

Steps to Perform the Mann-Whitney U Test

Step 1: Rank the Data

First, suppose we are comparing lead times from two suppliers. We combine the lead time data from Supplier A and Supplier B and rank all the values together from smallest to largest.

For Supplier A:

[12,15,14,10]

For Supplier B:

[9,11,13,14]

The combined and sorted data would be:

[9,10,11,12,13,14,14,15]

The ranks are assigned as follows:

  • 9 gets rank 1
  • 10 gets rank 2
  • 11 gets rank 3
  • 12 gets rank 4
  • 13 gets rank 5
  • the two values of 14 each get rank 6.5 (the average of ranks 6 and 7, because of the tie)
  • 15 gets rank 8
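
If you want to verify this ranking (including how the tied 14s are handled), SciPy's rankdata function assigns average ranks to ties. Below is a minimal sketch in Python; the variable names are just for illustration.

```python
import numpy as np
from scipy.stats import rankdata

supplier_a = [12, 15, 14, 10]  # lead times for Supplier A
supplier_b = [9, 11, 13, 14]   # lead times for Supplier B

combined = np.array(supplier_a + supplier_b)

# rankdata assigns the average rank to ties, so both 14s get rank 6.5
ranks = rankdata(combined)
print(list(zip(combined, ranks)))

# Rank sums for each supplier (the first four values belong to Supplier A)
print("Rank sum, Supplier A:", ranks[:4].sum())  # 20.5
print("Rank sum, Supplier B:", ranks[4:].sum())  # 15.5
```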

Step 2: Calculate U

The Mann-Whitney U statistic is calculated for each group using these ranks. The U statistic is essentially a measure of how the ranks in one group compare to the ranks in the other group. In our example, the U statistic is 10.5.
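
A common way to compute U by hand is from the rank sums: for Supplier A, U_A = R_A - n_A(n_A + 1)/2, and the two U values always add up to n_A x n_B. A small sketch continuing from the rank sums above:

```python
# Continuing from the rank sums above (n_a = n_b = 4 observations per supplier)
n_a, n_b = 4, 4
r_a = 20.5                         # rank sum for Supplier A

u_a = r_a - n_a * (n_a + 1) / 2    # 20.5 - 10 = 10.5
u_b = n_a * n_b - u_a              # 16 - 10.5 = 5.5

print("U (Supplier A):", u_a)
print("U (Supplier B):", u_b)
```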

Step 3: Determine Significance

Finally, we compare the calculated U statistic against a critical value from the U distribution. If the U statistic is less than the critical value, or if the p-value is less than a chosen significance level (usually 0.05), we reject the null hypothesis and conclude that there’s a significant difference between the groups.

In our example, the p-value is approximately 0.561, which is greater than 0.05. Therefore, we fail to reject the null hypothesis, suggesting that there’s no statistically significant difference in lead time between Supplier A and Supplier B.
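
If you prefer to let software handle the calculation, SciPy's mannwhitneyu function performs the whole test in one call. With the lead-time data above it should reproduce U = 10.5 and a two-sided p-value of roughly 0.56; the exact p-value depends on how your SciPy version handles the tie.

```python
from scipy.stats import mannwhitneyu

supplier_a = [12, 15, 14, 10]
supplier_b = [9, 11, 13, 14]

# Two-sided Mann-Whitney U Test; with ties present SciPy falls back to a
# normal approximation for the p-value
u_stat, p_value = mannwhitneyu(supplier_a, supplier_b, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")  # U = 10.5, p ≈ 0.561
```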

Figure: Boxplot of lead times for Supplier A and Supplier B

The boxplot above provides a visual representation of the lead times for Supplier A and Supplier B. The teal box represents Supplier A, with the orange line inside the box marking its median; Supplier B is shown in the same way.
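
A boxplot like this can be produced with matplotlib. Here is a minimal sketch; the labels and styling are assumptions, not the exact figure used here.

```python
import matplotlib.pyplot as plt

supplier_a = [12, 15, 14, 10]
supplier_b = [9, 11, 13, 14]

# Side-by-side boxplots of the two suppliers' lead times
fig, ax = plt.subplots()
ax.boxplot([supplier_a, supplier_b], labels=["Supplier A", "Supplier B"])
ax.set_ylabel("Lead time")
ax.set_title("Lead times: Supplier A vs Supplier B")
plt.show()
```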

The Mann-Whitney U Test helps us understand whether two independent samples have significant differences in their distributions. In this example, we found that there’s no significant difference in lead time between the two suppliers.

Kruskal-Wallis H Test

The Kruskal-Wallis H Test is a non-parametric statistical method that serves as an extension of the Mann-Whitney U Test when you have more than two independent samples to compare. Let’s dive into its features, uses, and advantages.

The Kruskal-Wallis H Test is essentially the non-parametric counterpart to the one-way ANOVA. It is designed to test the null hypothesis that multiple independent samples come from the same distribution. Unlike the one-way ANOVA, it doesn’t require the data to be normally distributed, nor does it assume homogeneity of variances.

In simpler terms, the Kruskal-Wallis H Test helps you determine if three or more different groups are actually different in some statistical sense. For example, if you manage multiple warehouses and you want to know if the efficiency rates differ significantly among them, this test can provide that insight.

When to Use?

Here are the primary scenarios in which the Kruskal-Wallis H Test is useful:

Three or More Independent Samples

The test is specifically designed for scenarios where you have three or more independent groups to compare. For instance, you could use it to compare customer satisfaction ratings across three or more stores.

At Least Ordinal Data

The data you’re working with should be at least ordinal in nature. This means that the data can be ranked in a meaningful way, even if the distance between data points isn’t uniform.

Violation of Parametric Assumptions

If your data doesn’t meet the assumptions required for a parametric test like the one-way ANOVA, such as normal distribution or homogeneity of variances, then the Kruskal-Wallis H Test is a robust alternative.

Why Use the Kruskal-Wallis H Test?

  1. Flexibility: It accommodates a wider variety of data types and distributions.
  2. Robustness: It’s less sensitive to outliers and skewed data.
  3. Simplicity: Despite its robustness, the test is relatively straightforward to perform and interpret.

Steps to Perform the Kruskal-Wallis H Test

Step 1: Rank the Data

First, suppose we are comparing efficiency rates from three production lines. We combine the efficiency data from Line A, Line B, and Line C and rank all the values together from smallest to largest.

For Line A:

[80%,82%,78%]

For Line B:

[75%,77%,76%]

For Line C:

[88%,90%,87%]

The combined and sorted data would be:

[75%,76%,77%,78%,80%,82%,87%,88%,90%]

The ranks are assigned as follows:

  • 75% gets rank 1
  • 76% gets rank 2
  • 77% gets rank 3
  • 78% gets rank 4
    … and so on.
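
As a quick check, SciPy's rankdata reproduces this ranking across all nine observations (there are no ties here, so every value gets a whole-number rank). A minimal sketch:

```python
import numpy as np
from scipy.stats import rankdata

line_a = [80, 82, 78]  # efficiency (%) for Line A
line_b = [75, 77, 76]  # efficiency (%) for Line B
line_c = [88, 90, 87]  # efficiency (%) for Line C

combined = np.array(line_a + line_b + line_c)
ranks = rankdata(combined)  # 75% -> 1, 76% -> 2, ..., 90% -> 9

# Rank sums per line (three observations each, in the order combined above)
print("Rank sum, Line A:", ranks[0:3].sum())  # 15.0
print("Rank sum, Line B:", ranks[3:6].sum())  # 6.0
print("Rank sum, Line C:", ranks[6:9].sum())  # 24.0
```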

Step 2: Calculate H

Using these ranks, we calculate the Kruskal-Wallis H statistic. The H statistic gives us an idea of whether the distribution of efficiencies across the three lines is similar or not. In our example, the H statistic is approximately 7.2.
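
For those who want to see the arithmetic, H can be computed directly from the rank sums using the standard formula H = 12 / (N(N + 1)) x sum(R_i^2 / n_i) - 3(N + 1). A small sketch continuing from the rank sums above:

```python
# Rank sums and group sizes from Step 1 (N = 9 observations in total)
rank_sums = {"Line A": 15.0, "Line B": 6.0, "Line C": 24.0}
n_per_group = {"Line A": 3, "Line B": 3, "Line C": 3}
N = sum(n_per_group.values())

# H = 12 / (N(N + 1)) * sum(R_i^2 / n_i) - 3(N + 1)
h = 12 / (N * (N + 1)) * sum(
    rank_sums[g] ** 2 / n_per_group[g] for g in rank_sums
) - 3 * (N + 1)

print(f"H = {h:.1f}")  # H = 7.2
```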

Step 3: Determine Significance

We then compare this H statistic against a chi-square distribution to determine if the difference between the lines is statistically significant. If the p-value is less than a chosen significance level (commonly 0.05), then we can reject the null hypothesis that the samples come from the same distribution.

In our example, the p-value is approximately 0.027, which is less than 0.05. Therefore, we reject the null hypothesis, suggesting that there is a statistically significant difference in efficiency between the three production lines.
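
Again, SciPy can run the whole test in one call. With the efficiency data above, the kruskal function should return H = 7.2 and p ≈ 0.027:

```python
from scipy.stats import kruskal

line_a = [80, 82, 78]
line_b = [75, 77, 76]
line_c = [88, 90, 87]

# Kruskal-Wallis H Test across the three production lines
h_stat, p_value = kruskal(line_a, line_b, line_c)
print(f"H = {h_stat:.1f}, p = {p_value:.3f}")  # H = 7.2, p ≈ 0.027
```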

Figure: Comparison of efficiency rates for Line A, Line B, and Line C

The Kruskal-Wallis H Test allows us to compare the efficiencies of three different production lines; in this example, it indicates a statistically significant difference between them. This insight can be valuable for managers looking to optimize production processes.

Conclusion

In conclusion, non-parametric tests like the Mann-Whitney U Test and Kruskal-Wallis H Test offer robust and versatile tools in statistical analysis, especially in scenarios where parametric test assumptions are not met.

These tests excel in handling smaller sample sizes, different data types, and instances of assumption violations, making them invaluable for real-world data analysis where imperfections are common. By understanding and applying these tests, analysts can derive more accurate and reliable insights from a broader range of data, enhancing decision-making processes in various fields.


Frequently Asked Questions

Q: What are non-parametric tests?

A: Non-parametric tests are statistical methods that do not require the data to follow a specific distribution. They are useful when you have small sample sizes, ordinal or nominal data, or when the assumptions of parametric tests are not met.

Q: When should I use the Mann-Whitney U Test?

A: Use the Mann-Whitney U Test when you have two independent samples, the data is at least ordinal, and the assumptions for parametric tests like the t-test are violated. It’s a robust alternative for comparing two groups.

Q: Can the Mann-Whitney U Test be used for paired samples?

A: No, the Mann-Whitney U Test is designed for two independent samples. For paired or related samples, you might want to use the Wilcoxon Signed-Rank Test instead.

Q: What is the Kruskal-Wallis H Test used for?

A: The Kruskal-Wallis H Test is used to compare the distributions of three or more independent samples. It’s the non-parametric alternative to the one-way ANOVA.

Q: Can I use the Kruskal-Wallis H Test to compare only two groups?

A: Technically, you could, but it’s generally more efficient to use the Mann-Whitney U Test for comparing two groups. Kruskal-Wallis is more applicable when you have three or more groups to compare.
