t-test Calculator

In this article, we cover: when to use a t-test (and which one to choose); how to do a t-test; finding the p-value and critical values from a t-test; how to use our t-test calculator; the one-sample, two-sample, and paired t-tests; and the t-test vs. z-test.

Welcome to our t-test calculator! Here you can not only easily perform one-sample t-tests, but also two-sample t-tests, as well as paired t-tests.

Do you prefer to find the p-value from a t-test, or would you rather find the t-test critical values? Well, this t-test calculator can do both! 😊

What does a t-test tell you? Take a look at the text below, where we explain what actually gets tested when various types of t-tests are performed. We also explain when to use t-tests (in particular, whether to use the z-test vs. the t-test) and what assumptions your data should satisfy for the results of a t-test to be valid. If you've ever wanted to know how to do a t-test by hand, we provide the necessary t-test formulas, as well as tell you how to determine the number of degrees of freedom in a t-test.

A t-test is one of the most popular statistical tests for location, i.e., it deals with the mean value(s) of the population(s).

There are different types of t-tests that you can perform:

  • A one-sample t-test;
  • A two-sample t-test; and
  • A paired t-test.

In the next section, we explain when to use which. Remember that a t-test can only be used for one or two groups. If you need to compare three (or more) means, use the analysis of variance (ANOVA) method.

The t-test is a parametric test, meaning that your data has to fulfill some assumptions:

  • The data points are independent; AND
  • The data, at least approximately, follow a normal distribution.

If your sample doesn't fit these assumptions, you can resort to nonparametric alternatives. Visit our Mann–Whitney U test calculator (a.k.a. the Wilcoxon rank-sum test) to learn more. Other possibilities include the Wilcoxon signed-rank test or the sign test.

Your choice of t-test depends on whether you are studying one group or two groups:

One-sample t-test

Choose the one-sample t-test to check if the mean of a population is equal to some pre-set hypothesized value.

For example:

  • The average volume of a drink sold in 0.33 l cans: is it really equal to 330 ml?
  • The average weight of people from a specific city: is it different from the national average?

Two-sample t-test

Choose the two-sample t-test to check if the difference between the means of two populations is equal to some pre-determined value when the two samples have been chosen independently of each other.

In particular, you can use this test to check whether the two groups are different from one another.

For example:

  • The average difference in weight gain in two groups of people: one group was on a high-carb diet and the other on a high-fat diet.
  • The average difference in the results of a math test from students at two different universities.

This test is sometimes referred to as an independent samples t-test, or an unpaired samples t-test.

Paired t-test

A paired t-test is used to investigate the change in the mean of a population before and after some experimental intervention, based on a paired sample, i.e., when each subject has been measured twice: before and after treatment.

In particular, you can use this test to check whether, on average, the treatment has had any effect on the population.

For example:

  • The change in student test performance before and after taking a course.
  • The change in blood pressure in patients before and after administering some drug.

How to do a t-test?

So, you've decided which t-test to perform. The next steps will show you how to calculate the p-value from a t-test or its critical values, and then which decision to make about the null hypothesis.

Decide on the alternative hypothesis:

Use a two-tailed t-test if you only care whether the population's mean (or, in the case of two populations, the difference between the populations' means) agrees or disagrees with the pre-set value.

Use a one-tailed t-test if you want to test whether this mean (or difference in means) is greater/less than the pre-set value.

Compute your T-score value:

Formulas for the test statistic in t-tests include the sample size, as well as its mean and standard deviation. The exact formula depends on the t-test type; check the sections dedicated to each particular test for more details.

Determine the degrees of freedom for the t-test:

The degrees of freedom are the number of observations in a sample that are free to vary as we estimate statistical parameters. In the simplest case, the number of degrees of freedom equals your sample size minus the number of parameters you need to estimate. Again, the exact formula depends on the t-test you want to perform; check the sections below for details.

The degrees of freedom are essential, as they determine the distribution followed by your T-score (under the null hypothesis). If there are d degrees of freedom, then the distribution of the test statistic is the t-Student distribution with d degrees of freedom. This distribution has a shape similar to N(0,1) (bell-shaped and symmetric) but has heavier tails. If the number of degrees of freedom is large (>30), which generically happens for large samples, the t-Student distribution is practically indistinguishable from N(0,1).

💡 The t-Student distribution owes its name to William Sealy Gosset, who, in 1908, published his paper on the t-test under the pseudonym "Student". Gosset worked at the famous Guinness Brewery in Dublin, Ireland, and devised the t-test as an economical way to monitor the quality of beer. Cheers! 🍺🍺🍺

Recall that the p-value is the probability (calculated under the assumption that the null hypothesis is true) that the test statistic will produce values at least as extreme as the T-score produced for your sample. As probabilities correspond to areas under the density function, the p-value from a t-test can be pictured as the area under the t-distribution's density curve in the tail(s) determined by your alternative hypothesis.

The following formulae show how to calculate the p-value from a t-test. By cdf_{t,d} we denote the cumulative distribution function of the t-Student distribution with d degrees of freedom:

  • p-value from a left-tailed t-test: p-value = cdf_{t,d}(t_score);
  • p-value from a right-tailed t-test: p-value = 1 − cdf_{t,d}(t_score); and
  • p-value from a two-tailed t-test: p-value = 2 × cdf_{t,d}(−|t_score|), or, equivalently, p-value = 2 − 2 × cdf_{t,d}(|t_score|).

However, the cdf of the t-distribution is given by a somewhat complicated formula. To find the p-value by hand, you would need to resort to statistical tables, where approximate cdf values are collected, or to specialized statistical software. Fortunately, our t-test calculator determines the p-value from a t-test for you in the blink of an eye!
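If you'd like to reproduce these formulas in code, here is a minimal Python sketch using SciPy's t-distribution; the T-score and degrees of freedom below are made-up values for illustration, not output from the calculator:

```python
# p-value from a t-test, given a T-score and the degrees of freedom d.
from scipy.stats import t

t_score, d = 2.1, 15  # hypothetical values for illustration

p_left = t.cdf(t_score, d)            # left-tailed: cdf(t_score)
p_right = t.sf(t_score, d)            # right-tailed: 1 - cdf(t_score)
p_two = 2 * t.cdf(-abs(t_score), d)   # two-tailed: 2 * cdf(-|t_score|)

print(f"left: {p_left:.4f}, right: {p_right:.4f}, two-tailed: {p_two:.4f}")
```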

Recall that in the critical values approach to hypothesis testing, you need to set a significance level, α, before computing the critical values, which in turn give rise to critical regions (a.k.a. rejection regions).

Formulas for the critical values employ the quantile function of the t-distribution, i.e., the inverse of the cdf:

  • Critical value for a left-tailed t-test: cdf_{t,d}⁻¹(α); critical region: (−∞, cdf_{t,d}⁻¹(α)]
  • Critical value for a right-tailed t-test: cdf_{t,d}⁻¹(1 − α); critical region: [cdf_{t,d}⁻¹(1 − α), ∞)
  • Critical values for a two-tailed t-test: ±cdf_{t,d}⁻¹(1 − α/2); critical region: (−∞, −cdf_{t,d}⁻¹(1 − α/2)] ∪ [cdf_{t,d}⁻¹(1 − α/2), ∞)

To decide the fate of the null hypothesis, just check if your T-score lies within the critical region:

  • If your T-score belongs to the critical region, reject the null hypothesis and accept the alternative hypothesis.
  • If your T-score is outside the critical region, then you don't have enough evidence to reject the null hypothesis.
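As a quick sketch of the critical value approach in Python (again with SciPy's t-distribution and made-up numbers; ppf is the quantile function, i.e., the inverse cdf):

```python
# Critical values for a given significance level alpha and degrees of freedom d.
from scipy.stats import t

alpha, d = 0.05, 15
t_score = 2.1  # hypothetical T-score

crit_left = t.ppf(alpha, d)         # left-tailed critical value
crit_right = t.ppf(1 - alpha, d)    # right-tailed critical value
crit_two = t.ppf(1 - alpha / 2, d)  # two-tailed critical values are +/- this

# Two-tailed decision rule: reject H0 if the T-score falls in the critical region.
if abs(t_score) >= crit_two:
    print("T-score in the critical region: reject the null hypothesis")
else:
    print("Not enough evidence to reject the null hypothesis")
```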

How to use our t-test calculator

Choose the type of t-test you wish to perform:

  • A one-sample t-test (to test the mean of a single group against a hypothesized mean);
  • A two-sample t-test (to compare the means for two groups); or
  • A paired t-test (to check how the mean from the same group changes after some intervention).

Decide on the alternative hypothesis:

  • Two-tailed;
  • Left-tailed; or
  • Right-tailed.

This t-test calculator allows you to use either the p-value approach or the critical regions approach to hypothesis testing!

Enter your T-score and the number of degrees of freedom. If you don't know them, provide some data about your sample(s): sample size, mean, and standard deviation, and our t-test calculator will compute the T-score and degrees of freedom for you.

Once all the parameters are present, the p-value, or critical region, will immediately appear underneath the t-test calculator, along with an interpretation!

One-sample t-test

The null hypothesis is that the population mean is equal to some value μ₀.

The alternative hypothesis is that the population mean is:

  • different from μ₀;
  • smaller than μ₀; or
  • greater than μ₀.

One-sample t-test formula:

t = (x̄ − μ₀) / s × √n

where:

  • μ₀ — Mean postulated in the null hypothesis;
  • n — Sample size;
  • x̄ — Sample mean; and
  • s — Sample standard deviation.

Number of degrees of freedom in t-test (one-sample) = n − 1.
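As a sketch, this is how the one-sample T-score and p-value could be computed in Python from summary statistics; the sample figures below are invented for illustration:

```python
# One-sample t-test from summary statistics, following the formula above.
import math
from scipy.stats import t

mu_0 = 330.0                   # mean postulated in the null hypothesis (e.g., ml)
n, x_bar, s = 36, 331.2, 3.5   # hypothetical sample size, mean, standard deviation

t_score = (x_bar - mu_0) / s * math.sqrt(n)
d = n - 1                             # degrees of freedom
p_two = 2 * t.cdf(-abs(t_score), d)   # two-tailed p-value

print(f"T-score = {t_score:.3f}, df = {d}, p-value = {p_two:.4f}")
```

If you have the raw observations rather than summary statistics, scipy.stats.ttest_1samp(sample, popmean=mu_0) gives the same two-tailed result.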

Two-sample t-test

The null hypothesis is that the actual difference between these groups' means, μ₁ and μ₂, is equal to some pre-set value, Δ.

The alternative hypothesis is that the difference μ₁ − μ₂ is:

  • Different from Δ;
  • Smaller than Δ; or
  • Greater than Δ.

In particular, if this pre-determined difference is zero (Δ = 0):

The null hypothesis is that the population means are equal.

The alternative hypothesis is that:

  • μ₁ and μ₂ are different from one another;
  • μ₁ is smaller than μ₂; or
  • μ₁ is greater than μ₂.

Formally, to perform a t-test, we should additionally assume that the variances of the two populations are equal (this assumption is called the homogeneity of variance).

There is a version of the t-test that can be applied without the assumption of homogeneity of variance: it is called Welch's t-test. For your convenience, we describe both versions.

Two-sample t-test if variances are equal

Use this test if you know that the two populations' variances are the same (or very similar).

Two-sample t-test formula (with equal variances):

t = (x̄₁ − x̄₂ − Δ) / (s_p × √(1/n₁ + 1/n₂))

where s_p is the so-called pooled standard deviation, which we compute as:

s_p = √[((n₁ − 1)s₁² + (n₂ − 1)s₂²) / (n₁ + n₂ − 2)]

Here:

  • Δ — Mean difference postulated in the null hypothesis;
  • n₁ — First sample size;
  • x̄₁ — Mean for the first sample;
  • s₁ — Standard deviation in the first sample;
  • n₂ — Second sample size;
  • x̄₂ — Mean for the second sample; and
  • s₂ — Standard deviation in the second sample.

Number of degrees of freedom in t-test (two samples, equal variances) = n₁ + n₂ − 2.
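Here is a minimal Python sketch of the equal-variance (pooled) two-sample t-test; all sample figures are invented:

```python
# Two-sample t-test with pooled standard deviation (equal variances assumed).
import math
from scipy.stats import t

delta = 0.0                    # difference postulated in the null hypothesis
n1, x1_bar, s1 = 20, 5.2, 1.1  # hypothetical first sample
n2, x2_bar, s2 = 24, 4.6, 1.0  # hypothetical second sample

s_p = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
t_score = (x1_bar - x2_bar - delta) / (s_p * math.sqrt(1 / n1 + 1 / n2))
d = n1 + n2 - 2
p_two = 2 * t.cdf(-abs(t_score), d)

print(f"T-score = {t_score:.3f}, df = {d}, p-value = {p_two:.4f}")
```

For Δ = 0, scipy.stats.ttest_ind_from_stats(x1_bar, s1, n1, x2_bar, s2, n2) reproduces this pooled-variance result.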

Two-sample t-test if variances are unequal (Welch's t-test)

Use this test if the variances of your populations are different.

Two-sample Welch's t-test formula (if variances are unequal):

t = (x̄₁ − x̄₂ − Δ) / √(s₁²/n₁ + s₂²/n₂)

where:

  • s₁ — Standard deviation in the first sample; and
  • s₂ — Standard deviation in the second sample,

and the remaining symbols have the same meaning as above.

The number of degrees of freedom in a Welch's t-test (two-sample t-test with unequal variances) is hard to calculate exactly. We can approximate it with the help of the following Satterthwaite formula:

d ≈ (s₁²/n₁ + s₂²/n₂)² / [(s₁²/n₁)²/(n₁ − 1) + (s₂²/n₂)²/(n₂ − 1)]

Alternatively, you can take the smaller of n₁ − 1 and n₂ − 1 as a conservative estimate for the number of degrees of freedom.

🔎 The Satterthwaite formula for the degrees of freedom can be rewritten as a scaled weighted harmonic mean of the degrees of freedom of the respective samples, n₁ − 1 and n₂ − 1, with weights proportional to (s₁²/n₁)² and (s₂²/n₂)².
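A Python sketch of Welch's t-test with the Satterthwaite approximation; the sample figures are invented:

```python
# Welch's t-test with the Satterthwaite approximation for the degrees of freedom.
import math
from scipy.stats import t

delta = 0.0
n1, x1_bar, s1 = 20, 5.2, 1.1   # hypothetical first sample
n2, x2_bar, s2 = 24, 4.6, 2.3   # hypothetical second sample

g1, g2 = s1**2 / n1, s2**2 / n2
t_score = (x1_bar - x2_bar - delta) / math.sqrt(g1 + g2)

# Satterthwaite degrees of freedom (need not be an integer)
d = (g1 + g2) ** 2 / (g1**2 / (n1 - 1) + g2**2 / (n2 - 1))
p_two = 2 * t.cdf(-abs(t_score), d)

print(f"T-score = {t_score:.3f}, df = {d:.1f}, p-value = {p_two:.4f}")
```

With raw observations, scipy.stats.ttest_ind(x, y, equal_var=False) performs the same Welch's test for Δ = 0.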

Paired t-test

As we commonly perform a paired t-test when we have data about the same subjects measured twice (before and after some treatment), let us adopt the convention of referring to the samples as the pre-group and post-group.

The null hypothesis is that the true difference between the means of the pre- and post-populations is equal to some pre-set value, Δ.

The alternative hypothesis is that the actual difference between these means is:

  • Different from Δ;
  • Smaller than Δ; or
  • Greater than Δ.

Typically, this pre-determined difference is zero. We can then reformulate the hypotheses as follows:

The null hypothesis is that the pre- and post-means are the same, i.e., the treatment has no impact on the population.

The alternative hypothesis is that:

  • The pre- and post-means are different from one another (treatment has some effect);
  • The pre-mean is smaller than the post-mean (treatment increases the result); or
  • The pre-mean is greater than the post-mean (treatment decreases the result).

Paired t-test formula

In fact, a paired t-test is technically the same as a one-sample t-test! Let us see why. Let x₁, ..., xₙ be the pre observations and y₁, ..., yₙ the respective post observations; that is, xᵢ and yᵢ are the before and after measurements of the i-th subject.

For each subject, compute the difference, dᵢ := xᵢ − yᵢ. All that happens next is just a one-sample t-test performed on the sample of differences d₁, ..., dₙ. Take a look at the formula for the T-score:

t = (x̄ − Δ) / s × √n

where:

  • Δ — Mean difference postulated in the null hypothesis;
  • n — Size of the sample of differences, i.e., the number of pairs;
  • x̄ — Mean of the sample of differences; and
  • s — Standard deviation of the sample of differences.

Number of degrees of freedom in t-test (paired) = n − 1.
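A short Python sketch showing that the paired t-test is just a one-sample t-test on the differences; the before/after measurements below are invented:

```python
# Paired t-test as a one-sample t-test on the differences d_i = x_i - y_i.
import numpy as np
from scipy.stats import ttest_1samp, ttest_rel

before = np.array([140, 152, 138, 145, 160, 155])  # hypothetical pre-treatment values
after = np.array([135, 150, 132, 140, 150, 149])   # hypothetical post-treatment values

diffs = before - after
one_sample = ttest_1samp(diffs, popmean=0.0)  # Delta = 0
paired = ttest_rel(before, after)             # built-in paired t-test: same result

print(one_sample.statistic, one_sample.pvalue)
print(paired.statistic, paired.pvalue)
```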

t-test vs. Z-test

We use a Z-test when we want to test the population mean of a normally distributed dataset whose population variance is known. If the number of degrees of freedom is large, then the t-Student distribution is very close to N(0,1).

Hence, if there are many data points (at least 30), you may swap the t-test for a Z-test, and the results will be almost identical. However, for small samples with unknown variance, remember to use the t-test, because in such cases the t-Student distribution differs significantly from N(0,1)!
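You can check this numerically; in the sketch below, the two-tailed p-value from the t-distribution approaches the one from N(0,1) as the degrees of freedom grow:

```python
# t-distribution vs. N(0,1): two-tailed p-values for the same score.
from scipy.stats import norm, t

score = 1.96
for d in (5, 30, 200):
    print(f"t, df={d}: {2 * t.sf(score, d):.4f}")
print(f"z:         {2 * norm.sf(score):.4f}")
```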

🙋 Have you concluded you need to perform the z-test? Head straight to our z-test calculator!

What is a t-test?

A t-test is a widely used statistical test that analyzes the means of one or two groups of data. For instance, a t-test is performed on medical data to determine whether a new drug really helps.

What are the different types of t-tests?

The different types of t-tests are:

  • One-sample t-test;
  • Two-sample t-test; and
  • Paired t-test.

How to find the t-value in a one-sample t-test?

To find the t-value:

  • Subtract the null hypothesis mean from the sample mean value.
  • Divide the difference by the standard deviation of the sample.
  • Multiply the result by the square root of the sample size.
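For instance, with hypothetical values x̄ = 331, μ₀ = 330, s = 3, and n = 36, the t-value is (331 − 330) / 3 × √36 = 2.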


Two Population Calculator


When computing confidence intervals for two population means, we are interested in the difference between the population means ($\mu_1 - \mu_2$). A confidence interval is made up of two parts, the point estimate and the margin of error. The point estimate of the difference between two population means is simply the difference between two sample means ($\bar{x}_1 - \bar{x}_2$). The standard error of $\bar{x}_1 - \bar{x}_2$, which is used in computing the margin of error, is given by the formula below:

$\sigma_{\bar{x}_1 - \bar{x}_2} = \sqrt{\dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}}$

The formula for the margin of error depends on whether the population standard deviations ($\sigma_1$ and $\sigma_2$) are known or unknown. If the population standard deviations are known, then they are used in the formula. If they are unknown, then the sample standard deviations ($s_1$ and $s_2$) are used in their place. To change from $\sigma$ known to $\sigma$ unknown, click on $\boxed{\sigma}$ and select $\boxed{s}$ in the Two Population Calculator.

While the formulas for the margin of error in the two population case are similar to those in the one population case, the formula for the degrees of freedom is quite a bit more complicated:

$df = \dfrac{\left(\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}\right)^2}{\frac{1}{n_1 - 1}\left(\frac{s_1^2}{n_1}\right)^2 + \frac{1}{n_2 - 1}\left(\frac{s_2^2}{n_2}\right)^2}$

Although this formula does seem intimidating at first sight, there is a shortcut to get the answer faster. Notice that the terms $\frac{s_1^2}{n_1}$ and $\frac{s_2^2}{n_2}$ each appear twice. These terms were already computed when finding the margin of error, so they don't need to be calculated again.

If the two population variances are assumed to be equal, an alternative formula for computing the degrees of freedom is used. It's simply $df = n_1 + n_2 - 2$. This is a simple extension of the formula for the one population case. In the one population case, the degrees of freedom is given by $df = n - 1$. If we add up the degrees of freedom for the two samples, we get $df = (n_1 - 1) + (n_2 - 1) = n_1 + n_2 - 2$. This formula gives a pretty good approximation of the more complicated formula above.
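As a rough Python sketch of these calculations (with invented sample figures and $\sigma$ unknown, so the sample standard deviations and the more complicated degrees-of-freedom formula are used):

```python
# Confidence interval for mu_1 - mu_2 with unknown population standard deviations.
import math
from scipy.stats import t

n1, x1_bar, s1 = 30, 82.0, 9.5    # hypothetical first sample
n2, x2_bar, s2 = 35, 77.5, 11.2   # hypothetical second sample
conf = 0.95

g1, g2 = s1**2 / n1, s2**2 / n2
std_err = math.sqrt(g1 + g2)                                 # standard error of x1_bar - x2_bar
df = (g1 + g2) ** 2 / (g1**2 / (n1 - 1) + g2**2 / (n2 - 1))  # complicated df formula

margin = t.ppf(1 - (1 - conf) / 2, df) * std_err             # margin of error
point = x1_bar - x2_bar                                      # point estimate
print(f"{conf:.0%} CI: ({point - margin:.2f}, {point + margin:.2f}), df = {df:.1f}")
```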

Just like in hypothesis tests about a single population mean, there are lower-tail, upper-tail, and two-tailed tests. However, the null and alternative hypotheses are slightly different. First of all, instead of having $\mu$ on the left side of the equality, we have $\mu_1 - \mu_2$. On the right side of the equality, we don't have $\mu_0$, the hypothesized value of the population mean. Instead, we have $D_0$, the hypothesized difference between the population means. To switch from a lower tail test to an upper tail or two-tailed test, click on $\boxed{\geq}$ and select $\boxed{\leq}$ or $\boxed{=}$, respectively.

Again, hypothesis testing for a single population mean is very similar to hypothesis testing for two population means. For a single population mean, the test statistic is the difference between $\bar{x}$ and $\mu_0$ divided by the standard error. For two population means, the test statistic is the difference between $\bar{x}_1 - \bar{x}_2$ and $D_0$ divided by the standard error. The procedure after computing the test statistic is identical to the one population case. That is, you proceed with the p-value approach or the critical value approach in exactly the same way.
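For completeness, here is a small sketch of the test statistic when the population standard deviations are known, so the z distribution applies; the figures are invented:

```python
# Test statistic for H0: mu_1 - mu_2 = D_0 with known population standard deviations.
import math
from scipy.stats import norm

n1, x1_bar, sigma1 = 30, 82.0, 10.0   # hypothetical first sample
n2, x2_bar, sigma2 = 35, 77.5, 12.0   # hypothetical second sample
D_0 = 0.0                             # hypothesized difference

std_err = math.sqrt(sigma1**2 / n1 + sigma2**2 / n2)
z = (x1_bar - x2_bar - D_0) / std_err
p_two = 2 * norm.sf(abs(z))           # two-tailed p-value

print(f"z = {z:.3f}, p-value = {p_two:.4f}")
```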

The calculator above computes confidence intervals and hypothesis tests for the difference between two population means. The simpler version of this is confidence intervals and hypothesis tests for a single population mean. For confidence intervals about a single population mean, visit the Confidence Interval Calculator. For hypothesis tests about a single population mean, visit the Hypothesis Testing Calculator.
