The t-Table

 

Table 1 shows a t-table, which many semiconductor process engineers use to draw conclusions about their processes from the finite number of output measurements they are able to collect.  For more information on how to use this t-table, refer to: The t-Distribution

   

Table 1.  Percentage Points of the t-Distribution

α (one tail)     0.10     0.05     0.025    0.01     0.005    0.001    0.0005
α (two tails)    0.20     0.10     0.05     0.02     0.01     0.002    0.001
------------------------------------------------------------------------------
d.f. = 1         3.078    6.314    12.71    31.82    63.66    318.3    637
d.f. = 2         1.886    2.920    4.303    6.965    9.925    22.33    31.6
d.f. = 3         1.638    2.353    3.182    4.541    5.841    10.210   12.92
d.f. = 4         1.533    2.132    2.776    3.747    4.604    7.173    8.610
d.f. = 5         1.476    2.015    2.571    3.365    4.032    5.893    6.869
d.f. = 6         1.440    1.943    2.447    3.143    3.707    5.208    5.959
d.f. = 7         1.415    1.895    2.365    2.998    3.499    4.785    5.408
d.f. = 8         1.397    1.860    2.306    2.896    3.355    4.501    5.041
d.f. = 9         1.383    1.833    2.262    2.821    3.250    4.297    4.781
d.f. = 10        1.372    1.812    2.228    2.764    3.169    4.144    4.587
d.f. = 11        1.363    1.796    2.201    2.718    3.106    4.025    4.437
d.f. = 12        1.356    1.782    2.179    2.681    3.055    3.930    4.318
d.f. = 13        1.350    1.771    2.160    2.650    3.012    3.852    4.221
d.f. = 14        1.345    1.761    2.145    2.624    2.977    3.787    4.140
d.f. = 15        1.341    1.753    2.131    2.602    2.947    3.733    4.073
d.f. = 16        1.337    1.746    2.120    2.583    2.921    3.686    4.015
d.f. = 17        1.333    1.740    2.110    2.567    2.898    3.646    3.965
d.f. = 18        1.330    1.734    2.101    2.552    2.878    3.610    3.922
d.f. = 19        1.328    1.729    2.093    2.539    2.861    3.579    3.883
d.f. = 20        1.325    1.725    2.086    2.528    2.845    3.552    3.850
d.f. = 21        1.323    1.721    2.080    2.518    2.831    3.527    3.819
d.f. = 22        1.321    1.717    2.074    2.508    2.819    3.505    3.792
d.f. = 23        1.319    1.714    2.069    2.500    2.807    3.485    3.768
d.f. = 24        1.318    1.711    2.064    2.492    2.797    3.467    3.745
d.f. = 25        1.316    1.708    2.060    2.485    2.787    3.450    3.725
d.f. = 26        1.315    1.706    2.056    2.479    2.779    3.435    3.707
d.f. = 27        1.314    1.703    2.052    2.473    2.771    3.421    3.690
d.f. = 28        1.313    1.701    2.048    2.467    2.763    3.408    3.674
d.f. = 29        1.311    1.699    2.045    2.462    2.756    3.396    3.659
d.f. = 30        1.310    1.697    2.042    2.457    2.750    3.385    3.646
d.f. = 32        1.309    1.694    2.037    2.449    2.738    3.365    3.622
d.f. = 34        1.307    1.691    2.032    2.441    2.728    3.348    3.601
d.f. = 36        1.306    1.688    2.028    2.434    2.719    3.333    3.582
d.f. = 38        1.304    1.686    2.024    2.429    2.712    3.319    3.566
d.f. = 40        1.303    1.684    2.021    2.423    2.704    3.307    3.551
d.f. = 42        1.302    1.682    2.018    2.418    2.698    3.296    3.538
d.f. = 44        1.301    1.680    2.015    2.414    2.692    3.286    3.526
d.f. = 46        1.300    1.679    2.013    2.410    2.687    3.277    3.515
d.f. = 48        1.299    1.677    2.011    2.407    2.682    3.269    3.505
d.f. = 50        1.299    1.676    2.009    2.403    2.678    3.261    3.496
d.f. = 55        1.297    1.673    2.004    2.396    2.668    3.245    3.476
d.f. = 60        1.296    1.671    2.000    2.390    2.660    3.232    3.460
d.f. = 65        1.295    1.669    1.997    2.385    2.654    3.220    3.447
d.f. = 70        1.294    1.667    1.994    2.381    2.648    3.211    3.435
d.f. = 80        1.292    1.664    1.990    2.374    2.639    3.195    3.416
d.f. = 100       1.290    1.660    1.984    2.364    2.626    3.174    3.390
d.f. = 150       1.287    1.655    1.976    2.351    2.609    3.145    3.357
d.f. = 200       1.286    1.653    1.972    2.345    2.601    3.131    3.340
------------------------------------------------------------------------------
α (two tails)    0.20     0.10     0.05     0.02     0.01     0.002    0.001
α (one tail)     0.10     0.05     0.025    0.01     0.005    0.001    0.0005

   

Many engineers need to perform experiments that consist of measuring the value of an output response y of a normal process at a given setting of an input variable x.  In such experiments, we know from experience that even if we can set the input variable x to precisely the same value, we will probably get a different value for the output response y every time we measure it.  This is because of measurement errors, a reality of engineering life that we all need to contend with.

        

If several measurements of the output y are taken at a given setting of x, we can compute the mean Y and the standard deviation s of these sample measurements.  These sample values, however, are different from the actual mean µ and standard deviation σ of the entire normal distribution of y values that can be obtained at that setting of x.  Thus, Y and s are simply sample estimates of the actual population mean µ and standard deviation σ of the process output y, respectively.
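For readers who want to follow along numerically, here is a minimal sketch in Python (NumPy assumed available; the measurement values are made up for illustration) of how Y and s are computed from a handful of repeated measurements:

    import numpy as np

    # Hypothetical repeated measurements of the output y at one fixed setting of x
    y = np.array([4.48, 4.53, 4.61, 4.39, 4.49])

    Y = y.mean()          # sample mean, an estimate of the population mean µ
    s = y.std(ddof=1)     # sample standard deviation, an estimate of σ (ddof=1 uses the n - 1 divisor)

    print("Y =", round(Y, 3), " s =", round(s, 3))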

        

In most real-life problems, it is not possible to know the actual values of µ and σ with absolute certainty, so as engineers we have nothing to work with except Y and s, which we can get from the finite number of measurements that we are able to make.  The question now is, how close is Y to µ?

           

To answer this question, let us define a quantity t as follows:  t = (Y - µ) / (s / sqrt(n)), where n is the number of random measurements taken.  A population of t values forms a t-distribution, which also looks like a normal distribution.  It is, however, a bit wider than the normal distribution, given that it uses s to define its dispersion.
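As an illustration (again with hypothetical numbers, and assuming NumPy is available), the t quantity defined above can be computed directly from a set of measurements and a candidate value of µ:

    import numpy as np

    y = np.array([4.48, 4.53, 4.61, 4.39, 4.49])   # hypothetical measurements
    mu = 4.5                                       # candidate value of the population mean

    n = y.size
    t_value = (y.mean() - mu) / (y.std(ddof=1) / np.sqrt(n))
    print("t =", round(t_value, 3))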

   

As the number of degrees of freedom of the t-distribution (defined as n - 1, where n is the number of independent measurements taken) increases, the t-distribution approaches the normal distribution more and more closely.  In fact, at a high enough number of degrees of freedom (say, > 100), the t-distribution becomes practically indistinguishable from the normal distribution.
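This convergence is easy to check numerically.  The sketch below (SciPy assumed installed) prints the two-tailed 95% critical value of the t-distribution for increasing degrees of freedom alongside the corresponding normal-distribution value of about 1.960:

    from scipy.stats import norm, t

    # Two-tailed 95% critical values: the upper 0.975 quantile of each distribution
    for df in (1, 5, 10, 30, 100, 1000):
        print(f"d.f. = {df:4d}   t = {t.ppf(0.975, df):.3f}")
    print(f"normal        z = {norm.ppf(0.975):.3f}")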

        

The t-distribution can therefore be used to analyze normal distributions in cases where the true value of σ cannot be obtained, such as when the sample size is limited.  Increasing the number of measurements taken for y increases the degrees of freedom used to estimate σ, bringing the values of Y and s closer to µ and σ of the entire normal population, respectively.

         

One of the uses of the t-distribution is in determining the range of values between which the actual population mean µ of a group of measurements will fall with a given probability, provided that the experiment was performed in a randomized manner.  This is done by calculating the sample mean Y and standard deviation s from measurements taken over a number of experimental runs.  The following equation is then applied:

         

Range of values within which µ will fall with probability (1 - α):  µ = Y +/- [t(α, d.f.)] [s / sqrt(n)], with the value of t(α, d.f.) coming from the t-table (Table 1).
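A minimal sketch of this calculation in Python (SciPy assumed available; the function name t_confidence_interval is just illustrative), computing the interval directly from a set of raw measurements:

    import numpy as np
    from scipy.stats import t

    def t_confidence_interval(y, probability=0.95):
        """Return (low, high) bounds within which µ lies with the given probability."""
        y = np.asarray(y, dtype=float)
        n = y.size
        alpha = 1.0 - probability                    # two-tailed α, as read from Table 1
        half_width = t.ppf(1.0 - alpha / 2.0, df=n - 1) * y.std(ddof=1) / np.sqrt(n)
        return y.mean() - half_width, y.mean() + half_width

    # Example with hypothetical measurements
    print(t_confidence_interval([4.48, 4.53, 4.61, 4.39, 4.49]))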

          

Note that Table 1 shows the α values instead of the probability values, so these have to be subtracted from 1 in order to obtain their corresponding probability levels, e.g., the 95% probability level corresponds to the column where α = 0.05 (two tails).

   

As an example, suppose that you obtained 5 independent measurements of the output of your process for the same set of inputs, from which you calculated a sample mean Y = 4.5 and a sample standard deviation s = 0.1.  If you're interested in the range of values between which there's a probability of 95% that the real population mean µ will lie, then (referring to the t-table) you'd have to use a value of 2.776 for t (for d.f. = 4 and α = 0.05, two tails).  Thus, given your data, there's a 95% probability that µ will lie between 4.5 - [2.776 (0.1/2.236)] and 4.5 + [2.776 (0.1/2.236)], i.e., 4.376 < µ < 4.624.
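For reference, the same numbers can be reproduced with SciPy (a sketch using only the values stated in the example above):

    import numpy as np
    from scipy.stats import t

    n, Y, s = 5, 4.5, 0.1                     # values from the example
    t_crit = t.ppf(0.975, df=n - 1)           # about 2.776, the Table 1 entry for d.f. = 4, α = 0.05 (two tails)
    half = t_crit * s / np.sqrt(n)            # about 0.124
    print(round(Y - half, 3), round(Y + half, 3))   # 4.376  4.624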

       

LINKS:  Normal Distribution | Cpk - ppm Table

   


Copyright © 2005 EESemi.com. All Rights Reserved.