Statistics 550 Notes 21
Reading: Section 4.2
I. Neyman-Pearson Lemma (Section 4.2)
Define the likelihood ratio statistic by

$L(x, \theta_0, \theta_1) = \dfrac{p(x, \theta_1)}{p(x, \theta_0)}$,

where $p(x, \theta)$ is the probability mass function or probability density function of the data X.


The statistic L takes on the value $\infty$ when $p(x, \theta_1) > 0$ and $p(x, \theta_0) = 0$, and by convention equals 0 when both $p(x, \theta_1) = 0$ and $p(x, \theta_0) = 0$.
We call $\varphi_k$ a likelihood ratio test with cutoff k if

$\varphi_k(x) = 1$ when $L(x, \theta_0, \theta_1) > k$ and $\varphi_k(x) = 0$ when $L(x, \theta_0, \theta_1) < k$

(when $L(x, \theta_0, \theta_1) = k$, $\varphi_k(x)$ may take any value in [0, 1], i.e., randomization is allowed).
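As a small illustration of these conventions, here is a minimal Python sketch (the function name is hypothetical) that evaluates the likelihood ratio at a single point:

import math

def likelihood_ratio(p0, p1):
    # L = p(x, theta_1) / p(x, theta_0), with L = +infinity when p1 > 0 and p0 = 0,
    # and L = 0 by convention when both p0 and p1 are 0.
    if p0 == 0.0:
        return math.inf if p1 > 0.0 else 0.0
    return p1 / p0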
Theorem 4.2.1 (Neyman-Pearson Lemma): Consider testing $H_0: \theta = \theta_0$ vs. $H_1: \theta = \theta_1$.

(a) If $\alpha > 0$ and $\varphi_k$ is a size $\alpha$ likelihood ratio test, then $\varphi_k$ is a most powerful level $\alpha$ test.

(b) For each $\alpha$, $0 < \alpha \le 1$, there exists a most powerful size $\alpha$ likelihood ratio test.

(c) If $\varphi$ is a most powerful level $\alpha$ test, then it must be a level $\alpha$ likelihood ratio test except perhaps on a set A satisfying $P_{\theta_0}(X \in A) = P_{\theta_1}(X \in A) = 0$.


Example 1: $X_1, \ldots, X_n$ iid $N(\mu, 1)$. $H_0: \mu = 0$, $H_1: \mu = \mu_1$, where $\mu_1 > 0$.

The likelihood ratio statistic is

$L(x, 0, \mu_1) = \dfrac{\prod_{i=1}^n \frac{1}{\sqrt{2\pi}} \exp\{-\frac{1}{2}(x_i - \mu_1)^2\}}{\prod_{i=1}^n \frac{1}{\sqrt{2\pi}} \exp\{-\frac{1}{2} x_i^2\}} = \exp\left\{ n \mu_1 \bar{x} - \dfrac{n \mu_1^2}{2} \right\}.$

Rejecting the null hypothesis for large values of $L(x, 0, \mu_1)$ is equivalent to rejecting the null hypothesis for large values of $\bar{x}$ (using the fact that $\mu_1 > 0$).


What should the cutoff be? The distribution of $\bar{X}$ under the null hypothesis is $N(0, 1/n)$, so the most powerful level $\alpha$ test rejects for

$\bar{X} \ge \dfrac{z_{1-\alpha}}{\sqrt{n}} = \dfrac{\Phi^{-1}(1-\alpha)}{\sqrt{n}}$,

where $\Phi$ is the CDF of a standard normal.

For the data considered in Notes 20, (1.1064, 1.1568, -0.1602, 1.0343, -0.1079), with $H_0: \mu = 0$ vs. $H_1: \mu = \mu_1 > 0$, the most powerful level $\alpha = 0.05$ test rejects for $\bar{X} \ge 1.645/\sqrt{5} \approx 0.736$. We have $\bar{X} \approx 0.606$, so we accept (retain) $H_0$.
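A short Python sketch of this calculation (using the unit variance and the level assumed above; scipy supplies the normal quantile):

import math
from scipy.stats import norm

x = [1.1064, 1.1568, -0.1602, 1.0343, -0.1079]   # data from Notes 20
n = len(x)
alpha = 0.05                                      # significance level (assumed here)

xbar = sum(x) / n                                 # under H0, xbar ~ N(0, 1/n)
cutoff = norm.ppf(1 - alpha) / math.sqrt(n)       # z_{1-alpha} / sqrt(n)

print(round(xbar, 4), round(cutoff, 4))           # approximately 0.6059 and 0.7356
print("reject H0" if xbar >= cutoff else "retain H0")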


Example 2:

x                       0     1     2     3     4
P(X = x) under θ_0      0.1   0.1   0.1   0.2   0.5
P(X = x) under θ_1      0.3   0.3   0.2   0.1   0.1
L(x, θ_0, θ_1)          3     3     2     0.5   0.2

The most powerful level 0.2 test rejects if and only if X=0 or 1.

There are multiple most powerful level 0.1 tests, e.g., 1) reject the null hypothesis if and only if X=0; 2) reject the null hypothesis if and only if X=1; 3) flip a coin to decide whether to reject the null hypothesis when X=0 or X=1.
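The construction behind these tests (put the points with the largest likelihood ratio into the rejection region first, randomizing at the boundary if the null probabilities do not add up exactly to the level) can be sketched in Python; the function below is hypothetical, written for the pmfs in the table above:

import math

p0 = {0: 0.1, 1: 0.1, 2: 0.1, 3: 0.2, 4: 0.5}   # pmf under theta_0
p1 = {0: 0.3, 1: 0.3, 2: 0.2, 3: 0.1, 4: 0.1}   # pmf under theta_1

def most_powerful_test(p0, p1, alpha):
    # Returns phi[x] = probability of rejecting H0 when X = x (Neyman-Pearson construction).
    lr = {x: (math.inf if p0[x] == 0 else p1[x] / p0[x]) for x in p0}
    phi = {x: 0.0 for x in p0}
    budget = alpha                          # size still available under the null
    for x in sorted(p0, key=lambda v: lr[v], reverse=True):
        if p0[x] <= budget:
            phi[x] = 1.0                    # reject outright
            budget -= p0[x]
        elif budget > 0:
            phi[x] = budget / p0[x]         # randomize to use up the remaining size
            budget = 0.0
    return phi

print(most_powerful_test(p0, p1, alpha=0.2))   # rejects iff X = 0 or X = 1
print(most_powerful_test(p0, p1, alpha=0.1))   # one of the most powerful level 0.1 tests (reject iff X = 0)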

Proof of Neyman-Pearson Lemma:

(a) We prove the result here for continuous random variables X. The proof for discrete random variables follows by replacing integrals with sums.

Let $\varphi$ be the test function of any other level $\alpha$ test besides $\varphi_k$. Because $\varphi$ is level $\alpha$, $E_{\theta_0}[\varphi(X)] \le \alpha$. We want to show that $E_{\theta_1}[\varphi_k(X)] \ge E_{\theta_1}[\varphi(X)]$.

We examine $\int [\varphi_k(x) - \varphi(x)][p(x, \theta_1) - k\, p(x, \theta_0)]\, dx$ and show that it is $\ge 0$. From this, we conclude that

$\int [\varphi_k(x) - \varphi(x)]\, p(x, \theta_1)\, dx \ge k \int [\varphi_k(x) - \varphi(x)]\, p(x, \theta_0)\, dx.$

The latter integral is $\ge 0$ because $E_{\theta_0}[\varphi_k(X)] = \alpha$ (the test $\varphi_k$ has size $\alpha$) while $E_{\theta_0}[\varphi(X)] \le \alpha$.

Hence, we conclude that $E_{\theta_1}[\varphi_k(X)] \ge E_{\theta_1}[\varphi(X)]$, as desired.
To show that $\int [\varphi_k(x) - \varphi(x)][p(x, \theta_1) - k\, p(x, \theta_0)]\, dx \ge 0$, let

$S = \{x: \varphi_k(x) > \varphi(x)\}$ and $T = \{x: \varphi_k(x) < \varphi(x)\}$.

Suppose $x \in S$. This implies $\varphi_k(x) > 0$, which implies that $p(x, \theta_1) \ge k\, p(x, \theta_0)$. Thus,

$\int_S [\varphi_k(x) - \varphi(x)][p(x, \theta_1) - k\, p(x, \theta_0)]\, dx \ge 0$.

Also, similarly, for $x \in T$, $\varphi_k(x) < 1$, which implies that $p(x, \theta_1) \le k\, p(x, \theta_0)$, and

$\int_T [\varphi_k(x) - \varphi(x)][p(x, \theta_1) - k\, p(x, \theta_0)]\, dx \ge 0$

(since $\varphi_k(x) - \varphi(x) < 0$ and $p(x, \theta_1) - k\, p(x, \theta_0) \le 0$ for $x \in T$).

Thus,

$\int [\varphi_k(x) - \varphi(x)][p(x, \theta_1) - k\, p(x, \theta_0)]\, dx = \int_S + \int_T \ge 0$,

and this shows that $\int [\varphi_k(x) - \varphi(x)]\, p(x, \theta_1)\, dx \ge k \int [\varphi_k(x) - \varphi(x)]\, p(x, \theta_0)\, dx$, as argued above.
(b) Let $\alpha(c) = P_{\theta_0}(L(X, \theta_0, \theta_1) > c) = 1 - F_0(c)$, where $F_0$ is the cdf of $L(X, \theta_0, \theta_1)$ under $\theta_0$. By the properties of CDFs, $\alpha(c)$ is nonincreasing in c and right continuous.

By the right continuity of $\alpha(c)$, there exists $c_0$ such that

$\alpha(c_0) \le \alpha \le \alpha(c_0^-)$, where $\alpha(c_0^-) = P_{\theta_0}(L(X, \theta_0, \theta_1) \ge c_0)$. So define

$\varphi(x) = 1$ if $L(x, \theta_0, \theta_1) > c_0$, $\varphi(x) = \gamma$ if $L(x, \theta_0, \theta_1) = c_0$, and $\varphi(x) = 0$ if $L(x, \theta_0, \theta_1) < c_0$,

where $\gamma = \dfrac{\alpha - \alpha(c_0)}{\alpha(c_0^-) - \alpha(c_0)}$ if $\alpha(c_0^-) > \alpha(c_0)$, and $\gamma = 0$ otherwise.

Then,

$E_{\theta_0}[\varphi(X)] = \alpha(c_0) + \gamma\,[\alpha(c_0^-) - \alpha(c_0)] = \alpha.$

So we can take the cutoff k to be $c_0$.


(c) Let $\varphi^*$ be the test function for any most powerful level $\alpha$ test. By parts (a) and (b), a likelihood ratio test $\varphi_k$ with size $\alpha$ can be found that is most powerful. Since $\varphi^*$ and $\varphi_k$ are both most powerful, it follows that

$E_{\theta_1}[\varphi^*(X)] = E_{\theta_1}[\varphi_k(X)].$

Following the proof in part (a), this implies that

$\int [\varphi_k(x) - \varphi^*(x)][p(x, \theta_1) - k\, p(x, \theta_0)]\, dx = 0,$

which can be the case if and only if $\varphi^*(x) = 1$ when $p(x, \theta_1) > k\, p(x, \theta_0)$ (i.e., $L(x, \theta_0, \theta_1) > k$) and $\varphi^*(x) = 0$ when $p(x, \theta_1) < k\, p(x, \theta_0)$ (i.e., $L(x, \theta_0, \theta_1) < k$), except perhaps on a set A satisfying $P_{\theta_0}(X \in A) = P_{\theta_1}(X \in A) = 0$.


Connection of likelihood ratio tests to Bayes tests: Consider the 0-1 loss for hypothesis testing. The Bayes test chooses $H_1$ over $H_0$ if $P(\theta_1 \mid x) > P(\theta_0 \mid x)$, which is equivalent to $\pi_1\, p(x, \theta_1) > \pi_0\, p(x, \theta_0)$, where $(\pi_0, \pi_1)$ is the prior on $\{\theta_0, \theta_1\}$. The likelihood ratio test

that rejects $H_0$ when $L(x, \theta_0, \theta_1) > k$

is a Bayes test for the prior $\pi_0 = k/(1+k)$, $\pi_1 = 1/(1+k)$, because for this prior the posterior ratio $P(\theta_1 \mid x)/P(\theta_0 \mid x) = L(x, \theta_0, \theta_1)/k$ is greater than 1 if and only if $L(x, \theta_0, \theta_1) > k$.
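A small Python check of this equivalence, using the pmfs from Example 2 and equal prior probabilities (so the cutoff is $k = \pi_0/\pi_1 = 1$):

p0 = {0: 0.1, 1: 0.1, 2: 0.1, 3: 0.2, 4: 0.5}   # pmf under theta_0 (Example 2)
p1 = {0: 0.3, 1: 0.3, 2: 0.2, 3: 0.1, 4: 0.1}   # pmf under theta_1

pi0, pi1 = 0.5, 0.5        # equal prior probabilities on theta_0 and theta_1
k = pi0 / pi1              # cutoff k = 1

for x in p0:
    bayes_chooses_H1 = pi1 * p1[x] > pi0 * p0[x]   # posterior of theta_1 exceeds that of theta_0
    lr_test_rejects = p1[x] > k * p0[x]            # L(x, theta_0, theta_1) > k
    print(x, bayes_chooses_H1, lr_test_rejects)    # the two decisions agree for every x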


The difference between the Bayes approach and the Neyman-Pearson approach is that the Bayes approach starts with a prior and this determines the cutoff for the likelihood ratio test, while the Neyman-Pearson approach starts with a significance level (a maximum acceptable Type I error rate) and this determines the cutoff for the likelihood ratio test.
II. Uniformly Most Powerful Tests (Section 4.3)
When the alternative hypothesis is composite, $H_1: \theta \in \Theta_1$, the power can be different for different alternatives. For each particular alternative $\theta_1 \in \Theta_1$, a test is the most powerful level $\alpha$ test for the alternative $\theta_1$ if the test is most powerful for the simple alternative $H_1: \theta = \theta_1$.
If a particular test function $\varphi^*$ is the most powerful level $\alpha$ test for all alternatives $\theta_1 \in \Theta_1$, then we say that $\varphi^*$ is a uniformly most powerful (UMP) level $\alpha$ test and we should clearly use $\varphi^*$ as our test function under the Neyman-Pearson paradigm.
In notation, let $\beta_\varphi(\theta)$ denote the power of a test $\varphi$ at the alternative $\theta$. A level $\alpha$ test $\varphi^*$ is UMP if

$\beta_{\varphi^*}(\theta) \ge \beta_\varphi(\theta)$ for all $\theta \in \Theta_1$

for any other level $\alpha$ test $\varphi$.


Example of uniformly most powerful test:

Let $X_1, \ldots, X_n$ be iid $N(\mu, 1)$ and suppose we want to test $H_0: \mu = 0$ versus $H_1: \mu > 0$. For each $\mu_1 > 0$, the most powerful level $\alpha$ test of $H_0: \mu = 0$ versus $H_1: \mu = \mu_1$ rejects the null hypothesis for $\bar{X} \ge z_{1-\alpha}/\sqrt{n}$ (Example 1). Since this same test function is most powerful for each $\mu_1 > 0$, this test function is UMP.


But suppose we consider the alternative hypothesis $H_1: \mu \ne 0$. Then there is no UMP test. The most powerful test for each $\mu_1 > 0$ rejects the null hypothesis for $\bar{X} \ge z_{1-\alpha}/\sqrt{n}$, but the most powerful test for each $\mu_1 < 0$ rejects for $\bar{X} \le -z_{1-\alpha}/\sqrt{n}$. Note that the test that rejects the null hypothesis for $\bar{X} \ge z_{1-\alpha}/\sqrt{n}$ cannot be most powerful for an alternative $\mu_1 < 0$ by part (c) (necessity) of the Neyman-Pearson Lemma since it is not a likelihood ratio test for $H_0: \mu = 0$ vs. $H_1: \mu = \mu_1 < 0$.
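To see numerically that neither one-sided test dominates, the following Python sketch (with n = 5 and $\alpha = 0.05$ chosen for illustration) computes the power of the right-tailed and left-tailed tests at $\mu = \pm 1$:

import math
from scipy.stats import norm

n, alpha = 5, 0.05                 # sample size and level chosen for illustration
z = norm.ppf(1 - alpha)

def power_right(mu):
    # power of the test that rejects for xbar >= z/sqrt(n), when xbar ~ N(mu, 1/n)
    return 1 - norm.cdf(z - math.sqrt(n) * mu)

def power_left(mu):
    # power of the test that rejects for xbar <= -z/sqrt(n)
    return norm.cdf(-z - math.sqrt(n) * mu)

for mu in (-1.0, 1.0):
    print(mu, round(power_right(mu), 3), round(power_left(mu), 3))
# The right-tailed test wins at mu = 1 and the left-tailed test wins at mu = -1,
# so neither is uniformly most powerful against H1: mu != 0.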

It is rare for UMP tests to exist. However, for one-sided alternatives, they exist in some problems.


A condition under which UMP tests exist is when the family of distributions being considered possesses a property called monotone likelihood ratio.
Definition: The one-parameter family of distributions $\{p(x, \theta): \theta \in \Theta\}$, $\Theta \subset \mathbb{R}$, is said to be a monotone likelihood ratio family in the one-dimensional statistic $T(x)$ if for $\theta_0 < \theta_1$:

(a) $p(\cdot, \theta_0)$ and $p(\cdot, \theta_1)$ are distinct, i.e., they are not equal for all x (identifiability);

(b) $L(x, \theta_0, \theta_1) = p(x, \theta_1)/p(x, \theta_0)$ is an increasing function of $T(x)$.
Examples of families with monotone likelihood ratio:

(a) For $X_1, \ldots, X_n$ iid Exponential($\lambda$) with density $p(x, \lambda) = \lambda e^{-\lambda x}$, $x > 0$,

$L(x, \lambda_0, \lambda_1) = \dfrac{\lambda_1^n \exp\{-\lambda_1 \sum_{i=1}^n x_i\}}{\lambda_0^n \exp\{-\lambda_0 \sum_{i=1}^n x_i\}} = \left(\dfrac{\lambda_1}{\lambda_0}\right)^n \exp\left\{-(\lambda_1 - \lambda_0) \sum_{i=1}^n x_i\right\}.$

For $\lambda_0 < \lambda_1$, $L(x, \lambda_0, \lambda_1)$ is an increasing function of $-\sum_{i=1}^n x_i$, so the family has monotone likelihood ratio in $T(x) = -\sum_{i=1}^n x_i$.

(b) Consider the one-parameter exponential family model

$p(x, \theta) = h(x) \exp\{\eta(\theta) T(x) - B(\theta)\}.$

If $\eta(\theta)$ is strictly increasing in $\theta$, then the family is monotone likelihood ratio in $T(x)$. If $\eta(\theta)$ is strictly decreasing in $\theta$, then the family is monotone likelihood ratio in $-T(x)$.


Example 2: Let $X \sim \text{Binomial}(n, \theta)$, $0 < \theta < 1$.

This is a one-parameter exponential family with

$p(x, \theta) = \binom{n}{x} \theta^x (1-\theta)^{n-x} = \binom{n}{x} \exp\left\{ x \log\dfrac{\theta}{1-\theta} + n \log(1-\theta) \right\}$,

so that $T(x) = x$ and $\eta(\theta) = \log\dfrac{\theta}{1-\theta}$.

Since $\log\dfrac{\theta}{1-\theta}$ is strictly increasing in $\theta$, the family is monotone likelihood ratio in x.
Theorem (4.3.1 and Corollary): If the one-parameter family of distributions $\{p(x, \theta): \theta \in \Theta\}$ has monotone likelihood ratio in $T(x)$, then there exists a UMP level $\alpha$ test for testing $H_0: \theta = \theta_0$ versus $H_1: \theta > \theta_0$ and it is given by

$\varphi^*(x) = 1$ if $T(x) > c$, $\varphi^*(x) = \gamma$ if $T(x) = c$, and $\varphi^*(x) = 0$ if $T(x) < c$,

where c and $\gamma$ are determined so that $E_{\theta_0}[\varphi^*(X)] = \alpha$.

Proof: Fix an alternative $\theta_1 > \theta_0$. Because the family is monotone likelihood ratio in $T(x)$, the likelihood ratio statistic

$L(x, \theta_0, \theta_1) = p(x, \theta_1)/p(x, \theta_0)$

is an increasing function of $T(x)$, so rejecting for large values of $T(x)$ is the same as rejecting for large values of the likelihood ratio, and $\varphi^*$ is a size $\alpha$ likelihood ratio test. The Neyman-Pearson lemma gives that $\varphi^*$ is a most powerful test for testing $H_0: \theta = \theta_0$ versus $H_1: \theta = \theta_1$. Since this holds for all $\theta_1 > \theta_0$, we have that $\varphi^*$ is UMP.


Note: Not all one-parameter families have monotone likelihood ratio. For example, the Cauchy location family is not monotone likelihood ratio in x:

$p(x, \theta) = \dfrac{1}{\pi [1 + (x - \theta)^2]}$.

For $\theta_0 < \theta_1$,

$L(x, \theta_0, \theta_1) = \dfrac{p(x, \theta_1)}{p(x, \theta_0)} = \dfrac{1 + (x - \theta_0)^2}{1 + (x - \theta_1)^2}$,

which is not increasing in x (it tends to 1 as $x \to \pm\infty$).
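A quick numerical check, with $\theta_0 = 0$ and $\theta_1 = 1$ chosen for illustration: evaluating the Cauchy likelihood ratio on a grid shows it falls, rises, and falls back toward 1, so it is not monotone in x.

theta0, theta1 = 0.0, 1.0          # illustrative values with theta0 < theta1

def cauchy_lr(x):
    # L(x, theta0, theta1) = p(x, theta1) / p(x, theta0) for the Cauchy location family
    return (1 + (x - theta0) ** 2) / (1 + (x - theta1) ** 2)

xs = [-5, -1, 0, 1, 2, 5, 50]
print([round(cauchy_lr(x), 3) for x in xs])
# about [0.703, 0.4, 0.5, 2.0, 2.5, 1.529, 1.041]: not monotone in x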


Now consider $H_0: \theta \le \theta_0$ versus $H_1: \theta > \theta_0$. Is the above test $\varphi^*$ still UMP?
Fact: The test $\varphi^*$ is level $\alpha$ for $H_0: \theta \le \theta_0$ (i.e., for every $\theta \le \theta_0$, the probability of rejection is at most $\alpha$).
Before proving this fact, we prove Corollary 4.2.1.
Corollary 4.2.1: If $\varphi$ is a most powerful level $\alpha$ test of $H_0: \theta = \theta_0$ vs. $H_1: \theta = \theta_1$, then the power of $\varphi$ at $\theta_1$ is greater than or equal to $\alpha$, with equality if and only if

$p(x, \theta_0) = p(x, \theta_1)$

with probability one (under both $\theta_0$ and $\theta_1$).
Proof: The test $\varphi_\alpha(x) \equiv \alpha$ for all x (i.e., reject with probability $\alpha$ regardless of x) has level $\alpha$ and power $\alpha$, so any most powerful level $\alpha$ test has power at least $\alpha$; if the power equals $\alpha$, then $\varphi_\alpha$ is itself most powerful. By the necessity part of the Neyman-Pearson lemma, if $\varphi_\alpha$ were most powerful, then it would have to be a likelihood ratio test. Therefore, $p(x, \theta_1) = k\, p(x, \theta_0)$ with probability one, which (integrating both sides over x) implies that k = 1 and consequently that $p(x, \theta_1) = p(x, \theta_0)$ with probability one.
Proof of fact: Let $\beta(\theta) = E_\theta[\varphi^*(X)]$ denote the power function of $\varphi^*$. We want to show $\beta(\theta) \le \alpha$ for $\theta \le \theta_0$. Fix $\theta' < \theta_0$. Then, by the monotone likelihood ratio property and the Neyman-Pearson lemma, $\varphi^*$ is the most powerful test for testing $H_0: \theta = \theta'$ vs. $H_1: \theta = \theta_0$ at level $\beta(\theta')$. But from Corollary 4.2.1,

$\beta(\theta_0) \ge \beta(\theta')$, i.e., $\beta(\theta') \le \beta(\theta_0) = E_{\theta_0}[\varphi^*(X)] = \alpha$.
Back to the question: is $\varphi^*$ UMP level $\alpha$ for testing $H_0: \theta \le \theta_0$ versus $H_1: \theta > \theta_0$?
Yes, it is UMP. Proof: Consider a level $\alpha$ test $\varphi$ for $H_0: \theta \le \theta_0$.

$\varphi$ must also be a level $\alpha$ test for $H_0: \theta = \theta_0$. Then, because $\varphi^*$ is UMP level $\alpha$ for testing $H_0: \theta = \theta_0$ versus $H_1: \theta > \theta_0$,

$\beta_{\varphi^*}(\theta) \ge \beta_\varphi(\theta)$ for all $\theta > \theta_0$.
Summary statement of Theorem 4.3.1 and Corollary:

If the one-parameter family of distributions $\{p(x, \theta): \theta \in \Theta\}$ has monotone likelihood ratio in $T(x)$, then there exists a UMP level $\alpha$ test for testing $H_0: \theta \le \theta_0$ versus $H_1: \theta > \theta_0$ and it is given by

$\varphi^*(x) = 1$ if $T(x) > c$, $\varphi^*(x) = \gamma$ if $T(x) = c$, and $\varphi^*(x) = 0$ if $T(x) < c$,

where c and $\gamma$ are determined so that $E_{\theta_0}[\varphi^*(X)] = \alpha$.


Example 2 continued: For $X \sim \text{Binomial}(n, \theta)$ and testing

$H_0: \theta \le \theta_0$ vs. $H_1: \theta > \theta_0$, the UMP level $\alpha$ test is

$\varphi^*(x) = 1$ if $x > c$, $\varphi^*(x) = \gamma$ if $x = c$, and $\varphi^*(x) = 0$ if $x < c$,

where c is the smallest integer such that $P_{\theta_0}(X > c) \le \alpha$ and

$\gamma = \dfrac{\alpha - P_{\theta_0}(X > c)}{P_{\theta_0}(X = c)}.$
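A Python sketch of this construction (the helper function is hypothetical; scipy supplies the binomial tail probabilities), with n = 10, $\theta_0 = 0.5$, and $\alpha = 0.05$ chosen for illustration:

from scipy.stats import binom

def ump_binomial_test(n, theta0, alpha):
    # UMP level-alpha test of H0: theta <= theta0 vs. H1: theta > theta0 for X ~ Binomial(n, theta):
    # reject if X > c, reject with probability gamma if X = c, accept if X < c.
    c = 0
    while binom.sf(c, n, theta0) > alpha:    # binom.sf(c, n, p) = P(X > c)
        c += 1                               # c = smallest integer with P(X > c) <= alpha
    gamma = (alpha - binom.sf(c, n, theta0)) / binom.pmf(c, n, theta0)
    return c, gamma

c, gamma = ump_binomial_test(n=10, theta0=0.5, alpha=0.05)
print(c, round(gamma, 4))                    # c = 8, gamma about 0.89
# By construction, E_theta0[phi(X)] = P(X > c) + gamma * P(X = c) = alpha.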





