## Two-Sample Confidence Interval: Example #2

Objective: This example compares the A-level counts for a sample of Oxford University students pursuing BA and BSc degrees. The researcher believes the mean A-level count for BA students will be higher than that of BSc students. The two-sample assumptions of normality and equality of variances are checked before the two-sample confidence interval is computed. Example #1 in the two-sample hypothesis testing module examines this problem from a testing perspective.

Problem Description: Measurements were made on a random sample of 40 students. The raw data is given in Appendix A of Rees and is reproduced as a set of variables in StatObjects. The following variables were recorded:

ID: Student identification
Sex: the gender of the student (M = male; F = female)
Height: the height of the student (cm)
Siblings: number of siblings
Distance: the distance from home to Oxford (km)
Degree: type of degree (BA or BSc)
A-Level: A-level count

The assumptions are assessed initially using the A-level histogram conditioned on Degree Type. Click on the Moments triangular reveal button and select Normal Density from the Histogram popup menu. Finally, click on the student with the highest A-level count and then on red in the color palette to obtain the following plot for students seeking a BA degree:

The relatively high positive skewness value and the potential outlier suggest that normality is not a viable assumption for the A-level counts of BA degree students. In addition, the histogram is not well approximated by the normal density curve. Select normalPlot from the Histogram popup menu to display the normal quantile plot, which provides a more reliable way of assessing normality. The following plot is obtained by clicking on the Quantiles triangular reveal button and selecting Robust Fit and Quantile Lines from the QuantPlot popup menu.
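The construction behind a normal quantile plot can be sketched in a few lines of Python. The counts below are hypothetical stand-ins, not the Rees data: sorted observations are plotted against standard-normal quantiles at the plotting positions (i - 0.5)/n.

```python
from statistics import NormalDist

# Hypothetical A-level counts for illustration only (not the Rees data).
data = sorted([10, 12, 13, 14, 14, 15, 16, 18, 21, 32])
n = len(data)

# x-coordinate: standard-normal quantile at plotting position (i - 0.5)/n;
# y-coordinate: the i-th smallest observation.
points = [(NormalDist().inv_cdf((i - 0.5) / n), x)
          for i, x in enumerate(data, start=1)]

for q, x in points:
    print(f"{q:+.2f}  {x}")
```

If the data were normal, the points would fall close to a straight line; an isolated jump in the upper tail (the 32 here) produces the kind of upward curvature discussed in this example.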

The outlier is identified by clicking on the Label tool and then on the high A-level count. The upward curvature of the normal quantile plot indicates positive skewness. Student #2 is an outlier with 32 A-level counts (the maximum in the Quantiles report). The sample mean (16.2) being higher than the sample median (14.0) is also consistent with the positive skewness measure.
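The numerical checks just described (skewness and the mean–median comparison) can be sketched directly; the counts below are hypothetical, not the actual Rees sample:

```python
from statistics import mean, median, stdev

# Hypothetical A-level counts for illustration only (not the Rees data).
a_counts = [10, 12, 13, 14, 14, 15, 16, 18, 21, 32]

def skewness(xs):
    """Adjusted sample skewness: average cubed standardized deviation."""
    m, s, n = mean(xs), stdev(xs), len(xs)
    return (n / ((n - 1) * (n - 2))) * sum(((x - m) / s) ** 3 for x in xs)

m, med = mean(a_counts), median(a_counts)
print(f"mean = {m:.1f}, median = {med}, skewness = {skewness(a_counts):.2f}")
```

A mean well above the median and a clearly positive skewness value both point to a right-skewed distribution.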

The assumption of normality for students in BSc programs can be assessed by clicking on the right of the conditioning slider in each of the above two plots. Drag over the high A-count values forming a cluster and click on red in the color palette to obtain the following plot:

The sample distribution of BSc student A-level counts is not well approximated by the normal curve, but it has somewhat less skewness than that of BA students. Nonetheless, a cluster of students appears to have high A-level counts. This is confirmed by the BSc A-level count normal quantile plot, which is displayed by clicking on the right-hand side of the conditioning slider.

The BSc A-level count data appear to follow a straight line until the right-hand tail is reached. The four students with the highest A-level counts are indeed outliers, and student #38 is a borderline extreme outlier.

Normality does not seem to be a viable assumption for the A-level counts of either student group. What about the assumption of homogeneity of variances? The Moments reports show that the sample standard deviation of BA student A-level counts (sBA = 7.10) is higher than that of BSc students (sBSc = 4.78). The normal quantile plot offers a more revealing way of checking the equality of variances assumption. Since the slope of the robust fit line in the normal quantile plot estimates the standard deviation, the user can click back and forth on the ends of the conditioning slider to see whether the slope changes. Try it. The slope changes very little (although the BA slope is somewhat steeper), which indicates that the robust estimates of the standard deviations are not that different. Why, then, are the actual sample standard deviations so different? The robust fit lines are not affected by outliers, whereas the standard deviations are. As a result, the apparent inequality of variances and lack of normality are at least partially due to outliers.
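The contrast between robust and ordinary spread estimates can be illustrated with a small sketch. The data are hypothetical, and an IQR-based estimate plays the role of the robust-fit slope: both samples have the same core spread, but one has two added outliers.

```python
from statistics import stdev, quantiles

NORMAL_IQR = 1.349  # IQR of the standard normal; IQR / 1.349 estimates sigma

def robust_sd(xs):
    """Estimate sigma from the interquartile range, which outliers barely move."""
    q1, _, q3 = quantiles(xs, n=4)
    return (q3 - q1) / NORMAL_IQR

# Hypothetical samples for illustration only (not the Rees data).
clean = [8, 9, 10, 11, 12, 13, 14, 15, 16, 17]
with_outliers = clean + [30, 32]

print(f"clean:    sd = {stdev(clean):.2f}, robust = {robust_sd(clean):.2f}")
print(f"outliers: sd = {stdev(with_outliers):.2f}, "
      f"robust = {robust_sd(with_outliers):.2f}")
```

The ordinary standard deviation jumps sharply when the outliers are included, while the IQR-based estimate changes little, mirroring the behavior of the robust-fit slopes described above.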

The assumptions of the two-sample CI are not satisfied, and a major problem is the presence of outliers. Nonetheless, the A-level counts flagged as outliers are real. What should we do? We could delete or downweight the outliers; we could try transforming the data; we could modify the standard two-sample CI to account for inequality of variances; or we could use nonparametric approaches. These approaches will be tried in future examples. For now, we will supplement the CI with graphical analyses. A fuller graphical analysis is done in the hypothesis testing module.

We want to determine if the distribution of the BA A-level counts is shifted relative to that of the BSc A-level counts. If normality holds for each group (and it doesn't) and equality of variances holds (and it doesn't), a shift in the distribution is equivalent to a difference in the means (i.e., µBA ≠ µBSc). We can explore this possibility by using a two-sample CI.

The two-sample confidence interval is constructed from the A-level by Degree Type dot plot.
The following plot is constructed by clicking on the Means and Pairwise CIs triangular reveal buttons and by selecting Fit Means from the DotPlot popup menu.

The outliers are highlighted automatically in this plot through linking with the histogram plots. Notice that only three outlier values are shown for the BSc students. This is because some of the values are repeated; in this case, two of the outliers have the same value (students #6 and #28 from the normal quantile plot). In the future, the plot will support jittering, which will allow the x-values to be separated slightly.

The means and standard errors are given in the Means (StdErrs) report, and they are visualized in the dot plot as means and error bars (shown in green). The error bar (sample mean ± one standard error) for the A-level count of BA students is completely above the overall mean reference line, whereas the error bar for BSc students is completely below it. However, this by itself does not provide adequate support for the notion that the population means differ.

A formal way of assessing population mean differences is to construct a two-sample CI on µBA - µBSc. The default 95% CI does not contain 0, which means that 0 is not a plausible value of the population mean difference. Thus it appears that the population means do differ statistically (i.e., in hypothesis testing terms, the null value of 0 is rejected); this is indicated by the limits being drawn in red (which occurs whenever 0 is not in the interval). Select 99% from the Level popup menu in the Pairwise CIs report and observe that the lower limit is still above 0, but very close to it.
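A minimal sketch of the computation behind such an interval follows. The standard deviations and the BA mean come from the reports discussed above; the BSc mean and the group sizes are assumed values for illustration, since they are not stated here. The pooled-variance form is used, even though (as noted) equal variances are doubtful for these data.

```python
from math import sqrt

# BA SD and mean are from the example's reports; n1, n2, and the
# BSc mean are hypothetical stand-ins for illustration.
n1, m1, s1 = 20, 16.2, 7.10   # BA  (n1 assumed)
n2, m2, s2 = 20, 12.0, 4.78   # BSc (n2 and m2 assumed)

t_crit = 2.024  # t quantile (0.975) with n1 + n2 - 2 = 38 df, from a t table

# Pooled-variance two-sample CI on mu1 - mu2.
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
se = sqrt(sp2 * (1 / n1 + 1 / n2))
lo, hi = (m1 - m2) - t_crit * se, (m1 - m2) + t_crit * se

print(f"95% CI for mu_BA - mu_BSc: ({lo:.2f}, {hi:.2f})")
print("0 in interval:", lo <= 0 <= hi)
```

With these assumed inputs the interval lies entirely above 0, matching the software's red-limit display; changing the level (e.g., to 99%) simply widens the interval via a larger t quantile.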

The estimated population mean difference is greater than 0 at any reasonable confidence level. However, this interpretation is weakened by our knowledge that the assumptions are violated. The saga will continue in future examples.