T = coefficient / SE of coefficient. The larger the value of T, the more confident we can be that the estimate is close to the true value. Notice in Example 2.45 that T is quite large for b₁ and not very large for b₀.

P: If the true value of the parameter (b₀ or b₁) is 0, then this is the probability of obtaining data more unlikely than the result. In Example 2.45, if b₀ = 0, then the probability of obtaining data more unlikely than the result is .12, while if b₁ = 0, that probability is 0. So we can be very confident that b₁ ≠ 0, while we cannot be so confident that b₀ ≠ 0. This means that the data strongly imply a linear dependence of Y on X, but it is not improbable that the constant is 0.

R²: This value, called the coefficient of determination, is the square of the sample correlation coefficient, written as a percent. A value of 0% means that there is no linear dependence between the sample values of X and Y, while a value of 100% means there is a perfect linear dependence. Clearly, the larger the value of R², the more confidence we can have that there really is a linear relationship between X and Y.

A.L. Carriquiry, in International Encyclopedia of the Social & Behavioral Sciences, 2001

6 A Brief Historical Overview

The problem of fitting a simple linear regression model in the presence of measurement error was first considered by Adcock (1877). Adcock proposed an estimator for the slope of the regression line that accounted for the measurement error in the covariate for the special case where σᵤ² = σₑ². The extension to the p-variate case was presented by Pearson (1901). It was not until the mid-1930s that the terms errors-in-variables models and measurement error models were coined, and a systematic study of these types of models was undertaken. The pace of research in this area picked up in the late 1940s, with papers by Lindley (1947), Neyman and Scott (1951), and Kiefer and Wolfowitz (1956). These authors addressed issues of identifiability and consistency in measurement error models, and proposed various approaches for parameter estimation in those models. Lindley (1947) presented maximum likelihood estimators for the slope in a simple linear regression model with additive measurement error for the functional model. He concluded that, when the two variance components are unknown, the method of maximum likelihood cannot be used for estimating β₁, as the resulting estimators must satisfy the relation β̂₁² σ̂ᵤ² = σ̂ₑ². Solari (1969) later showed that this unexpected result could be explained by noticing that the maximum likelihood solution was a saddle point and not a maximum of the likelihood function.
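The T, P, and R² quantities described for regression output above can be computed by hand for a simple linear regression. The sketch below uses only the Python standard library and made-up illustrative data — it is not the data of Example 2.45, which is not reproduced here. The two-sided P values use a closed-form Student-t tail valid for the 4 degrees of freedom (n − 2) of this particular data set.

```python
import math

def t4_sf(t):
    # Upper-tail probability P(T > t) for Student's t with 4 degrees of
    # freedom (n - 2 = 4 for the six points below); closed form, so no
    # external statistics library is needed.
    s = t / math.sqrt(4.0 + t * t)
    return 0.5 - 0.75 * (s - s ** 3 / 3.0)

# Made-up illustrative data -- NOT the data of Example 2.45.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]
n = len(x)

xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
syy = sum((yi - ybar) ** 2 for yi in y)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

# Least-squares estimates of the slope b1 and intercept b0.
b1 = sxy / sxx
b0 = ybar - b1 * xbar

# Residual variance estimate with n - 2 degrees of freedom.
sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
s2 = sse / (n - 2)

# Standard errors of the two coefficients.
se_b1 = math.sqrt(s2 / sxx)
se_b0 = math.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))

# T = coefficient / SE of coefficient.
t_b1, t_b0 = b1 / se_b1, b0 / se_b0

# P: two-sided probability of data more unlikely than the result,
# computed under the hypothesis that the true coefficient is 0.
p_b1 = 2.0 * t4_sf(abs(t_b1))
p_b0 = 2.0 * t4_sf(abs(t_b0))

# R^2: squared sample correlation, reported as a percent.
r2_percent = 100.0 * sxy ** 2 / (sxx * syy)

print(f"b1 = {b1:.3f}  T = {t_b1:.1f}  P = {p_b1:.3f}")
print(f"b0 = {b0:.3f}  T = {t_b0:.2f}  P = {p_b0:.2f}")
print(f"R^2 = {r2_percent:.1f}%")
```

With this data the pattern mirrors the discussion above: T for the slope is large and its P is essentially 0, so we can be very confident the slope is nonzero, while T for the intercept is small and its P is well above .05, so the intercept may plausibly be 0.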