Synopsis: Lectures 1-2 (Introductory Stuff)

Sunday, December 11, 2022 at 11:46 PM
- written by user ΑΡΒΑΝΙΤΗΣ ΣΤΥΛΙΑΝΟΣ

We are interested in developing a quite general theory of optimization-based estimation and testing procedures. To this end we began the construction of an appropriate framework consisting of the notion of the sample, the notion of the parametric and semi-parametric statistical model, and the subsequent notions of the estimator and the testing procedure in such models.
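One standard formalization of these notions (the notation here is illustrative, not necessarily that of the notes): a sample of size n is a random element

\[
Z^{(n)} = (z_1, \dots, z_n) \sim P_0, \qquad P_0 \in \mathcal{P} = \{ P_\theta : \theta \in \Theta \},
\]

where the model \(\mathcal{P}\) is parametric when \(\Theta \subseteq \mathbb{R}^k\), and semi-parametric when \(\theta = (\beta, g)\) with \(\beta\) finite-dimensional and \(g\) ranging over an infinite-dimensional set. An estimator is then a measurable map \(\hat{\theta}_n = T_n(Z^{(n)})\) taking values in \(\Theta\), and a testing procedure a (possibly randomized) map of the sample into \(\{0, 1\}\).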

Models that deviate even slightly from the standard form of the linear model incur considerable complexity. Among other things, this implies that several estimators and testing procedures are not analytically derivable, are only approximable via numerical optimization and resampling, and are in any case quite non-linear functions of the sample. The general properties upon which we will rely in order to evaluate our inferential procedures are therefore asymptotic and minimal: (weak) consistency, rates of convergence, and asymptotic normality for estimators, and asymptotic conservativeness and consistency for tests.
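In standard (illustrative) notation, these properties read as follows. For an estimator \(\hat{\theta}_n\) of \(\theta_0\),

\[
\hat{\theta}_n \xrightarrow{p} \theta_0, \qquad r_n\,(\hat{\theta}_n - \theta_0) = O_p(1), \qquad \sqrt{n}\,(\hat{\theta}_n - \theta_0) \xrightarrow{d} \mathrm{N}(0, V),
\]

for some rate sequence \(r_n \to \infty\) and asymptotic covariance matrix \(V\); for a test \(\varphi_n\) of nominal level \(\alpha\),

\[
\limsup_{n \to \infty} \mathbb{E}_{P}[\varphi_n] \le \alpha \ \text{ under the null} \quad \text{(asymptotic conservativeness)}, \qquad \mathbb{E}_{P}[\varphi_n] \to 1 \ \text{ under fixed alternatives} \quad \text{(consistency)}.
\]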

We began exploring the class of optimization-based estimators by noting that in the standard form of the linear model, due to its structure and under the usual identification condition, there exists a population criterion function that is uniquely minimized at the unknown parameter value. If this function were analytically known, the unknown parameter value could hence be exactly retrieved by minimization. However, the function is not analytically known, since it involves expectations under the unknown data distribution and thereby depends on the unknown parameter value itself. It can, however, be empirically approximated by its sample counterpart, the optimization of which leads to the OLS estimator.
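Concretely, in the usual notation the population criterion is \(Q(\theta) = \mathbb{E}[(y_t - x_t'\theta)^2]\), uniquely minimized at \(\theta_0\) under the identification condition, while its sample counterpart \(Q_n(\theta) = n^{-1} \sum_{t=1}^{n} (y_t - x_t'\theta)^2\) is minimized by the OLS estimator \((X'X)^{-1} X'y\). The following Python sketch (the simulated data and all names are illustrative, not from the notes) checks that numerically optimizing the sample criterion recovers the analytic OLS solution:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n, k = 500, 3
    theta0 = np.array([1.0, -2.0, 0.5])      # the "unknown" parameter value
    X = rng.normal(size=(n, k))              # regressors
    y = X @ theta0 + rng.normal(size=n)      # standard linear model with i.i.d. noise

    # Sample counterpart of the population criterion: the average squared residual.
    def Q_n(theta):
        u = y - X @ theta
        return u @ u / n

    # Numerical optimization of the sample criterion ...
    theta_opt = minimize(Q_n, x0=np.zeros(k), method="BFGS").x
    # ... agrees with the analytic OLS solution (X'X)^{-1} X'y.
    theta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    print(np.allclose(theta_opt, theta_ols, atol=1e-4))   # True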

Notes for the above can be found here (v.24/1/23).

 
