MLA -- Least Squares Method
"A standard approach in regression analysis to approximate the solution of overdetermined systems."
The method of least squares is a standard approach in regression analysis to approximate the solution of overdetermined systems (sets of equations in which there are more equations than unknowns) by minimizing the sum of the squares of the residuals made in the results of every single equation. It is used for model fitting and for finding the minimum of the loss function.
1. Least Squares principle
The general form of least squares is simple:

$$f = \sum \left(y - \hat{y}\right)^2$$

where:

- $f$ is the objective function
- $y$ is the observation value (actual value)
- $\hat{y}$ is the theoretical value predicted from the model
The objective function is also the loss function commonly used in machine learning. Our goal is to obtain the fitted model for which the objective function is minimized. As a simple example, consider the simplest linear regression: we have $m$ samples $(x^{(i)}, y^{(i)})$, $i = 1, 2, \dots, m$, each with only one feature.
The hypothesis model is:

$$h_\theta(x) = \theta_0 + \theta_1 x$$

The loss function is:

$$J(\theta_0, \theta_1) = \sum_{i=1}^{m}\left(y^{(i)} - h_\theta\left(x^{(i)}\right)\right)^2 = \sum_{i=1}^{m}\left(y^{(i)} - \theta_0 - \theta_1 x^{(i)}\right)^2$$

The Least Squares method needs to find the $\theta_0$ and $\theta_1$ that minimize the value of $J(\theta_0, \theta_1)$.
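To make the objective concrete, here is a minimal sketch in Python (the toy arrays and variable names are illustrative, not from the original) that evaluates the loss $J(\theta_0, \theta_1)$ for candidate parameters:

```python
import numpy as np

# Toy single-feature dataset (illustrative values only).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

def loss(theta0, theta1):
    """J(theta0, theta1) = sum_i (y_i - theta0 - theta1 * x_i)^2"""
    residuals = y - (theta0 + theta1 * x)
    return np.sum(residuals ** 2)

print(loss(0.0, 2.0))   # loss of one candidate line, y = 2x
print(loss(0.1, 1.95))  # a nearby candidate; least squares seeks the minimizer
```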
2. Least Squares solution
2.1 Algebraic way
To minimize $J(\theta_0, \theta_1)$, we calculate the partial derivatives with respect to $\theta_0$ and $\theta_1$ and set both partial derivatives to 0, which forms a linear system of equations in the two variables.

- Partial derivative of $J(\theta_0, \theta_1)$ with respect to $\theta_0$: $\frac{\partial J}{\partial \theta_0} = 2\sum_{i=1}^{m}\left(\theta_0 + \theta_1 x^{(i)} - y^{(i)}\right) = 0$
- Partial derivative of $J(\theta_0, \theta_1)$ with respect to $\theta_1$: $\frac{\partial J}{\partial \theta_1} = 2\sum_{i=1}^{m}\left(\theta_0 + \theta_1 x^{(i)} - y^{(i)}\right)x^{(i)} = 0$

This gives the linear system:

$$m\,\theta_0 + \theta_1\sum_{i=1}^{m} x^{(i)} = \sum_{i=1}^{m} y^{(i)}, \qquad \theta_0\sum_{i=1}^{m} x^{(i)} + \theta_1\sum_{i=1}^{m} \left(x^{(i)}\right)^2 = \sum_{i=1}^{m} x^{(i)} y^{(i)}$$

Solving it yields:

$$\theta_1 = \frac{m\sum_{i=1}^{m} x^{(i)} y^{(i)} - \sum_{i=1}^{m} x^{(i)} \sum_{i=1}^{m} y^{(i)}}{m\sum_{i=1}^{m} \left(x^{(i)}\right)^2 - \left(\sum_{i=1}^{m} x^{(i)}\right)^2}, \qquad \theta_0 = \frac{\sum_{i=1}^{m} \left(x^{(i)}\right)^2 \sum_{i=1}^{m} y^{(i)} - \sum_{i=1}^{m} x^{(i)} \sum_{i=1}^{m} x^{(i)} y^{(i)}}{m\sum_{i=1}^{m} \left(x^{(i)}\right)^2 - \left(\sum_{i=1}^{m} x^{(i)}\right)^2}$$
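As a sanity check of the closed-form formulas above, here is a minimal sketch (toy data assumed, not from the original) that computes $\theta_0$ and $\theta_1$ directly from the sums and cross-checks the result with NumPy's degree-1 polynomial fit:

```python
import numpy as np

# Toy single-feature dataset (illustrative values only).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
m = len(x)

# Closed-form solution obtained by setting both partial derivatives to zero.
denom = m * np.sum(x * x) - np.sum(x) ** 2
theta1 = (m * np.sum(x * y) - np.sum(x) * np.sum(y)) / denom
theta0 = (np.sum(x * x) * np.sum(y) - np.sum(x) * np.sum(x * y)) / denom

print(theta0, theta1)
print(np.polyfit(x, y, 1))  # returns [theta1, theta0], should agree
```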
If the samples have more than one feature, we have Multiple Linear Regression. We still use the rule that the partial derivatives equal 0 to form a system of equations for the parameters, and then solve for these unknown parameters. The principle remains the same.
2.2 Matrix way
The matrix expression is more concise than the algebraic expression, and it can also replace explicit loops (in essence, the amount of computation does not change). So many books and machine learning libraries now use the matrix form to solve least squares.
Here we use the multiple linear regression example above to describe the matrix solution. In matrix form, the hypothesis model is:

$$h_\theta(X) = X\theta$$

where:

- $X$ is an $m \times n$ matrix ($m$ records and $n$ features).
- $\theta$ is an $n \times 1$ vector (the $n$ coefficients of the $n$ features).
- $h_\theta(X)$ is an $m \times 1$ vector (the $m$ predicted $y$ values of the $m$ sample records).

The loss function is:

$$J(\theta) = \frac{1}{2}\left(X\theta - Y\right)^T\left(X\theta - Y\right)$$

where $Y$ is an $m \times 1$ vector (the $m$ actual $y$ values of the $m$ sample records). The factor $\frac{1}{2}$ is mainly to simplify the later derivative calculation.
Take the derivative of $J(\theta)$ with respect to $\theta$ and set it to 0:

$$\nabla_\theta J(\theta) = X^T\left(X\theta - Y\right) = 0$$

This is based on the matrix derivative chain rule and two matrix derivative formulas:

Formula 1: $ \frac{\partial}{\partial x}\left(x^T x\right) = 2x $, where $x$ is a vector

Formula 2: $ \nabla_X f(AX+B) = A^T \nabla_Y f $, where $Y = AX + B$ and $f(Y)$ is a scalar

Then:

$$X^T X \theta = X^T Y \quad\Rightarrow\quad \theta = \left(X^T X\right)^{-1} X^T Y$$

$\theta = \left(X^T X\right)^{-1} X^T Y$ is our answer.
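Here is a minimal sketch of the matrix solution (random toy data; the intercept column of ones is an assumption, not part of the text above). It solves the normal equation $X^T X \theta = X^T Y$ with `np.linalg.solve`, which is numerically preferable to forming the inverse explicitly:

```python
import numpy as np

# Toy data: m = 6 records, n = 3 features (first column of ones acts as an intercept).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(6), rng.normal(size=(6, 2))])
true_theta = np.array([1.0, 2.0, -3.0])
Y = X @ true_theta + 0.01 * rng.normal(size=6)

# Normal equation theta = (X^T X)^{-1} X^T Y, solved as a linear system
# instead of explicitly inverting X^T X.
theta = np.linalg.solve(X.T @ X, X.T @ Y)
print(theta)  # close to [1.0, 2.0, -3.0]
```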
3. Limitations and applicable scenarios of the Least Squares method
The Least Squares solution is simple and efficient; it requires no iterations, compared with Gradient Descent. But it still has some limitations:
- Least Squares needs to compute the inverse of $X^T X$; however, that inverse may not exist. In this case, Least Squares cannot be used directly, but Gradient Descent can. Or, we can remove redundant features by cleaning the sample data so that the determinant of $X^T X$ is not 0, and then continue to use Least Squares (see the sketch after this list).
- When the number of features $n$ is very large, computing the inverse of $X^T X$ is very time-consuming, or even infeasible. In this case it is better to use Gradient Descent (if $n > 1000$, Least Squares is generally not recommended). Or, we can use Principal Component Analysis (PCA) to reduce the feature dimensionality and then apply Least Squares.
- If the fitting function (objective model) is not linear, Least Squares cannot be used directly, but Gradient Descent still can. Or, we can convert the fitting function to a linear one through some transformation before applying Least Squares.
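As referenced in the first bullet, here is a minimal sketch (illustrative data only) of the case where $X^T X$ is singular because one feature is redundant: the normal equation fails, while `np.linalg.lstsq` (an SVD/pseudoinverse-based solver) still returns a least-squares solution, and Gradient Descent would also still work:

```python
import numpy as np

# Illustrative design matrix with a redundant feature: column 2 = 2 * column 1,
# so X^T X is singular and its inverse does not exist.
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
Y = np.array([1.0, 2.0, 3.0])

try:
    theta = np.linalg.solve(X.T @ X, X.T @ Y)
except np.linalg.LinAlgError as err:
    print("normal equation failed:", err)  # singular matrix

# lstsq still finds a (minimum-norm) least-squares solution.
theta, residuals, rank, _ = np.linalg.lstsq(X, Y, rcond=None)
print(theta, rank)  # rank is 1, less than the 2 features
```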
There are also some special situations:
- When the sample size $m$ is smaller than the number of features $n$, the fitting equation is underdetermined, and the commonly used optimization methods cannot fit the data.
- When the sample size $m$ equals the number of features $n$, the problem can be solved directly as a system of equations.
- When the sample size $m$ is greater than the number of features $n$, the fitting equation is overdetermined, which is the usual scenario for least squares.