The Concept of Principal Component Analysis

Wonuola Abimbola
May 19, 2021


What is Principal Component Analysis?

Principal component analysis (or PCA) is a dimensionality-reduction technique that lets us reduce the number of features in a large dataset while keeping as much of the information from the original dataset as possible. It can also be described as projecting a dataset from a higher-dimensional space to a lower-dimensional space while preserving as much information as possible in the process.

Why do we use PCA?

One reason we use PCA is to combat the ‘Curse of Dimensionality’. The curse of dimensionality refers to the problems that arise when a dataset has so many features that they cannot be used efficiently to achieve a goal (regression, classification, etc.). PCA helps by reducing the number of features while keeping as much information as possible from the original dataset, so the task can still be performed efficiently.

At what stage do we use PCA?

Principal component analysis is done at the preprocessing stage of the data science process before modelling takes place.

Note: It is advised to use PCA only on continuous variables.

Steps

  1. Standardization:

After going through the exploratory analysis and data cleaning process, the next step is to standardize your data. Standardization means scaling your data so that all the variables and their values lie within a similar range. This is achieved by subtracting the variable’s mean from each value and dividing by the variable’s standard deviation.
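As a quick sketch (assuming a NumPy array `X` of shape (n_samples, n_features) holds your cleaned continuous features; the data here is made up purely for illustration), standardization can be done in one line:

```python
import numpy as np

# Hypothetical dataset: 3 continuous features on very different scales
rng = np.random.default_rng(42)
X = rng.normal(loc=[10, 100, 1000], scale=[1, 5, 50], size=(200, 3))

# Standardize: subtract each column's mean and divide by its standard deviation
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_scaled.mean(axis=0).round(3))  # ~0 for every column
print(X_scaled.std(axis=0).round(3))   # ~1 for every column
```

scikit-learn’s `StandardScaler` does the same thing and is the more common choice in a full pipeline.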

2. Computing the Covariance Matrix:

This step is the reason we perform standardization: PCA uses the covariance matrix rather than the correlation matrix, which makes it very sensitive to the scale of the variables. Computing the covariance matrix tells us whether there is a relationship between the variables of the dataset by showing how they vary from the mean with respect to each other. A positive covariance means two variables are positively related, and a negative covariance means they are negatively related.

The values on the diagonal from top left to bottom right show the covariance of each variable with itself (its variance). Because the data has been standardized, it makes sense for each of these values to be one, as every variable is perfectly correlated with itself.
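Continuing the sketch above (using the `X_scaled` array from step 1), the covariance matrix can be computed directly with NumPy:

```python
import numpy as np

# Covariance matrix of the standardized data; rowvar=False means
# rows are observations and columns are variables
cov_matrix = np.cov(X_scaled, rowvar=False)

print(cov_matrix.round(3))
# Diagonal entries are ~1 because each standardized variable has unit variance;
# off-diagonal entries show how pairs of variables vary together.
```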

3. Perform Eigendecomposition of the covariance matrix in order to create new dimensions:

When we perform eigendecomposition, we get our eigenvalues and eigenvectors. Eigenvalues represent the relative amount of variance explained by each of the newly created dimensions (each eigenvector). Eigenvectors represent the newly created dimensions themselves. Each value in an eigenvector is known as a component weight, and these weights are used to transform our scaled data into our principal components.

Aside: Principal components are new variables constructed as linear combinations of the original variables. The combinations are made in such a way that the new variables are uncorrelated and most of the information from the original variables is compressed into the first components. So a dataset with 15 dimensions will give you 15 principal components, but the maximum possible information will be in the first, the maximum of the remaining information will be in the second, and so on until we have all 15 principal components.
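A minimal sketch of this step, continuing with the `cov_matrix` computed above (for a symmetric matrix like a covariance matrix, `np.linalg.eigh` is the appropriate routine):

```python
import numpy as np

# Eigendecomposition of the covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(cov_matrix)

# eigh returns eigenvalues in ascending order; sort descending so the
# first eigenvector (first new dimension) explains the most variance
order = np.argsort(eigenvalues)[::-1]
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]

# Proportion of total variance explained by each new dimension
explained_variance_ratio = eigenvalues / eigenvalues.sum()
print(explained_variance_ratio.round(3))
```

Each column of `eigenvectors` holds the component weights for one principal component.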

4. Transform the Data:

Now we are going to create our new features. Each variable from the original data corresponds to a component weight within an eigenvector: the first variable from the original dataset corresponds to the first component weight, the second variable to the second weight, and so on.

Our first feature/principal component will be calculated as follows:

PC1 = (first component weight × first original feature) + (second component weight × second original feature) + (third component weight × third original feature) + …

This is done for each data point of the scaled dataset.
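As a sketch, assuming `X_scaled` and the sorted `eigenvectors` from the previous steps, this projection is a single matrix multiplication; the first column of the result is PC1:

```python
# Project the standardized data onto the eigenvectors (component weights)
principal_components = X_scaled @ eigenvectors

# Keep, say, only the first two principal components as the reduced dataset
X_reduced = principal_components[:, :2]
print(X_reduced.shape)  # (n_samples, 2)
```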

Reducing the number of variables in a dataset naturally comes at the expense of accuracy, but the idea is to trade a little accuracy for simplicity. Smaller datasets are easier to explore and visualize, and machine learning algorithms can analyze the data much faster when they don’t have an unwieldy number of variables to process.
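In practice, scikit-learn’s `PCA` wraps all of these steps (except standardization) into one estimator, and its `explained_variance_ratio_` attribute shows exactly how much information each component keeps. A rough equivalent of the manual walkthrough above, on made-up data, might look like this:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical dataset: 5 correlated features built from 2 underlying signals
rng = np.random.default_rng(42)
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(200, 5))

# Standardize, then keep enough components to explain ~95% of the variance
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_scaled)

print(pca.explained_variance_ratio_.round(3))  # variance kept by each component
print(X_reduced.shape)                         # fewer columns than the original 5
```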
