
gianlucamalato / collinearity

License: MIT License
A Python library for removing collinearity.

Programming Languages

Jupyter Notebook, Python

Introduction

This library implements some functions for removing collinearity from a dataset of features. It can be used for both supervised and unsupervised machine learning problems.

Collinearity is evaluated by calculating Pearson's linear correlation coefficient between the features. The user sets a threshold, which is the maximum absolute value allowed for the correlation coefficients in the correlation matrix.
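As an illustration (not taken from the library), the check described above can be reproduced directly with NumPy: build the Pearson correlation matrix and compare its off-diagonal elements against a threshold. The data here is synthetic.

```python
import numpy as np

# Synthetic data: columns 0 and 1 are nearly collinear, column 2 is independent
rng = np.random.default_rng(42)
x = rng.normal(size=200)
X = np.column_stack([x,
                     x + rng.normal(scale=0.1, size=200),
                     rng.normal(size=200)])

corr = np.corrcoef(X, rowvar=False)          # 3x3 Pearson correlation matrix
threshold = 0.4
off_diag = np.abs(corr[~np.eye(3, dtype=bool)])

# This dataset violates the constraint because of the collinear pair
print(off_diag.max() < threshold)  # False
```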

For unsupervised problems, the algorithm selects only those features that produce a correlation matrix whose off-diagonal elements are, in absolute value, less than the threshold.

For supervised problems, the importance of the features with respect to the target variable is calculated using a univariate approach. Then, the features are added with the same unsupervised approach, starting from the most important ones.

Objects

The main object is SelectNonCollinear. It can be imported this way:

from collinearity import SelectNonCollinear

collinearity.SelectNonCollinear(correlation_threshold=0.4, scoring=f_classif)

Parameters:

correlation_threshold : float (between 0 and 1), default = 0.4

Only those features that produce a correlation matrix with off-diagonal elements that are, in absolute value, less than this threshold will be chosen.

scoring : callable, default=f_classif

The scoring function for supervised problems. It must be one of the scoring functions accepted by sklearn.feature_selection.SelectKBest (e.g. f_classif or f_regression).

Methods

This object supports the main methods of scikit-learn Estimators:

fit(X,y=None)

Identifies the features to consider. For supervised problems, y is the target array and the algorithm is:

  • Sort the features by scoring descending
  • Take the most important feature (i.e. the first feature)
  • Take the next feature if its linear correlation coefficients with the already selected features are, in absolute value, lower than the threshold
  • Keep adding features as long as the correlation constraint holds
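The steps above can be sketched as a greedy loop. This is only an illustration of the described algorithm, not the library's actual implementation; greedy_select and the synthetic scores below are hypothetical.

```python
import numpy as np

def greedy_select(X, scores, threshold):
    """Greedy supervised selection as described above (illustrative sketch)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    order = np.argsort(scores)[::-1]      # features sorted by descending score
    selected = [order[0]]                 # start from the most important one
    for j in order[1:]:
        # add j only if it is weakly correlated with every selected feature
        if np.all(corr[j, selected] < threshold):
            selected.append(j)
    return sorted(selected)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 1] = X[:, 0] + rng.normal(scale=0.01, size=200)  # make 0 and 1 collinear
scores = np.array([4.0, 3.0, 2.0, 1.0])               # hypothetical importances
print(greedy_select(X, scores, threshold=0.4))  # feature 1 is dropped
```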

For unsupervised problems, we have y = None and the algorithm is:

  • Take the pair of features that has the lowest absolute value of the linear correlation coefficient.
  • If it's lower than the threshold, select these two features
  • Keep adding features as long as the correlation matrix doesn't show off-diagonal elements whose absolute value is greater than the threshold.
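Again as an illustrative sketch (not the library's code), the unsupervised procedure can be written as: seed the selection with the least-correlated pair, then grow it while the constraint holds.

```python
import numpy as np

def unsupervised_select(X, threshold):
    """Unsupervised selection as described above (illustrative sketch)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    n = corr.shape[0]
    off = corr.copy()
    np.fill_diagonal(off, np.inf)
    # seed: the pair of features with the lowest absolute correlation
    i, j = np.unravel_index(np.argmin(off), off.shape)
    if off[i, j] >= threshold:
        return []
    selected = [i, j]
    for k in range(n):
        # add k only if the constraint still holds for every selected feature
        if k not in selected and np.all(corr[k, selected] < threshold):
            selected.append(k)
    return sorted(selected)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
X[:, 3] = X[:, 2] + rng.normal(scale=0.01, size=200)  # collinear pair 2 and 3
print(unsupervised_select(X, threshold=0.3))  # keeps at most one of 2 and 3
```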

transform(X)

Selects the features according to the result of fit. It must be called after fit.

fit_transform(X,y=None)

Calls fit and then transform.

get_support()

Returns a boolean array of size X.shape[1]. A feature is selected if the value of this array at its index is True; otherwise it's not selected.
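The relationship between get_support and transform can be illustrated with a plain boolean mask (NumPy only; the mask values here are made up for the example, not produced by the library):

```python
import numpy as np

X = np.arange(12.0).reshape(3, 4)               # 3 samples, 4 features
support = np.array([True, False, True, False])  # hypothetical get_support() output

X_sel = X[:, support]              # what transform(X) would return for this mask
print(support.size == X.shape[1])  # True: one flag per feature
print(X_sel.shape)                 # (3, 2): only the selected features remain
```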

Examples

The following examples show how the main object works. Run this code first to initialize the environment:

from collinearity import SelectNonCollinear
from sklearn.feature_selection import f_regression
import numpy as np
from sklearn.datasets import load_diabetes

X,y = load_diabetes(return_X_y=True)

Unsupervised problems

This example shows how to perform selection according to minimum collinearity in unsupervised problems.

Let's consider, for this example, a threshold equal to 0.3.

selector = SelectNonCollinear(0.3)

If we apply the selection to the features and calculate the correlation matrix, we have:

np.corrcoef(selector.fit_transform(X),rowvar=False)

# array([[1.       , 0.1737371 , 0.18508467, 0.26006082],
#       [0.1737371 , 1.        , 0.0881614 , 0.03527682],
#       [0.18508467, 0.0881614 , 1.        , 0.24977742],
#       [0.26006082, 0.03527682, 0.24977742, 1.        ]])

As we can see, no off-diagonal element is greater than the threshold in absolute value.

Supervised problems

For this problem, we must set the value of the scoring argument in the constructor.

Let's consider a threshold equal to 0.4 and a scoring equal to f_regression.

selector = SelectNonCollinear(correlation_threshold=0.4,scoring=f_regression)

selector.fit(X,y)

The correlation matrix is:

np.corrcoef(selector.transform(X),rowvar=False)

# array([[ 1.       ,  0.1737371 ,  0.18508467,  0.33542671,  0.26006082,
#        -0.07518097,  0.30173101],
#       [ 0.1737371 ,  1.        ,  0.0881614 ,  0.24101317,  0.03527682,
#        -0.37908963,  0.20813322],
#       [ 0.18508467,  0.0881614 ,  1.        ,  0.39541532,  0.24977742,
#        -0.36681098,  0.38867999],
#       [ 0.33542671,  0.24101317,  0.39541532,  1.        ,  0.24246971,
#        -0.17876121,  0.39042938],
#       [ 0.26006082,  0.03527682,  0.24977742,  0.24246971,  1.        ,
#         0.05151936,  0.32571675],
#       [-0.07518097, -0.37908963, -0.36681098, -0.17876121,  0.05151936,
#         1.        , -0.2736973 ],
#       [ 0.30173101,  0.20813322,  0.38867999,  0.39042938,  0.32571675,
#        -0.2736973 ,  1.        ]])

Again, no off-diagonal element is greater than the threshold in absolute value.

Use in pipelines

Since SelectNonCollinear supports the main methods of scikit-learn estimators, it can be used inside a pipeline:

from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression

pipeline = make_pipeline(SelectNonCollinear(correlation_threshold=0.4, scoring=f_regression), LinearRegression())

Contact the author

For any questions, you can contact me at [email protected]
