
atif-hassan / PyImpetus

Licence: MIT License

Programming Languages

Jupyter Notebook, Python

Projects that are alternatives of or similar to PyImpetus

Mlr
Machine Learning in R
Stars: ✭ 1,542 (+1757.83%)
Mutual labels:  statistics, feature-selection
Stat Cookbook
📙 The probability and statistics cookbook
Stars: ✭ 1,990 (+2297.59%)
Mutual labels:  statistics, probability
Ml Dl Scripts
The repository provides useful python scripts for ML and data analysis
Stars: ✭ 119 (+43.37%)
Mutual labels:  statistics, machine-learning-algorithms
Ethzcheatsheets
Stars: ✭ 92 (+10.84%)
Mutual labels:  statistics, probability
pyHSICLasso
Versatile Nonlinear Feature Selection Algorithm for High-dimensional Data
Stars: ✭ 125 (+50.6%)
Mutual labels:  machine-learning-algorithms, feature-selection
Papers Literature Ml Dl Rl Ai
Highly cited and useful papers related to machine learning, deep learning, AI, game theory, reinforcement learning
Stars: ✭ 1,341 (+1515.66%)
Mutual labels:  statistics, machine-learning-algorithms
Math Php
Powerful modern math library for PHP: Features descriptive statistics and regressions; Continuous and discrete probability distributions; Linear algebra with matrices and vectors, Numerical analysis; special mathematical functions; Algebra
Stars: ✭ 2,009 (+2320.48%)
Mutual labels:  statistics, probability
Uc Davis Cs Exams Analysis
📈 Regression and Classification with UC Davis student quiz data and exam data
Stars: ✭ 33 (-60.24%)
Mutual labels:  statistics, probability
data-science-notes
Open-source project hosted at https://makeuseofdata.com to crowdsource a robust collection of notes related to data science (math, visualization, modeling, etc)
Stars: ✭ 52 (-37.35%)
Mutual labels:  statistics, probability
Stanford Cme 106 Probability And Statistics
VIP cheatsheets for Stanford's CME 106 Probability and Statistics for Engineers
Stars: ✭ 242 (+191.57%)
Mutual labels:  statistics, probability
Openml R
R package to interface with OpenML
Stars: ✭ 81 (-2.41%)
Mutual labels:  statistics, machine-learning-algorithms
Data-Scientist-In-Python
This repository contains notes and projects of Data scientist track from dataquest course work.
Stars: ✭ 23 (-72.29%)
Mutual labels:  machine-learning-algorithms, probability
Hackerrank
The repository where you can find solutions to problems from competitive platforms, mainly HackerRank and HackerEarth
Stars: ✭ 68 (-18.07%)
Mutual labels:  statistics, machine-learning-algorithms
Ptstat
Probabilistic Programming and Statistical Inference in PyTorch
Stars: ✭ 108 (+30.12%)
Mutual labels:  statistics, probability
25daysinmachinelearning
A repository for learning machine learning with Python, along with statistics content and materials
Stars: ✭ 53 (-36.14%)
Mutual labels:  statistics, machine-learning-algorithms
Flex
Probabilistic deep learning for data streams.
Stars: ✭ 127 (+53.01%)
Mutual labels:  statistics, probability
Teaching
Teaching Materials for Dr. Waleed A. Yousef
Stars: ✭ 435 (+424.1%)
Mutual labels:  statistics, probability
Python For Probability Statistics And Machine Learning
Jupyter Notebooks for Springer book "Python for Probability, Statistics, and Machine Learning"
Stars: ✭ 481 (+479.52%)
Mutual labels:  statistics, probability
Quant Notes
Quantitative Interview Preparation Guide, updated version here ==>
Stars: ✭ 180 (+116.87%)
Mutual labels:  statistics, probability
zoofs
zoofs is a python library for performing feature selection using a variety of nature-inspired wrapper algorithms, ranging from swarm-intelligence to physics-based to evolutionary. It's an easy to use, flexible and powerful tool for reducing your feature size.
Stars: ✭ 142 (+71.08%)
Mutual labels:  machine-learning-algorithms, feature-selection


PyImpetus

PyImpetus is a Markov Blanket based feature selection algorithm that selects a subset of features by considering their performance both individually and as a group. This lets the algorithm find not just the best individual features, but the best combination of features that work well together. For example, the single best-performing feature might not play well with others, while the remaining features, taken together, could outperform it. PyImpetus takes this into account and produces the best possible combination. The algorithm thus returns a minimal feature subset, so you do not have to decide how many features to keep; PyImpetus selects the optimal set for you.

PyImpetus has been completely revamped and now supports binary classification, multi-class classification and regression tasks. It has been tested on 14 datasets, on all of which it outperformed state-of-the-art Markov Blanket learning algorithms as well as traditional feature selection algorithms such as Forward Feature Selection, Backward Feature Elimination and Recursive Feature Elimination.

How to install?

pip install PyImpetus

Functions and parameters

# The initialization of PyImpetus takes in multiple parameters as input
# PPIMBC is for classification
model = PPIMBC(model, p_val_thresh, num_simul, simul_size, simul_type, sig_test_type, cv, verbose, random_state, n_jobs)
  • model - estimator object, default=DecisionTreeClassifier() The model used to perform classification in order to find feature importance via significance testing.
  • p_val_thresh - float, default=0.05 The p-value (in this case, feature importance) below which a feature will be considered as a candidate for the final MB.
  • num_simul - int, default=30 (this parameter has a huge impact on speed) Number of train-test splits performed to check the usefulness of each feature. For large datasets, the value should be reduced considerably, though do not go below 5.
  • simul_size - float, default=0.2 The size of the test set in each train-test split
  • simul_type - int (0 or 1), default=0 Whether to apply stratification to the train-test splits
    • 0 means train-test splits are not stratified.
    • 1 means the train-test splits will be stratified.
  • sig_test_type - string, default="non-parametric" This determines the type of significance test to use.
    • "parametric" means a parametric significance test will be used (Note: This test selects very few features)
    • "non-parametric" means a non-parametric significance test will be used
  • cv - cv object/int, default=0 Determines the number of splits for cross-validation. An sklearn CV object can also be passed (see the sketch after this parameter list). A value of 0 means CV is disabled.
  • verbose - int, default=2 Controls the verbosity: the higher the value, the more messages are displayed.
  • random_state - int or RandomState instance, default=None Pass an int for reproducible output across multiple function calls.
  • n_jobs - int, default=-1 The number of CPUs to use to do the computation.
    • None means 1 unless in a joblib.parallel_backend context.
    • -1 means using all processors.
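As an illustration of the cv parameter, here is a minimal sketch that passes an sklearn splitter instead of an int, as the parameter description above allows. The DecisionTreeClassifier model choice simply mirrors the documented default.

# A minimal sketch: passing an sklearn CV splitter instead of an int for cv.
# Assumes PPIMBC accepts any sklearn-compatible splitter, per the parameter description above.
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from PyImpetus import PPIMBC

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=27)
model = PPIMBC(model=DecisionTreeClassifier(random_state=27), cv=skf, random_state=27, n_jobs=-1)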
# The initialization of PyImpetus takes in multiple parameters as input
# PPIMBR is for regression
model = PPIMBR(model, p_val_thresh, num_simul, simul_size, sig_test_type, cv, verbose, random_state, n_jobs)
  • model - estimator object, default=DecisionTreeRegressor() The model used to perform regression in order to find feature importance via significance testing.
  • p_val_thresh - float, default=0.05 The p-value (in this case, feature importance) below which a feature will be considered as a candidate for the final MB.
  • num_simul - int, default=30 (this parameter has a huge impact on speed) Number of train-test splits performed to check the usefulness of each feature. For large datasets, the value should be reduced considerably, though do not go below 5.
  • simul_size - float, default=0.2 The size of the test set in each train-test split
  • sig_test_type - string, default="non-parametric" This determines the type of significance test to use.
    • "parametric" means a parametric significance test will be used (Note: This test selects very few features)
    • "non-parametric" means a non-parametric significance test will be used
  • cv - cv object/int, default=0 Determines the number of splits for cross-validation. Sklearn CV object can also be passed. A value of 0 means CV is disabled.
  • verbose - int, default=2 Controls the verbosity: the higher the value, the more messages are displayed.
  • random_state - int or RandomState instance, default=None Pass an int for reproducible output across multiple function calls.
  • n_jobs - int, default=-1 The number of CPUs to use to do the computation.
    • None means 1 unless in a joblib.parallel_backend context.
    • -1 means using all processors.
# To fit PyImpetus on provided dataset and find recommended features
fit(data, target)
  • data - A pandas dataframe or a numpy matrix upon which feature selection is to be applied
    (Passing a pandas dataframe allows using the correct column names; a numpy matrix is assigned default column names)
  • target - A numpy array, denoting the target variable
# This function returns the pruned dataset, keeping only the columns that form the MB (the recommended features)
transform(data)
  • data - A pandas dataframe or a numpy matrix which needs to be pruned
    (Passing a pandas dataframe allows using the correct column names; a numpy matrix is assigned default column names)
# To fit PyImpetus on provided dataset and return pruned data
fit_transform(data, target)
  • data - A pandas dataframe or numpy matrix upon which feature selection is to be applied
    (Passing a pandas dataframe allows using the correct column names; a numpy matrix is assigned default column names)
  • target - A numpy array, denoting the target variable
# To plot XGBoost style feature importance
feature_importance()

How to import?

from PyImpetus import PPIMBC, PPIMBR

Usage

If data is a pandas dataframe

# Import the algorithm. PPIMBC is for classification and PPIMBR is for regression
from PyImpetus import PPIMBC, PPIMBR
from sklearn.svm import SVC
# Initialize the PyImpetus object
model = PPIMBC(model=SVC(random_state=27, class_weight="balanced"), p_val_thresh=0.05, num_simul=30, simul_size=0.2, simul_type=0, sig_test_type="non-parametric", cv=5, random_state=27, n_jobs=-1, verbose=2)
# The fit_transform function is a wrapper around the fit and transform functions.
# The fit function finds the MB for the given data, while the transform function returns the pruned form of the dataset
df_train = model.fit_transform(df_train.drop("Response", axis=1), df_train["Response"].values)
df_test = model.transform(df_test)
# Check out the MB
print(model.MB)
# Check out the feature importance scores for the selected feature subset
print(model.feat_imp_scores)
# Get a plot of the feature importance scores
model.feature_importance()

If data is a numpy matrix

# Import the algorithm. PPIMBC is for classification and PPIMBR is for regression
from PyImpetus import PPIMBC, PPIMBR
from sklearn.svm import SVC
# Initialize the PyImpetus object
model = PPIMBC(model=SVC(random_state=27, class_weight="balanced"), p_val_thresh=0.05, num_simul=30, simul_size=0.2, simul_type=0, sig_test_type="non-parametric", cv=5, random_state=27, n_jobs=-1, verbose=2)
# The fit_transform function is a wrapper around the fit and transform functions.
# The fit function finds the MB for the given data, while the transform function returns the pruned form of the dataset
x_train = model.fit_transform(x_train, y_train)
x_test = model.transform(x_test)
# Check out the MB
print(model.MB)
# Check out the feature importance scores for the selected feature subset
print(model.feat_imp_scores)
# Get a plot of the feature importance scores
model.feature_importance()
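
If the task is regression

The workflow is identical with PPIMBR. A minimal sketch mirroring the examples above (the DecisionTreeRegressor model choice and the x_train/y_train arrays are illustrative assumptions):

# Import the regression variant of the algorithm
from PyImpetus import PPIMBR
from sklearn.tree import DecisionTreeRegressor
# Initialize the PyImpetus object; note that PPIMBR takes no simul_type parameter
model = PPIMBR(model=DecisionTreeRegressor(random_state=27), p_val_thresh=0.05, num_simul=30, simul_size=0.2, sig_test_type="non-parametric", cv=5, random_state=27, n_jobs=-1, verbose=2)
# Find the MB on the training data and prune both splits to the selected features
x_train = model.fit_transform(x_train, y_train)
x_test = model.transform(x_test)
# Check out the MB and the feature importance scores
print(model.MB)
print(model.feat_imp_scores)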

For better accuracy

Note: Play with the values of num_simul, simul_size, simul_type and p_val_thresh, because sometimes a specific combination of these values ends up giving the best results (a hypothetical search sketch follows this list)

  • Increase the cv value. In all experiments, cv did not help in getting better accuracy, so use this only when you have an extremely small dataset
  • Increase the num_simul value
  • Try one of these values for simul_size = {0.1, 0.2, 0.3, 0.4}
  • Use non-linear models for feature selection. Apply hyper-parameter tuning to these models
  • Increase the value of p_val_thresh in order to increase the number of features included in the Markov Blanket
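
As mentioned in the note above, a small grid over these parameters is often worth the effort. A hypothetical sketch, scoring each candidate feature subset with cross-validation (the SVC downstream model, the candidate values and the x_train/y_train arrays are illustrative assumptions):

# A hypothetical grid search over two PyImpetus hyper-parameters,
# scoring each candidate feature subset with 5-fold cross-validation.
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from PyImpetus import PPIMBC

best_score, best_params = -1, None
for p_val_thresh in [0.01, 0.05, 0.1]:
    for num_simul in [10, 20, 30]:
        fs = PPIMBC(model=SVC(random_state=27), p_val_thresh=p_val_thresh, num_simul=num_simul, random_state=27, n_jobs=-1, verbose=0)
        x_sel = fs.fit_transform(x_train, y_train)
        score = cross_val_score(SVC(random_state=27), x_sel, y_train, cv=5).mean()
        if score > best_score:
            best_score, best_params = score, (p_val_thresh, num_simul)
print(best_score, best_params)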

For better speeds

  • Decrease the cv value. For large datasets, cv might not be required, so set cv=0 to disable the aggregation step. This results in less robust feature subset selection but at much faster speeds (a combined speed-oriented sketch follows this list)
  • Decrease the num_simul value but don't decrease it below 5
  • Set n_jobs to -1
  • Use linear models
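
Putting these tips together, a speed-oriented configuration might look like the following sketch (the LogisticRegression model choice is an illustrative assumption):

# A speed-oriented configuration sketch: CV disabled, few simulations, a linear model, all cores.
from sklearn.linear_model import LogisticRegression
from PyImpetus import PPIMBC

fast_model = PPIMBC(model=LogisticRegression(max_iter=1000), num_simul=5, cv=0, n_jobs=-1)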

For selecting fewer features

  • Try reducing the p_val_thresh value
  • Try out sig_test_type = "parametric"

Performance in terms of Accuracy (classification) and MSE (regression)

| Dataset | # of samples | # of features | Task Type | Score using all features | Score using featurewiz | Score using PyImpetus | # of features selected | % of features selected | Tutorial |
|---|---|---|---|---|---|---|---|---|---|
| Ionosphere | 351 | 34 | Classification | 88.01% | N/A | 92.86% | 14 | 42.42% | tutorial here |
| Arcene | 100 | 10000 | Classification | 82% | N/A | 84.72% | 304 | 3.04% | |
| AlonDS2000 | 62 | 2000 | Classification | 80.55% | 86.98% | 88.49% | 75 | 3.75% | |
| slice_localization_data | 53500 | 384 | Regression | 6.54 | N/A | 5.69 | 259 | 67.45% | tutorial here |

Note: Here, for the first, second and third tasks, a higher accuracy score is better while for the fourth task, a lower MSE (Mean Squared Error) is better.

Performance in terms of Time (in seconds)

| Dataset | # of samples | # of features | Time (with PyImpetus) |
|---|---|---|---|
| Ionosphere | 351 | 34 | 35.37 |
| Arcene | 100 | 10000 | 1570 |
| AlonDS2000 | 62 | 2000 | 125.511 |
| slice_localization_data | 53500 | 384 | 1296.13 |

Future Ideas

  • Let me know

Feature Request

Drop me an email at [email protected] if you want any particular feature

Please cite this work as

@article{hassan2021ppfs,
  title={PPFS: Predictive Permutation Feature Selection},
  author={Hassan, Atif and Paik, Jiaul H and Khare, Swanand and Hassan, Syed Asif},
  journal={arXiv preprint arXiv:2110.10713},
  year={2021}
}

Corresponding paper

PPFS: Predictive Permutation Feature Selection
