flatironinstitute / sparse_dot

License: MIT
Python wrapper for Intel Math Kernel Library (MKL) matrix multiplication

Programming Languages

Python: 139,335 projects (#7 most used programming language)
Jupyter Notebook: 11,667 projects

Projects that are alternatives to or similar to sparse_dot

Algorithmic-Trading
I have been deeply interested in algorithmic trading and systematic trading algorithms. This repository contains the code for what I have learned along the way, starting from basic statistics and leading up to complex machine learning algorithms.
Stars: ✭ 47 (+23.68%)
Mutual labels:  numpy, scipy
object-detection-with-deep-learning
Demonstrating the use of convolutional neural networks to detect objects in a video
Stars: ✭ 17 (-55.26%)
Mutual labels:  numpy, scipy
introduction to ml with python
Jupyter notebooks and code for the book "Introduction to Machine Learning with Python" (revised Korean edition).
Stars: ✭ 211 (+455.26%)
Mutual labels:  numpy, scipy
sparse
Sparse matrix formats for linear algebra supporting scientific and machine learning applications
Stars: ✭ 136 (+257.89%)
Mutual labels:  matrix-multiplication, sparse-matrix
polytope
Geometric operations on polytopes of any dimension
Stars: ✭ 51 (+34.21%)
Mutual labels:  numpy, scipy
skinner
Skin export / import tools for Autodesk Maya
Stars: ✭ 68 (+78.95%)
Mutual labels:  numpy, scipy
dsp-theory
Theory of digital signal processing (DSP): signals, filtration (IIR, FIR, CIC, MAF), transforms (FFT, DFT, Hilbert, Z-transform) etc.
Stars: ✭ 643 (+1592.11%)
Mutual labels:  numpy, scipy
Cheatsheets Ai
Essential Cheat Sheets for deep learning and machine learning researchers https://medium.com/@kailashahirwar/essential-cheat-sheets-for-machine-learning-and-deep-learning-researchers-efb6a8ebd2e5
Stars: ✭ 14,095 (+36992.11%)
Mutual labels:  numpy, scipy
dpnp
NumPy drop-in replacement for Intel(R) XPUs
Stars: ✭ 42 (+10.53%)
Mutual labels:  numpy, mkl
grblas
Python wrapper around GraphBLAS
Stars: ✭ 22 (-42.11%)
Mutual labels:  sparse, sparse-matrix
MKLSparse.jl
Makes the sparse matrix functionality in MKL available to Julia
Stars: ✭ 42 (+10.53%)
Mutual labels:  sparse, mkl
Scipy-Bordeaux-2017
Course taught at the University of Bordeaux in the academic year 2017 for PhD students.
Stars: ✭ 16 (-57.89%)
Mutual labels:  numpy, scipy
Orange3
🍊 📊 💡 Orange: Interactive data analysis
Stars: ✭ 3,152 (+8194.74%)
Mutual labels:  numpy, scipy
PVMismatch
An explicit Python PV system IV & PV curve trace calculator which can also calculate mismatch.
Stars: ✭ 51 (+34.21%)
Mutual labels:  numpy, scipy
Ml Feynman Experience
A collection of analytics methods implemented with Python on Google Colab
Stars: ✭ 217 (+471.05%)
Mutual labels:  numpy, scipy
audiophile
Audio fingerprinting and recognition
Stars: ✭ 17 (-55.26%)
Mutual labels:  numpy, scipy
Tftb
A Python module for time-frequency analysis
Stars: ✭ 185 (+386.84%)
Mutual labels:  numpy, scipy
Pybotics
The Python Toolbox for Robotics
Stars: ✭ 192 (+405.26%)
Mutual labels:  numpy, scipy
SciCompforChemists
Scientific Computing for Chemists text for teaching basic computing skills to chemistry students using Python, Jupyter notebooks, and the SciPy stack. This text makes use of a variety of packages including NumPy, SciPy, matplotlib, pandas, seaborn, NMRglue, SymPy, scikit-image, and scikit-learn.
Stars: ✭ 65 (+71.05%)
Mutual labels:  numpy, scipy
dbcsr
DBCSR: Distributed Block Compressed Sparse Row matrix library
Stars: ✭ 65 (+71.05%)
Mutual labels:  matrix-multiplication, sparse-matrix

sparse_dot_mkl

This is a wrapper for the sparse matrix multiplication routines in the Intel MKL library. It is implemented entirely in native Python using ctypes. The main advantage of MKL (and the motivation for this package) is multithreaded sparse matrix multiplication; the scipy sparse implementation is single-threaded at the time of writing (2020-01-03). A secondary advantage is direct multiplication of a sparse and a dense matrix without requiring any intermediate conversion (also multithreaded).

Three functions are explicitly available: dot_product_mkl, gram_matrix_mkl, and sparse_qr_solve_mkl.
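
A minimal usage sketch of the main entry point (shapes, densities, and values below are illustrative, not part of the API):

    import numpy as np
    import scipy.sparse as sps
    from sparse_dot_mkl import dot_product_mkl

    # Multithreaded product of a sparse CSR matrix and a dense array,
    # with no intermediate conversion of either operand
    a = sps.random(2000, 1000, density=0.01, format="csr", dtype=np.float64)
    b = np.random.rand(1000, 50)

    c = dot_product_mkl(a, b)   # dense (2000, 50) ndarray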

dot_product_mkl

dot_product_mkl(matrix_a, matrix_b, cast=False, copy=True, reorder_output=False, dense=False, debug=False, out=None, out_scalar=None)

matrix_a and matrix_b are either numpy arrays (1d or 2d) or scipy sparse matrices (CSR, CSC, or BSR). BSR matrices are supported for matrix-matrix multiplication only if one matrix is a dense array or both sparse matrices are BSR. Sparse COO matrices are not supported. Numpy arrays must be contiguous. Non-contiguous arrays should be copied to a contiguous array prior to calling this function.

This package only works with float or complex float data. cast=True will convert data to double-precision floats or complex floats by making an internal copy if necessary. If A and B are both single-precision floats or complex floats they will be used as is. cast=False will raise a ValueError if the input arrays are not both double-precision or both single-precision. This defaults to False on the principle that potentially unsafe dtype conversions should not occur without explicit instruction.
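
For example, mixing single and double precision is rejected unless cast=True is passed (matrix sizes below are arbitrary):

    import numpy as np
    import scipy.sparse as sps
    from sparse_dot_mkl import dot_product_mkl

    a32 = sps.random(100, 200, density=0.05, format="csr", dtype=np.float32)
    b64 = sps.random(200, 50, density=0.05, format="csr", dtype=np.float64)

    # Mixed single/double precision raises ValueError by default ...
    try:
        dot_product_mkl(a32, b64)
    except ValueError:
        pass

    # ... but cast=True makes an internal double-precision copy and proceeds
    c = dot_product_mkl(a32, b64, cast=True)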

The output will be a dense array, unless both inputs are sparse, in which case the output will be a sparse matrix. The sparse matrix output format will be the same as the left (A) input sparse matrix. dense=True will directly produce a dense array during sparse matrix multiplication. dense has no effect if a dense array would be produced anyway. Dense array outputs may be row-ordered or column-ordered, depending on input ordering.
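
A short sketch of the output-type rules described above (shapes and densities are arbitrary):

    import numpy as np
    import scipy.sparse as sps
    from sparse_dot_mkl import dot_product_mkl

    a = sps.random(300, 400, density=0.02, format="csr", dtype=np.float64)
    b = sps.random(400, 100, density=0.02, format="csr", dtype=np.float64)

    c1 = dot_product_mkl(a, b)               # sparse output, same format as A (CSR)
    c2 = dot_product_mkl(a, b, dense=True)   # dense ndarray produced directly
    c3 = dot_product_mkl(a, b.toarray())     # dense ndarray (one input is dense)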

copy is deprecated and has no effect.

reorder_output=True will order sparse matrix indices in the output matrix. It has no effect if the output is a dense array. Note that input sparse matrices may be reordered in place without warning; this does not change the data, only the way it is stored. Scipy matrix multiplication does not produce ordered outputs, so this defaults to False.

out is an optional reference to a dense output array to which the product of the matrix multiplication will be added. This must be identical in attributes to the array that would be returned if it was not used. Specifically it must have the correct shape, dtype, and column- or row-major order and it must be contiguous. A ValueError will be raised if any attribute of this array is incorrect. This function will return a reference to the same array object when out is set.

out_scalar is an optional element-wise scaling of out, if out is provided. It will multiply out prior to adding the matrix multiplication, such that out := matrix_a * matrix_b + out_scalar * out.
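
A sketch of the accumulation behavior, assuming a dense product so that out applies and that the default row-major buffer layout matches the product (shapes and the scalar are illustrative):

    import numpy as np
    import scipy.sparse as sps
    from sparse_dot_mkl import dot_product_mkl

    a = sps.random(200, 300, density=0.05, format="csr", dtype=np.float64)
    b = np.random.rand(300, 50)                    # dense operand -> dense product

    # Accumulate into an existing buffer: out := a @ b + 2.0 * out
    buffer = np.ones((200, 50), dtype=np.float64)  # must match shape, dtype, and ordering
    result = dot_product_mkl(a, b, out=buffer, out_scalar=2.0)
    assert result is buffer                        # the same array object is returned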

sparse_qr_solve_mkl

sparse_qr_solve_mkl(matrix_a, matrix_b, cast=False, debug=False)

This is a QR solver for systems of linear equations (AX = B) where matrix_a is a sparse CSR matrix and matrix_b is a dense matrix. It will return a dense array X.

cast=True will convert data to compatible floats by making an internal copy if necessary. It will also convert a CSC matrix to a CSR matrix if necessary.
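
A small sketch of solving AX = B; the diagonally dominant construction below is only there to guarantee a solvable system and is not required by the API:

    import numpy as np
    import scipy.sparse as sps
    from sparse_dot_mkl import sparse_qr_solve_mkl

    # Diagonally dominant sparse CSR matrix, so the system is comfortably solvable
    a = sps.eye(100, format="csr") + 0.01 * sps.random(100, 100, density=0.1,
                                                       format="csr", dtype=np.float64)
    b = np.random.rand(100, 3)

    x = sparse_qr_solve_mkl(a, b)
    print(np.allclose(a @ x, b))   # True, up to floating-point tolerance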

gram_matrix_mkl

gram_matrix_mkl(matrix, transpose=False, cast=False, dense=False, debug=False, reorder_output=False)

This will calculate the gram matrix AᵀA for the matrix A, where A is a dense array or a sparse CSR matrix. It will return the upper triangular portion of the resulting symmetric matrix. If A is sparse, it will return a sparse matrix unless dense=True is set.

transpose=True will instead return AAᵀ.

reorder_output=True will order sparse matrix indices in the output matrix.

cast=True will convert data to compatible floats by making an internal copy if necessary. It will also convert a CSC matrix to a CSR matrix if necessary.
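
A short sketch, relying on the upper-triangular output described above (shapes and densities are arbitrary):

    import numpy as np
    import scipy.sparse as sps
    from sparse_dot_mkl import gram_matrix_mkl

    a = sps.random(500, 20, density=0.1, format="csr", dtype=np.float64)

    g = gram_matrix_mkl(a)                     # upper triangle of A.T @ A, sparse
    g_dense = gram_matrix_mkl(a, dense=True)   # same values as a dense ndarray

    # Only the upper triangle is populated; reconstruct the full symmetric
    # matrix if downstream code expects it
    full = g_dense + g_dense.T - np.diag(np.diag(g_dense))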

Requirements

This package requires the MKL runtime linking library libmkl_rt.so (or libmkl_rt.dylib for OSX, or mkl_rt.dll for Windows). If the MKL library cannot be loaded, an ImportError will be raised when the package is first imported. MKL is distributed with the full version of conda, and can be installed into Miniconda with conda install -c intel mkl. Alternatively, you may need to add the path to the MKL shared objects to LD_LIBRARY_PATH (e.g. export LD_LIBRARY_PATH=/opt/intel/mkl/lib/intel64:$LD_LIBRARY_PATH). There are some known bugs in MKL v2019 which may cause intermittent segfaults. Updating to MKL v2020 is highly recommended if you encounter any problems.
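
A quick way to confirm that the MKL runtime can be found is simply to import the package, since the ImportError described above is raised on first import:

    # Raises ImportError if libmkl_rt cannot be located on the loader path
    import sparse_dot_mkl
    print(sparse_dot_mkl.dot_product_mkl)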
