
allisterb / Compute.NET

License: MIT
.NET bindings for native numerical computing

Programming language: C#

Projects that are alternatives of or similar to Compute.NET

mfi
Modern Fortran Interfaces to BLAS and LAPACK
Stars: ✭ 31 (+3.33%)
Mutual labels:  blas, lapack
mir-glas
[Experimental] LLVM-accelerated Generic Linear Algebra Subprograms
Stars: ✭ 99 (+230%)
Mutual labels:  blas, lapack
intel-mkl-src
Redistribute Intel MKL as a crate
Stars: ✭ 52 (+73.33%)
Mutual labels:  blas, lapack
MatlabJuliaMatrixOperationsBenchmark
Benchmark MATLAB & Julia for Matrix Operations
Stars: ✭ 21 (-30%)
Mutual labels:  blas, lapack
Eigen Git Mirror
THIS MIRROR IS DEPRECATED -- New url: https://gitlab.com/libeigen/eigen
Stars: ✭ 1,659 (+5430%)
Mutual labels:  blas, lapack
analisis-numerico-computo-cientifico
Numerical analysis and scientific computing
Stars: ✭ 42 (+40%)
Mutual labels:  blas, lapack
linnea
Linnea is an experimental tool for the automatic generation of optimized code for linear algebra problems.
Stars: ✭ 60 (+100%)
Mutual labels:  blas, lapack
monolish
monolish: MONOlithic LInear equation Solvers for Highly-parallel architecture
Stars: ✭ 166 (+453.33%)
Mutual labels:  blas, lapack
Tensor
A library and extension that provides objects for scientific computing in PHP.
Stars: ✭ 146 (+386.67%)
Mutual labels:  lapack
sblas
Scala Native BLAS (Basic Linear Algebra Subprograms) supporting Linux and macOS
Stars: ✭ 25 (-16.67%)
Mutual labels:  blas
qmc
A Quasi-Monte-Carlo Integrator Library with CUDA Support
Stars: ✭ 17 (-43.33%)
Mutual labels:  numerical
URT
Fast Unit Root Tests and OLS regression in C++ with wrappers for R and Python
Stars: ✭ 70 (+133.33%)
Mutual labels:  blas
optimath
A #[no_std] LinAlg library
Stars: ✭ 47 (+56.67%)
Mutual labels:  blas
blas-benchmarks
Timing results for BLAS (Basic Linear Algebra Subprograms) libraries in R
Stars: ✭ 24 (-20%)
Mutual labels:  blas
dbcsr
DBCSR: Distributed Block Compressed Sparse Row matrix library
Stars: ✭ 65 (+116.67%)
Mutual labels:  blas
Finite-Difference-Method
A Finite Difference Method Engine in C++
Stars: ✭ 15 (-50%)
Mutual labels:  numerical
scalapack
ScaLAPACK development repository
Stars: ✭ 57 (+90%)
Mutual labels:  lapack
SLICOT-Reference
SLICOT - A Fortran subroutines library for systems and control
Stars: ✭ 19 (-36.67%)
Mutual labels:  lapack
react-numeric
A react component for formatted number form fields
Stars: ✭ 30 (+0%)
Mutual labels:  numeric
what-is
Important concepts in numerical analysis and related areas
Stars: ✭ 436 (+1353.33%)
Mutual labels:  numerical

Compute.NET: .NET bindings for native numerical computing

[bind program screenshot]

Get the latest release from the Compute.NET package feed.

About

Compute.NET provides auto-generated bindings for native numerical computing libraries such as the Intel Math Kernel Library (MKL), the AMD Core Math Library (and its successors), NVIDIA CUDA, AMD clBLAS, and other cl* libraries. The bindings are auto-generated from each library's C headers using the excellent CppSharp library. The generator is a CLI program that can be used to generate individual modules of each library and to customize key aspects of the generated code, such as using .NET structs instead of classes for complex data types, and marshalling array parameters to native functions as either managed arrays or pointers.

Status

  • CLI Bindings Generator: Works on Windows.

  • Bindings:

    • Compute.Bindings.IntelMKL package available on the MyGet feed. This library is not Windows-specific, but I haven't tested it on Linux or other platforms yet. The following modules are available:
      • BLAS, CBLAS, SpBLAS, and PBLAS
      • LAPACK and SCALAPACK
      • VML
      • VSL
    • Compute.Bindings.CUDA package available on NuGet and MyGet. This library is not Windows-specific, but I haven't tested it on Linux or other platforms yet. The entire runtime API is bound, together with the following modules:
      • cuBLAS
  • Native Library Packages:

    • Compute.Winx64.IntelMKL package available on MyGet feed.
    • Compute.Winx64.CUDA package available on MyGet and NuGet.

Usage

Intel MKL Bindings

  1. Add the Compute.NET package feed to your NuGet package sources: https://www.myget.org/F/computedotnet/api/v2
  2. Install the bindings package into your project: Install-Package Compute.Bindings.IntelMKL.
  3. (Optional) Install the native library package into your project: Install-Package Compute.Winx64.IntelMKL.

Without step 3 you will need to make sure the .NET runtime can locate the native MKL DLLs or shared library files. You can add the directory containing the library files (typically %MKLROOT%\redist) to your path, or copy the needed files into your project output directory with a build task.
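For the copy-on-build approach, one option is an MSBuild target in your project file along the following lines. This is a sketch, not part of Compute.NET; the redist subdirectory under $(MKLROOT) is an assumption and should be adjusted to match your MKL installation's layout:

```xml
<!-- Hypothetical build task: copies MKL redistributable DLLs next to the build output. -->
<Target Name="CopyMklNativeDlls" AfterTargets="Build">
  <ItemGroup>
    <!-- Adjust the Include path to wherever your MKL installation keeps its DLLs. -->
    <MklNativeDll Include="$(MKLROOT)\redist\intel64\mkl\*.dll" />
  </ItemGroup>
  <Copy SourceFiles="@(MklNativeDll)"
        DestinationFolder="$(OutputPath)"
        SkipUnchangedFiles="true" />
</Target>
```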

With the packages installed you can use the MKL BLAS, vector math, or other routines in your code. For example, the following code is translated from the Intel MKL examples for CBLAS:

using IntelMKL.ILP64;

public class BlasExamples
{
    public const int GENERAL_MATRIX = 0;
    public const int UPPER_MATRIX = 1;
    public const int LOWER_MATRIX = -1;

    public void RunBlasExample1()
    {
        int m = 3, n = 2, i, j;
        int lda = 3, ldb = 3, ldc = 3;
        int rmaxa, cmaxa, rmaxb, cmaxb, rmaxc, cmaxc;
        float alpha = 0.5f, beta = 2.0f;
        float[] a, b, c;
        CBLAS_LAYOUT layout = CBLAS_LAYOUT.CblasRowMajor;
        CBLAS_SIDE side = CBLAS_SIDE.CblasLeft;
        CBLAS_UPLO uplo = CBLAS_UPLO.CblasUpper;
        int ma, na, typeA;
        // The symmetric matrix A is m x m when applied from the left, n x n from the right.
        if (side == CBLAS_SIDE.CblasLeft)
        {
            rmaxa = m + 1;
            cmaxa = m;
            ma = m;
            na = m;
        }
        else
        {
            rmaxa = n + 1;
            cmaxa = n;
            ma = n;
            na = n;
        }
        rmaxb = m + 1;
        cmaxb = n;
        rmaxc = m + 1;
        cmaxc = n;
        a = new float[rmaxa * cmaxa];
        b = new float[rmaxb * cmaxb];
        c = new float[rmaxc * cmaxc];
        // The leading dimension depends on the storage layout.
        if (layout == CBLAS_LAYOUT.CblasRowMajor)
        {
            lda = cmaxa;
            ldb = cmaxb;
            ldc = cmaxc;
        }
        else
        {
            lda = rmaxa;
            ldb = rmaxb;
            ldc = rmaxc;
        }
        if (uplo == CBLAS_UPLO.CblasUpper)
            typeA = UPPER_MATRIX;
        else
            typeA = LOWER_MATRIX;
        // Fill A, B, and C with constant values.
        for (i = 0; i < m; i++)
        {
            for (j = 0; j < m; j++)
            {
                a[i + j * lda] = 1.0f;
            }
        }
        for (i = 0; i < m; i++)
        {
            for (j = 0; j < n; j++)
            {
                c[i + j * ldc] = 1.0f;
                b[i + j * ldb] = 2.0f;
            }
        }
        // C := alpha*A*B + beta*C
        CBlas.Ssymm(layout, side, uplo, m, n, alpha, ref a[0], lda, ref b[0], ldb, beta, ref c[0], ldc);
    }
}

Enums like CBLAS_UPLO are generated from the CBLAS header file. You pass double[] and float[] arrays to the BLAS functions using a ref alias to the first element of the array, which is converted to a pointer and passed to the native function. You can use either LP64 or ILP64 array indexing depending on the namespace you import.
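The ref-to-first-element pattern can be illustrated without MKL. The following self-contained sketch (the Scale routine is a hypothetical stand-in, not part of the bindings) shows how a ref float parameter pointing at a[0] exposes the array's contiguous storage to pointer-based code, which is what the marshalled native function sees. It requires compiling with AllowUnsafeBlocks enabled:

```csharp
using System;

class RefAliasDemo
{
    // Hypothetical stand-in for a native routine: scales n contiguous floats in place.
    // The `ref float x` parameter plays the role of the `ref a[0]` argument in the
    // generated bindings; fixing it yields a pointer to the whole array's storage.
    static unsafe void Scale(int n, float alpha, ref float x)
    {
        fixed (float* p = &x)
        {
            for (int i = 0; i < n; i++)
                p[i] *= alpha;
        }
    }

    static void Main()
    {
        float[] v = { 1f, 2f, 3f };
        Scale(v.Length, 2f, ref v[0]);
        Console.WriteLine(string.Join(", ", v)); // 2, 4, 6
    }
}
```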

Bindings Generator

The basic syntax is bind LIBRARY MODULE [OPTIONS]. For example, bind mkl --vml --ilp64 -n IntelMKL -o .\Compute.Bindings.IntelMKL -c Vml --file vml.ilp64.cs will create bindings for the Intel MKL VML routines, with ILP64 array indexing, in the .NET class Vml and namespace IntelMKL, written to the file vml.ilp64.cs in the .\Compute.Bindings.IntelMKL output directory.
