All Projects → CARLA → Similar Projects or Alternatives

694 Open source projects that are alternatives of or similar to CARLA

ml-fairness-framework
FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)
Stars: ✭ 59 (-64.46%)
Interpret
Fit interpretable models. Explain black-box machine learning.
Stars: ✭ 4,352 (+2521.69%)
responsible-ai-toolbox
This project provides responsible AI user interfaces for Fairlearn, interpret-community, and Error Analysis, as well as foundational building blocks that they rely on.
Stars: ✭ 615 (+270.48%)
mllp
Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
Stars: ✭ 15 (-90.96%)
ProtoTree
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
Stars: ✭ 47 (-71.69%)
fastshap
Fast approximate Shapley values in R
Stars: ✭ 79 (-52.41%)
Mutual labels:  explainable-ai, explainable-ml
Mindsdb
Predictive AI layer for existing databases.
Stars: ✭ 4,199 (+2429.52%)
Mutual labels:  explainable-ai, explainable-ml
bench
⏱️ Reliable performance measurement for Go programs. All-in-one design.
Stars: ✭ 33 (-80.12%)
Mutual labels:  benchmarking, benchmark
Benchmarkdotnet
Powerful .NET library for benchmarking
Stars: ✭ 7,138 (+4200%)
Mutual labels:  benchmarking, benchmark
Bench Scripts
A compilation of Linux server benchmarking scripts.
Stars: ✭ 873 (+425.9%)
Mutual labels:  benchmarking, benchmark
Dana
Test/benchmark regression and comparison system with dashboard
Stars: ✭ 46 (-72.29%)
Mutual labels:  benchmarking, benchmark
Pytest Django Queries
Generate performance reports from your Django database performance tests.
Stars: ✭ 54 (-67.47%)
Mutual labels:  benchmarking, benchmark
language-benchmarks
A simple benchmark system for compiled and interpreted languages.
Stars: ✭ 21 (-87.35%)
Mutual labels:  benchmarking, benchmark
meg
Molecular Explanation Generator
Stars: ✭ 14 (-91.57%)
Tensorwatch
Debugging, monitoring and visualization for Python Machine Learning and Data Science
Stars: ✭ 3,191 (+1822.29%)
Mutual labels:  explainable-ai, explainable-ml
SHAP FOLD
(Explainable AI) - Learning Non-Monotonic Logic Programs From Statistical Models Using High-Utility Itemset Mining
Stars: ✭ 35 (-78.92%)
Mutual labels:  explainable-ai, explainable-ml
Web Tooling Benchmark
JavaScript benchmark for common web developer workloads
Stars: ✭ 290 (+74.7%)
Mutual labels:  benchmarking, benchmark
Lzbench
lzbench is an in-memory benchmark of open-source LZ77/LZSS/LZMA compressors
Stars: ✭ 490 (+195.18%)
Mutual labels:  benchmarking, benchmark
Pibench
Benchmarking framework for index structures on persistent memory
Stars: ✭ 46 (-72.29%)
Mutual labels:  benchmarking, benchmark
Knowledge distillation via TF2.0
Code for recent knowledge distillation algorithms and benchmark results using the TF 2.0 low-level API
Stars: ✭ 87 (-47.59%)
Mutual labels:  benchmark, tensorflow2
awesome-graph-explainability-papers
Papers about explainability of GNNs
Stars: ✭ 153 (-7.83%)
Mutual labels:  explainable-ai, explainability
Benchexec
BenchExec: A Framework for Reliable Benchmarking and Resource Measurement
Stars: ✭ 108 (-34.94%)
Mutual labels:  benchmarking, benchmark
Sltbench
C++ benchmark tool. Practical, stable and fast performance testing framework.
Stars: ✭ 137 (-17.47%)
Mutual labels:  benchmarking, benchmark
DataScience ArtificialIntelligence Utils
Examples of Data Science projects and Artificial Intelligence use cases
Stars: ✭ 302 (+81.93%)
Mutual labels:  explainable-ai, explainable-ml
Are We Fast Yet
Are We Fast Yet? Comparing Language Implementations with Objects, Closures, and Arrays
Stars: ✭ 161 (-3.01%)
Mutual labels:  benchmarking, benchmark
concept-based-xai
Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
Stars: ✭ 41 (-75.3%)
Mutual labels:  explainable-ai, explainability
zennit
Zennit is a high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP.
Stars: ✭ 57 (-65.66%)
Mutual labels:  explainable-ai, explainability
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (-33.73%)
Mutual labels:  explainable-ai, explainability
shapr
Explaining the output of machine learning models with more accurately estimated Shapley values
Stars: ✭ 95 (-42.77%)
Mutual labels:  explainable-ai, explainable-ml
Jsperf.com
jsperf.com v2. https://github.com/h5bp/lazyweb-requests/issues/174
Stars: ✭ 1,178 (+609.64%)
Mutual labels:  benchmarking, benchmark
global-attribution-mapping
GAM (Global Attribution Mapping) explains the landscape of neural network predictions across subpopulations
Stars: ✭ 18 (-89.16%)
Mutual labels:  explainable-ai, explainable-ml
php-orm-benchmark
A benchmark comparing the performance of PHP ORM solutions.
Stars: ✭ 82 (-50.6%)
Mutual labels:  benchmarking, benchmark
deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Stars: ✭ 110 (-33.73%)
Mutual labels:  explainable-ai, explainability
Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
Stars: ✭ 484 (+191.57%)
Mutual labels:  explainable-ai, explainability
best
🏆 Delightful Benchmarking & Performance Testing
Stars: ✭ 73 (-56.02%)
Mutual labels:  benchmarking, benchmark
p3arsec
Parallel Patterns Implementation of PARSEC Benchmark Applications
Stars: ✭ 12 (-92.77%)
Mutual labels:  benchmarking, benchmark
Pytest Benchmark
py.test fixture for benchmarking code
Stars: ✭ 730 (+339.76%)
Mutual labels:  benchmarking, benchmark
Unchase.FluentPerformanceMeter
🔨 Make exact performance measurements of public methods of public classes using this NuGet package with a fluent interface. Requires .NET Standard 2.0+. An open-source project under the Apache-2.0 license.
Stars: ✭ 33 (-80.12%)
Mutual labels:  benchmarking, benchmark
Rtb
Benchmarking tool to stress real-time protocols
Stars: ✭ 35 (-78.92%)
Mutual labels:  benchmarking, benchmark
Sysbench Docker Hpe
Sysbench Dockerfiles and scripts for benchmarking MySQL in VMs and containers
Stars: ✭ 14 (-91.57%)
Mutual labels:  benchmarking, benchmark
Jsbench Me
jsbench.me - JavaScript performance benchmarking playground
Stars: ✭ 50 (-69.88%)
Mutual labels:  benchmarking, benchmark
Tsung
Tsung is a high-performance benchmark framework for various protocols, including HTTP, XMPP, and LDAP.
Stars: ✭ 2,185 (+1216.27%)
Mutual labels:  benchmarking, benchmark
Phoronix Test Suite
The Phoronix Test Suite is open-source, cross-platform automated testing/benchmarking software.
Stars: ✭ 1,339 (+706.63%)
Mutual labels:  benchmarking, benchmark
Ezfio
Simple NVMe/SAS/SATA SSD test framework for Linux and Windows
Stars: ✭ 91 (-45.18%)
Mutual labels:  benchmarking, benchmark
Gatling Dubbo
A Gatling plugin for running load tests on Apache Dubbo (https://github.com/apache/incubator-dubbo) and other Java ecosystem projects.
Stars: ✭ 131 (-21.08%)
Mutual labels:  benchmarking, benchmark
Karma Benchmark
A Karma plugin to run Benchmark.js over multiple browsers with CI-compatible output.
Stars: ✭ 88 (-46.99%)
Mutual labels:  benchmarking, benchmark
3D-GuidedGradCAM-for-Medical-Imaging
This repo contains an implementation for generating Guided Grad-CAM for 3D medical imaging from NIfTI files in TensorFlow 2.0. Different input files can be used; in that case, edit the input to the Guided Grad-CAM model.
Stars: ✭ 60 (-63.86%)
Mutual labels:  explainable-ai, tensorflow2
Scalajs Benchmark
Benchmarks: write in Scala or JS, run in your browser. Live demo:
Stars: ✭ 63 (-62.05%)
Mutual labels:  benchmarking, benchmark
dlime experiments
In this work, we propose a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).
Stars: ✭ 21 (-87.35%)
Mutual labels:  explainable-ai, explainable-ml
LuaJIT-Benchmarks
LuaJIT Benchmark tests
Stars: ✭ 20 (-87.95%)
Mutual labels:  benchmarking, benchmark
beapi-bench
Tool for benchmarking APIs. Uses ApacheBench (ab) to generate data and gnuplot for graphing. New features added almost daily.
Stars: ✭ 16 (-90.36%)
Mutual labels:  benchmarking, benchmark
amazon-sagemaker-mlops-workshop
MLOps workshop with Amazon SageMaker
Stars: ✭ 39 (-76.51%)
Mutual labels:  tensorflow2
FLEXS
Fitness landscape exploration sandbox for biological sequence design.
Stars: ✭ 92 (-44.58%)
Mutual labels:  benchmarking
path explain
A repository for explaining feature attributions and feature interactions in deep neural networks.
Stars: ✭ 151 (-9.04%)
Mutual labels:  explainable-ai
gcnn keras
Graph convolution with tf.keras
Stars: ✭ 47 (-71.69%)
Mutual labels:  tensorflow2
ttt
A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+
Stars: ✭ 35 (-78.92%)
Mutual labels:  tensorflow2
benchmark-http
No description or website provided.
Stars: ✭ 15 (-90.96%)
Mutual labels:  benchmark
benchee html
Draws pretty micro-benchmarking charts in HTML and allows exporting them as PNG, for benchee
Stars: ✭ 50 (-69.88%)
Mutual labels:  benchmarking
deep reinforcement learning gallery
Deep reinforcement learning with TensorFlow 2
Stars: ✭ 35 (-78.92%)
Mutual labels:  tensorflow2
LAP-solvers
Benchmarking Linear Assignment Problem Solvers
Stars: ✭ 69 (-58.43%)
Mutual labels:  benchmarking
1-60 of 694 similar projects