
CrowdTruth / CrowdTruth

Licence: other
Version 1.0 of the CrowdTruth Framework for crowdsourcing ground truth data, used for training and evaluating cognitive computing systems. Also check out version 2.0 at https://github.com/CrowdTruth/CrowdTruth-core. Data collected with the CrowdTruth methodology: http://data.crowdtruth.org/. Our papers: http://crowdtruth.org/papers/

Programming Languages

JavaScript
184084 projects - #8 most used programming language
HTML
75241 projects
PHP
23972 projects - #3 most used programming language
CSS
56736 projects
Python
139335 projects - #7 most used programming language
Shell
77523 projects

Projects that are alternatives of or similar to CrowdTruth

Radish
Behavior Driven Development tooling for Python. The root from red to green.
Stars: ✭ 153 (+146.77%)
Mutual labels:  quality
Codeclimate
Code Climate CLI
Stars: ✭ 2,273 (+3566.13%)
Mutual labels:  quality
javascript-test-reporter
DEPRECATED Code Climate test reporter client for JavaScript projects
Stars: ✭ 68 (+9.68%)
Mutual labels:  quality
Coveragechecker
Allows old code to use new standards
Stars: ✭ 159 (+156.45%)
Mutual labels:  quality
Phpmetrics
Beautiful and understandable static analysis tool for PHP
Stars: ✭ 2,180 (+3416.13%)
Mutual labels:  quality
Qulice
Quality Police for Java projects
Stars: ✭ 250 (+303.23%)
Mutual labels:  quality
Benchmarks
Comparison tools
Stars: ✭ 139 (+124.19%)
Mutual labels:  quality
resiliency
A modern PHP library that allows you to make resilient calls to external services 🔁
Stars: ✭ 79 (+27.42%)
Mutual labels:  quality
Constexpr Everything
Rewrite C++ code to automatically apply `constexpr` where possible
Stars: ✭ 178 (+187.1%)
Mutual labels:  quality
sbt-sonar
An sbt plugin which provides an easy way to integrate Scala projects with SonarQube.
Stars: ✭ 62 (+0%)
Mutual labels:  quality
Jpeek
Java Code Static Metrics (Cohesion, Coupling, etc.)
Stars: ✭ 168 (+170.97%)
Mutual labels:  quality
Vividus
Vividus is an all-in-one test automation tool
Stars: ✭ 170 (+174.19%)
Mutual labels:  quality
introduction-nodejs
Introduction to NodeJS
Stars: ✭ 13 (-79.03%)
Mutual labels:  quality
Eslint Plugin Boundaries
Eslint plugin checking architecture boundaries between elements
Stars: ✭ 157 (+153.23%)
Mutual labels:  quality
meval-rs
Math expression parser and evaluation library for Rust
Stars: ✭ 118 (+90.32%)
Mutual labels:  evaluation
Sonar Cnes Report
Generates analysis reports from SonarQube web API.
Stars: ✭ 145 (+133.87%)
Mutual labels:  quality
Faceimagequality
Code and information for face image quality assessment with SER-FIQ
Stars: ✭ 223 (+259.68%)
Mutual labels:  quality
performance testing
Tools, articles, etc. related to performance/load/etc. testing.
Stars: ✭ 172 (+177.42%)
Mutual labels:  quality
video-quality-metrics
Test specified presets/CRF values for the x264 or x265 encoder. Compares VMAF/SSIM/PSNR numerically & via graphs.
Stars: ✭ 87 (+40.32%)
Mutual labels:  quality
php-best-practices
What I consider the best practices for web and software development.
Stars: ✭ 60 (-3.23%)
Mutual labels:  quality

Notice: This repository is no longer being maintained. Please see CrowdTruth-core for the latest release of the framework.


The CrowdTruth Framework implements an approach to machine-human computing for collecting annotation data on text, images, sounds, and videos. The approach is focused specifically on collecting gold standard data for training and evaluating cognitive computing systems. The original framework was inspired by the IBM Watson project, where it provided improved (multi-perspective) gold standard (medical) text annotation data for training and evaluating various IBM Watson components, such as Medical Relation Extraction, Medical Factor Extraction, and Question-Answer passage alignment.

The CrowdTruth framework supports the composition of CrowdTruth gathering workflows, in which a sequence of micro-annotation tasks can be configured and sent out to a number of crowdsourcing platforms (e.g., CrowdFlower and Amazon Mechanical Turk) and applications (e.g., the expert annotation game Dr. Detective). The framework has a special focus on micro-tasks for knowledge extraction from medical text drawn from various sources, such as Wikipedia articles or patient case reports. The main steps in the CrowdTruth workflow are:

  1. Exploring and processing the input data
  2. Collecting annotation data
  3. Applying disagreement analytics to the results

These steps are realised in an automated end-to-end workflow that supports continuous collection of high-quality gold standard data, with a feedback loop to all steps of the process. Have a look at our presentations and papers for more details on the research.
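To give a feel for step 3, the sketch below shows a simplified, CrowdTruth-style disagreement metric. This is a hypothetical illustration, not the framework's actual API: each worker's judgments on a media unit are encoded as a binary annotation vector over a closed set of labels, and a worker's agreement with the crowd is the cosine similarity between their vector and the sum of all other workers' vectors on the same unit.

```python
# Hypothetical sketch of a CrowdTruth-style disagreement metric.
# The label set and function names are illustrative, not the framework's API.
from math import sqrt

LABELS = ["treats", "causes", "symptom_of"]  # example relation labels

def annotation_vector(chosen, labels=LABELS):
    """Encode one worker's chosen labels as a binary vector."""
    return [1 if lab in chosen else 0 for lab in labels]

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def worker_unit_agreement(judgments):
    """For each worker, cosine similarity between their annotation vector
    and the sum of all other workers' vectors on the same media unit."""
    vectors = [annotation_vector(j) for j in judgments]
    scores = []
    for i, v in enumerate(vectors):
        others = [w for k, w in enumerate(vectors) if k != i]
        rest = [sum(col) for col in zip(*others)]
        scores.append(cosine(v, rest))
    return scores

# Three workers annotate one sentence; the third also picks "causes".
judgments = [{"treats"}, {"treats"}, {"treats", "causes"}]
print([round(s, 2) for s in worker_unit_agreement(judgments)])  # [0.89, 0.89, 0.71]
```

The dissenting worker gets a lower score, but is not discarded: in the CrowdTruth approach, disagreement is treated as signal about ambiguity in the unit, the worker, and the annotation itself, rather than as noise.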

Using CrowdTruth

Start using CrowdTruth right now, completely free, and explore all its possibilities. Follow the installation guide to get started, or check out our wiki for the full platform documentation. We have some crowdsourcing templates ready for you to start with.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].