tidyverse / Multidplyr

Licence: other
A dplyr backend that partitions a data frame over multiple processes

Projects that are alternatives of or similar to Multidplyr

CSSS508
CSSS508: Introduction to R for Social Scientists
Stars: ✭ 28 (-94.57%)
Mutual labels:  dplyr
advanced-data-wrangling-in-R-legacy
Advanced data wrangling in R workshop
Stars: ✭ 14 (-97.29%)
Mutual labels:  dplyr
dplyrExtras
Some extra functionality that is not (yet) in dplyr, e.g. mutate_rows or s_filter, s_arrange, ...
Stars: ✭ 20 (-96.12%)
Mutual labels:  dplyr
Dplyr Tutorials
Repository for dplyr tutorials I made
Stars: ✭ 24 (-95.35%)
Mutual labels:  dplyr
datar
A Grammar of Data Manipulation in python
Stars: ✭ 142 (-72.48%)
Mutual labels:  dplyr
casewhen
Create reusable dplyr::case_when() functions
Stars: ✭ 64 (-87.6%)
Mutual labels:  dplyr
dplyover
Create columns by applying functions to vectors and/or columns in 'dplyr'.
Stars: ✭ 42 (-91.86%)
Mutual labels:  dplyr
Tidylog
Tidylog provides feedback about dplyr and tidyr operations. It provides wrapper functions for the most common functions, such as filter, mutate, select, and group_by, and provides detailed output for joins.
Stars: ✭ 428 (-17.05%)
Mutual labels:  dplyr
RLadies RoCur
No description or website provided.
Stars: ✭ 24 (-95.35%)
Mutual labels:  dplyr
starwarsdb
Relational Data from the Star Wars API for Learning and Teaching
Stars: ✭ 34 (-93.41%)
Mutual labels:  dplyr
implyr
SQL backend to dplyr for Impala
Stars: ✭ 74 (-85.66%)
Mutual labels:  dplyr
tutorials
Short programming tutorials pertaining to data analysis.
Stars: ✭ 14 (-97.29%)
Mutual labels:  dplyr
parcours-r
A teaching toolkit for training in R
Stars: ✭ 25 (-95.16%)
Mutual labels:  dplyr
datawizard
Magic potions to clean and transform your data 🧙
Stars: ✭ 149 (-71.12%)
Mutual labels:  dplyr
Tidy
Tidy up your data with JavaScript, inspired by dplyr and the tidyverse
Stars: ✭ 307 (-40.5%)
Mutual labels:  dplyr
eeguana
A package for manipulating EEG data in R.
Stars: ✭ 16 (-96.9%)
Mutual labels:  dplyr
dplyr.teradata
A Teradata Backend for dplyr
Stars: ✭ 16 (-96.9%)
Mutual labels:  dplyr
Dtplyr
Data table backend for dplyr
Stars: ✭ 456 (-11.63%)
Mutual labels:  dplyr
Timetk
A toolkit for working with time series in R
Stars: ✭ 371 (-28.1%)
Mutual labels:  dplyr
learning R
List of resources for learning R
Stars: ✭ 32 (-93.8%)
Mutual labels:  dplyr

multidplyr

Lifecycle: experimental | R-CMD-check | Codecov test coverage | CRAN status

Overview

multidplyr is a backend for dplyr that partitions a data frame across multiple cores. You tell multidplyr how to split the data up with partition() and then the data stays on each node until you explicitly retrieve it with collect(). This minimises the amount of time spent moving data around, and maximises parallel performance. This idea is inspired by partools by Norm Matloff and distributedR by the Vertica Analytics team.

Due to the overhead associated with communicating between the nodes, you won’t see much performance improvement with simple operations on fewer than ~10 million observations, and you may want to instead try dtplyr, which uses data.table. multidplyr’s strength lies in parallelising calls to slower and more complex functions.

(Note that unlike other packages in the tidyverse, multidplyr requires R 3.5 or greater. We hope to relax this requirement in the future.)

Installation

You can install the released version of multidplyr from CRAN with:

install.packages("multidplyr")

And the development version from GitHub with:

# install.packages("devtools")
devtools::install_github("tidyverse/multidplyr")

Usage

To use multidplyr, you first create a cluster of the desired number of workers. Each one of these workers is a separate R process, and the operating system will spread their execution across multiple cores:

library(multidplyr)

cluster <- new_cluster(4)
cluster_library(cluster, "dplyr")
#> 
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#> 
#>     filter, lag
#> The following objects are masked from 'package:base':
#> 
#>     intersect, setdiff, setequal, union

There are two primary ways to use multidplyr. The first, and most efficient, way is to read different files on each worker:

# Create a filename vector containing different values on each worker
cluster_assign_each(cluster, filename = c("a.csv", "b.csv", "c.csv", "d.csv"))

# Use vroom to quickly load the csvs
cluster_send(cluster, my_data <- vroom::vroom(filename))

# Create a party_df using the my_data variable on each worker
my_data <- party_df(cluster, "my_data")

Alternatively, if you already have the data loaded in the main session, you can use partition() to automatically spread it across the workers. Before calling partition(), it’s a good idea to call group_by() to ensure that all of the observations belonging to a group end up on the same worker.

library(nycflights13)

flight_dest <- flights %>% group_by(dest) %>% partition(cluster)
flight_dest
#> Source: party_df [336,776 x 19]
#> Groups: dest
#> Shards: 4 [81,594--86,548 rows]
#> 
#>    year month   day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#>   <int> <int> <int>    <int>          <int>     <dbl>    <int>          <int>
#> 1  2013     1     1      544            545        -1     1004           1022
#> 2  2013     1     1      558            600        -2      923            937
#> 3  2013     1     1      559            600        -1      854            902
#> 4  2013     1     1      602            610        -8      812            820
#> 5  2013     1     1      602            605        -3      821            805
#> 6  2013     1     1      611            600        11      945            931
#> # … with 336,770 more rows, and 11 more variables: arr_delay <dbl>,
#> #   carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> #   air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>

Now you can work with it like a regular data frame, but the computations will be spread across multiple cores. Once you’ve finished computation, use collect() to bring the data back to the host session:

flight_dest %>% 
  summarise(delay = mean(dep_delay, na.rm = TRUE), n = n()) %>% 
  collect()
#> # A tibble: 105 x 3
#>    dest  delay     n
#>    <chr> <dbl> <int>
#>  1 ABQ    13.7   254
#>  2 AUS    13.0  2439
#>  3 BQN    12.4   896
#>  4 BTV    13.6  2589
#>  5 BUF    13.4  4681
#>  6 CLE    13.4  4573
#>  7 CMH    12.2  3524
#>  8 DEN    15.2  7266
#>  9 DSM    26.2   569
#> 10 DTW    11.8  9384
#> # … with 95 more rows

Note that there is some overhead associated with copying data from the worker nodes back to the host node (and vice versa), so you’re best off using multidplyr with more complex operations. See vignette("multidplyr") for more details.
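As an illustration of the kind of "more complex operation" where the parallelism can pay off, the sketch below fits a separate regression for each destination. This is a hypothetical example, not from the package documentation: the model formula and the four-worker cluster are assumptions, and it presumes multidplyr, dplyr, and nycflights13 are installed.

```r
# Hypothetical sketch: one lm() per destination, run in parallel across workers.
library(multidplyr)
library(dplyr)
library(nycflights13)

cluster <- new_cluster(4)
cluster_library(cluster, "dplyr")

result <- flights %>%
  # Drop rows the model can't use, so no group is left with zero complete cases
  filter(!is.na(dep_delay), !is.na(arr_delay)) %>%
  # Group before partitioning so each destination stays on one worker
  group_by(dest) %>%
  partition(cluster) %>%
  # Fitting a model per group is slow enough that parallelism can help
  summarise(slope = coef(lm(arr_delay ~ dep_delay))[[2]], n = n()) %>%
  collect()
```

Because each group's rows live on a single worker, the per-group `lm()` call runs entirely locally; only the small summary table crosses back to the host session.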
