
TensorFlow Recommenders Addons



TensorFlow Recommenders Addons (TFRA) is a collection of projects related to large-scale recommendation systems built upon TensorFlow. It introduces Dynamic Embedding Technology to TensorFlow, which makes TensorFlow more suitable for training Search, Recommendation, and Advertising models, and makes building, evaluating, and serving sophisticated recommender models easy. See the approved TensorFlow RFC #313. These contributions are complementary to TensorFlow Core, TensorFlow Recommenders, and others.

For Apple silicon (M1), please refer to Apple Silicon Support.

Main Features

  • Make key-value data structures (dynamic embeddings) trainable in TensorFlow (see the sketch after this list)
  • Achieve better recommendation quality than the static embedding mechanism, since dynamic embeddings have no hash conflicts
  • Compatible with all native TensorFlow optimizers and initializers
  • Compatible with the native TensorFlow CheckPoint and SavedModel formats
  • Fully support training and inference of recommender models on GPUs
  • Support TensorFlow Serving and Triton Inference Server as inference frameworks
  • Support various key-value implementations as dynamic embedding storage, and easy to extend
  • Support semi-synchronous training based on Horovod:
    • Synchronous training for dense weights
    • Asynchronous training for sparse weights
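
A minimal sketch of these features in use with the tfra.dynamic_embedding API (the variable name and loss are illustrative, and details may need adjustment for your TFRA/TF versions):

import tensorflow as tf
import tensorflow_recommenders_addons as tfra

# A trainable key-value structure: int64 keys map to float32 vectors of length `dim`.
embeddings = tfra.dynamic_embedding.get_variable(
    name="user_embeddings",  # illustrative name
    key_dtype=tf.int64,
    value_dtype=tf.float32,
    dim=8,
    initializer=tf.keras.initializers.RandomNormal())

# Wrap a native TensorFlow optimizer so it can also update dynamic embeddings.
optimizer = tfra.dynamic_embedding.DynamicEmbeddingOptimizer(
    tf.keras.optimizers.Adam(1E-3))

ids = tf.constant([10, 20, 30], dtype=tf.int64)
with tf.GradientTape() as tape:
  # Look up values for the given keys; unseen keys are initialized on first use.
  emb, trainable = tfra.dynamic_embedding.embedding_lookup(
      embeddings, ids, return_trainable=True)
  loss = tf.reduce_sum(emb * emb)  # stand-in loss for illustration
grads = tape.gradient(loss, [trainable])
optimizer.apply_gradients(zip(grads, [trainable]))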


Contributors

TensorFlow Recommenders-Addons depends on public contributions, bug fixes, and documentation. This project exists thanks to all the people and organizations who contribute. [Contribute]



A special thanks to the NVIDIA Merlin Team and NVIDIA China DevTech Team, who have provided GPU acceleration technology support and code contributions.

Tutorials & Demos

See the tutorials and demos for end-to-end examples of each subpackage.

Installation

Stable Builds

TensorFlow Recommenders-Addons is available on PyPI for Linux and macOS. To install the latest version, run the following:

pip install tensorflow-recommenders-addons

By default, the CPU version will be installed. To install the GPU version, run the following:

pip install tensorflow-recommenders-addons-gpu

To use TensorFlow Recommenders-Addons:

import tensorflow as tf
import tensorflow_recommenders_addons as tfra

Compatibility with TensorFlow

TensorFlow C++ APIs are not stable, and thus we can only guarantee compatibility with the version of TensorFlow that TensorFlow Recommenders-Addons (TFRA) was built against. It is possible that TFRA will work with multiple versions of TensorFlow, but there is also a chance of segmentation faults or other problematic crashes. Warnings will be emitted if your TensorFlow version does not match what TFRA was built against.

Additionally, TFRA custom ops registration does not have a stable ABI interface, so users need a compatibly built installation of TensorFlow even when the versions match what we built against. A simple rule of thumb: TensorFlow Recommenders-Addons custom ops will work with pip-installed TensorFlow, but will have issues when TensorFlow is compiled differently. A typical example of this is conda-installed TensorFlow. RFC #133 aims to fix this.
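
To see which TensorFlow you are actually running before consulting the matrix below, a quick check using standard APIs:

import tensorflow as tf
print(tf.__version__)  # compare against the TensorFlow column in the matrix below

The installed TFRA version can be checked with pip show tensorflow-recommenders-addons.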

Compatibility Matrix

GPU is supported by version 0.2.0 and later.

TFRA    TensorFlow   Compiler     CUDA   CUDNN   Compute Capability             CPU
0.4.0   2.5.1        GCC 7.3.1    11.2   8.1     6.0, 6.1, 7.0, 7.5, 8.0, 8.6   x86
0.4.0   2.5.0        Xcode 13.1   -      -       -                              Apple M1
0.3.1   2.5.1        GCC 7.3.1    11.2   8.1     6.0, 6.1, 7.0, 7.5, 8.0, 8.6   x86
0.2.0   2.4.1        GCC 7.3.1    11.0   8.0     6.0, 6.1, 7.0, 7.5, 8.0        x86
0.2.0   1.15.2       GCC 7.3.1    10.0   7.6     6.0, 6.1, 7.0, 7.5             x86
0.1.0   2.4.1        GCC 7.3.1    -      -       -                              x86

Check nvidia-support-matrix for more details.

NOTICE

  • The release packages have a strict version binding relationship with TensorFlow.
  • Due to significant changes in the TensorFlow API, we can only ensure that version 0.2.0 is compatible with TF 1.15.2 on CPU & GPU. There is no official release for this combination; you can only get it by compiling it yourself as follows:
PY_VERSION="3.7" \
TF_VERSION="1.15.2" \
TF_NEED_CUDA=1 \
sh .github/workflows/make_wheel_Linux_x86.sh

# .whl file will be created in ./wheelhouse/
  • If you need to work with TensorFlow 1.14.x or an older version, we suggest you give up, but maybe this doc can help you: Extract headers from TensorFlow compiling directory. At the same time, some OPs used by TFRA perform better on TensorFlow 2.x, so we highly recommend you update TensorFlow to 2.x.

Installing from Source

For all developers, we recommend using the development Docker containers, which are all GPU-enabled:

docker pull tfra/dev_container:latest-python3.8  # "3.7" and "3.9" are also available.
docker run --privileged --gpus all -it --rm -v $(pwd):$(pwd) tfra/dev_container:latest-python3.8
CPU Only

You can also install from source. This requires the Bazel build system (version == 5.1.1). Please install TensorFlow on the machine you compile on; the build reads the version and headers from the installed TensorFlow.

export TF_VERSION="2.5.1"  # "2.7.0", "2.5.1" are well tested.
pip install tensorflow[-gpu]==$TF_VERSION  # "[-gpu]" means the "-gpu" suffix is optional

git clone https://github.com/tensorflow/recommenders-addons.git
cd recommenders-addons

# This script links the project with the TensorFlow dependency
python configure.py

bazel build --enable_runfiles build_pip_pkg
bazel-bin/build_pip_pkg artifacts

pip install artifacts/tensorflow_recommenders_addons-*.whl
GPU Support

Only TF_NEED_CUDA=1 is required; the other environment variables are optional:

export TF_VERSION="2.5.1"  # "2.7.0", "2.5.1" are well tested.
export PY_VERSION="3.8" 
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=11.2
export TF_CUDNN_VERSION=8.1
export CUDA_TOOLKIT_PATH="/usr/local/cuda"
export CUDNN_INSTALL_PATH="/usr/lib/x86_64-linux-gnu"

python configure.py

And then build the pip package and install:

bazel build --enable_runfiles build_pip_pkg
bazel-bin/build_pip_pkg artifacts
pip install artifacts/tensorflow_recommenders_addons_gpu-*.whl
Apple Silicon Support (Beta Release)

Requirements:

  • macOS 12.0.0+
  • Python 3.8 or 3.9
  • tensorflow-macos 2.5.0
  • bazel 4.1.0+

Before installing TFRA from source, you need to install tensorflow-macos from Apple. Installing the natively supported version of tensorflow-macos requires a Conda environment.

After setting up the Conda environment, run the following commands in the terminal.

# Create a virtual environment
conda create -n $YOUR-ENVIRONMENT-NAME python=$PYTHON_VERSION

# Activate your environment
conda activate $YOUR-ENVIRONMENT-NAME

# Install TensorFlow macOS dependencies via miniforge
conda install -c apple tensorflow-deps==2.5.0

# Install base TensorFlow
python -m pip install tensorflow-macos==2.5.0

# Install TensorFlow Recommenders Addons from the PyPI distribution (optional)
python -m pip install tensorflow-recommenders-addons --no-deps

Our instructions differ from the official tensorflow-macos installation instructions because this build requires Python 3.8 or 3.9 and tensorflow-macos 2.5.0.

The build script has been tested on macOS Monterey. If you are using macOS Big Sur, you may need to customize the build script.

# Build arm64 wheel from source
PY_VERSION=$PYTHON_VERSION TF_VERSION="2.5.0" TF_NEED_CUDA="0" sh .github/workflows/make_wheel_macOS_arm64.sh

# Install
python -m pip install --no-deps ./artifacts/*.whl

NOTICE:

  • The Apple silicon version of TFRA doesn't support the float16 data type; this may be fixed in a future release.
Data Type Matrix for tfra.dynamic_embedding.Variable

Values \ Keys   int64      int32      string
float           CPU, GPU   CPU, GPU   CPU
half            CPU, GPU   -          CPU
int32           CPU, GPU   CPU        CPU
int8            CPU, GPU   -          CPU
int64           CPU        -          CPU
double          CPU, GPU   CPU        CPU
bool            -          -          CPU
string          CPU        -          -
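
For example, reading the matrix above: string keys are CPU-only, so a string-keyed table must live on CPU. A sketch (the name is illustrative):

table = tfra.dynamic_embedding.get_variable(
    name="item_embeddings",  # illustrative name
    key_dtype=tf.string,     # string keys: CPU only, per the matrix above
    value_dtype=tf.float32,
    dim=16)
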
To use GPU with tfra.dynamic_embedding.Variable

tfra.dynamic_embedding.Variable ignores TensorFlow's device placement mechanism, so you must explicitly place it on GPUs by specifying the devices yourself:

import tensorflow as tf
import tensorflow_recommenders_addons as tfra

de = tfra.dynamic_embedding.get_variable("VariableOnGpu",
                                         devices=["/job:ps/task:0/GPU:0", ],
                                         # ...
                                         )

Usage restrictions on GPU

  • Works only on NVIDIA GPUs with CUDA compute capability 6.0 or higher.
  • To keep the size of the .whl file down, dim currently only supports values less than or equal to 200; if you need a larger dim, please submit an issue.
  • Only the dynamic_embedding APIs and related OPs support running on GPU.
  • Because GPU HashTables manage GPU memory independently, TensorFlow should be configured to allow GPU memory growth by the following (a TF2 variant is sketched after this list):
sess_config.gpu_options.allow_growth = True
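
The line above assumes a Session-based (TF1-style) setup. With TF2 eager execution, an equivalent using standard TensorFlow APIs is:

import tensorflow as tf

# Enable memory growth on every visible GPU before any GPU memory is allocated.
for gpu in tf.config.list_physical_devices("GPU"):
  tf.config.experimental.set_memory_growth(gpu, True)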

Inference with TensorFlow Serving

Compatibility Matrix

TFRA    TensorFlow   TF Serving   Compiler    CUDA   CUDNN   Compute Capability
0.4.0   2.5.1        2.5.2        GCC 7.3.1   11.2   8.1     6.0, 6.1, 7.0, 7.5, 8.0, 8.6
0.3.1   2.5.1        2.5.2        GCC 7.3.1   11.2   8.1     6.0, 6.1, 7.0, 7.5, 8.0, 8.6
0.2.0   2.4.1        2.4.0        GCC 7.3.1   11.0   8.0     6.0, 6.1, 7.0, 7.5, 8.0
0.2.0   1.15.2       1.15.0       GCC 7.3.1   10.0   7.6     6.0, 6.1, 7.0, 7.5
0.1.0   2.4.1        2.4.0        GCC 7.3.1   -      -       -

NOTICE: Reference documents: https://www.tensorflow.org/tfx/serving/custom_op

Serving TensorFlow models with custom ops on CPU or GPU

When compiling, set the environment variable:

export FOR_TF_SERVING="1"

TensorFlow Serving modification (model_servers/BUILD):

SUPPORTED_TENSORFLOW_OPS = if_v2([]) + if_not_v2([
    "@org_tensorflow//tensorflow/contrib:contrib_kernels",
    "@org_tensorflow//tensorflow/contrib:contrib_ops_op_lib",
]) + [
    "@org_tensorflow_text//tensorflow_text:ops_lib",
    "//tensorflow_recommenders_addons/dynamic_embedding/core:_cuckoo_hashtable_ops.so",
    "//tensorflow_recommenders_addons/dynamic_embedding/core:_math_ops.so",
]

NOTICE

  • Distributed inference is only supported when using Redis as Key-Value storage (see the sketch below).
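
A sketch of selecting Redis as the storage backend. The RedisTableConfig, RedisTableCreator, and kv_creator names below follow later TFRA releases and are assumptions here; the config path and variable name are illustrative:

import tensorflow_recommenders_addons as tfra

# Point the table at a Redis service described by a config file (path is illustrative).
redis_config = tfra.dynamic_embedding.RedisTableConfig(
    redis_config_abs_dir="redis.config")
table = tfra.dynamic_embedding.get_variable(
    name="embeddings",  # illustrative name
    dim=8,
    kv_creator=tfra.dynamic_embedding.RedisTableCreator(redis_config))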

Community

Acknowledgment

We are very grateful to the maintainers of tensorflow/addons; we borrowed a lot of code from tensorflow/addons to build our workflow and documentation system. We also want to extend a thank you to the Google team members who have helped with CI setup and reviews!

License

Apache License 2.0
