
JIT Compiler, Faster Distributed, C++ Frontend

Released by @soumith on 07 Dec 19:19

Table of Contents

  • Highlights
    • JIT
    • Brand New Distributed Package
    • C++ Frontend [API Unstable]
    • Torch Hub
  • Breaking Changes
  • Additional New Features
    • N-dimensional empty tensors
    • New Operators
    • New Distributions
    • Sparse API Improvements
    • Additions to existing Operators and Distributions
  • Bug Fixes
    • Serious
    • Backwards Compatibility
    • Correctness
    • Error checking
    • Miscellaneous
  • Other Improvements
  • Deprecations
    • CPP Extensions
  • Performance
  • Documentation Improvements

Highlights

JIT

The JIT is a set of compiler tools for bridging the gap between research in PyTorch
and production. It allows for the creation of models that can run without a dependency on the Python interpreter and that can be optimized more aggressively. Using program annotations, existing models can be transformed into Torch Script, a subset of Python that PyTorch can run directly. Model code is still valid Python code and can be debugged with the standard Python toolchain. PyTorch 1.0 provides two ways to make your existing code compatible with the JIT: torch.jit.trace and torch.jit.script. Once annotated, Torch Script code can be aggressively optimized, and it can be serialized for later use in our new C++ API, which doesn't depend on Python at all.

# Write in Python, run anywhere!
import torch

@torch.jit.script
def RNN(x, h, W_h, U_h, b_h):
    # Unroll a simple RNN cell over the first (time) dimension of x.
    y = []
    for t in range(x.size(0)):
        h = torch.tanh(x[t] @ W_h + h @ U_h + b_h)
        y += [h]
    # Return all hidden states stacked along time, plus the final state.
    return torch.stack(y), h
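
The other path, torch.jit.trace, takes an existing model plus example inputs, records the operations that run, and produces a Torch Script module. A minimal sketch (the torchvision model and file name here are illustrative, not from the release notes):

import torch
import torchvision

# Trace a model by running it once on example input and recording the ops.
model = torchvision.models.resnet18()
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# The traced module serializes to a single file that the C++ API can load,
# with no dependency on the Python interpreter.
traced.save("model.pt")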

As an example, see a tutorial on deploying a seq2seq model, a tutorial on loading an exported model from C++, or browse the docs.

Brand New Distributed Package

The torch.distributed package and torch.nn.parallel.DistributedDataParallel module are backed by a brand new re-designed distributed library. The main highlights of the new library are:

  • The new torch.distributed is performance-driven and operates entirely asynchronously for all backends: Gloo, NCCL, and MPI.
  • Significant Distributed Data Parallel performance improvements, especially for hosts with slower networks such as Ethernet-based hosts.
  • Adds async support for all distributed collective operations in the torch.distributed package (a minimal sketch follows this list).
  • Adds the following CPU ops in the Gloo backend: send, recv, reduce, all_gather, gather, scatter
  • Adds barrier op in the NCCL backend
  • Adds new_group support for the NCCL backend
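
As a rough sketch of the new async collectives (the Gloo process-group setup below is illustrative and assumes environment-variable initialization, i.e. MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE are set):

import torch
import torch.distributed as dist

# Join the process group; "env://" reads rank/world size from the environment.
dist.init_process_group(backend="gloo", init_method="env://")

tensor = torch.ones(4)

# async_op=True returns a work handle immediately instead of blocking.
work = dist.all_reduce(tensor, op=dist.ReduceOp.SUM, async_op=True)

# ... overlap communication with other computation here ...

work.wait()  # block until the all_reduce has completed on this process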

C++ Frontend [API Unstable]

The C++ frontend is a pure C++ interface to the PyTorch backend that follows the API and architecture of the established Python frontend. It is intended to enable research in high-performance, low-latency, and bare-metal C++ applications. It provides equivalents to torch.nn, torch.optim, torch.data, and other components of the Python frontend. Here is a minimal side-by-side comparison of the two language frontends:

Python:

import torch

model = torch.nn.Linear(5, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
prediction = model.forward(torch.randn(3, 5))
loss = torch.nn.functional.mse_loss(prediction, torch.ones(3, 1))
loss.backward()
optimizer.step()

C++:

#include <torch/torch.h>

torch::nn::Linear model(5, 1);
torch::optim::SGD optimizer(model->parameters(), /*lr=*/0.1);
torch::Tensor prediction = model->forward(torch::randn({3, 5}));
auto loss = torch::mse_loss(prediction, torch::ones({3, 1}));
loss.backward();
optimizer.step();

We are releasing the C++ frontend marked as "API Unstable" as part of PyTorch 1.0. This means it is ready to be used for your research application, but still has some open construction sites that will stabilize over the next couple of releases. Some parts of the API may undergo breaking changes during this time.

See https://pytorch.org/cppdocs for detailed documentation on the greater PyTorch C++ API as well as the C++ frontend.

Torch Hub

Torch Hub is a pre-trained model repository designed to facilitate research reproducibility.

Torch Hub supports publishing pre-trained models (model definitions and pre-trained weights) to a GitHub repository using a simple hubconf.py file; see the hubconf for ResNet models in pytorch/vision as an example. Once published, users can load the pre-trained models using the torch.hub.load API.
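
As a rough sketch of both sides of the workflow (the entrypoint below mirrors the pytorch/vision ResNet example; treat the exact names as illustrative):

# hubconf.py in the publishing repository
dependencies = ['torch']

def resnet18(pretrained=False, **kwargs):
    # Entrypoint: construct the model, optionally loading pre-trained weights.
    from torchvision.models import resnet18 as _resnet18
    return _resnet18(pretrained=pretrained, **kwargs)

Consumers then load the published model in one line:

import torch

# Fetches the repo, reads its hubconf.py, and calls the named entrypoint.
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)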

For more details, see the torch.hub documentation. Expect a more detailed blog post introducing Torch Hub in the near future!

Breaking Changes

  • Indexing a 0-dimensional tensor now throws an error instead of a warning. Use tensor.item() instead. (#11679).
  • torch.legacy is removed. (#11823).
  • torch.masked_copy_ is removed, use torch.masked_scatter_ instead. (#9817).
  • Operations that result in 0-element tensors may return changed shapes.
    • Before: all 0-element tensors would collapse to shape (0,). For example, torch.nonzero is documented to return a tensor of shape (n, z), where n = number of nonzero elements and z = number of dimensions of the input, but it would always return a tensor of shape (0,) when no nonzero elements existed.
    • Now: operations return their documented shape.
      # Previously: all 0-element tensors are collapsed to shape (0,)
      >>> torch.nonzero(torch.zeros(2, 3))
      tensor([], dtype=torch.int64)
      
      # Now, proper shape is returned
      >>> torch.nonzero(torch.zeros(2, 3))
      tensor([], size=(0, 2), dtype=torch.int64)
      
  • Sparse tensor indices and values shape invariants are changed to be more consistent in the case of 0-element tensors. See link for more details. (#9279).
  • torch.distributed: the TCP backend is removed; we recommend using the Gloo or MPI backends for CPU collectives and the NCCL backend for GPU collectives.
  • Some inter-type operations (e.g. *) between torch.Tensors and NumPy arrays will now favor dispatching to the torch variant. This may result in different return types. (#9651).
  • Implicit numpy conversion no longer implicitly moves a tensor to CPU. Therefore, you may have to explicitly move a CUDA tensor to CPU (tensor.to('cpu')) before an implicit conversion. (#10553).
  • torch.randint now defaults to using dtype torch.int64 rather than the default floating-point dtype. (#11040).
  • The torch.tensor function with a Tensor argument now returns a detached Tensor (i.e. a Tensor where grad_fn is None). This more closely aligns with the intent of the function, which is to return a Tensor with copied data and no history; see the sketch after this list. (#11061, #11815).
  • torch.nn.functional.multilabel_soft_margin_loss now returns Tensors of shape (N,) instead of (N, C) to match the behavior of torch.nn.MultiMarginLoss. It is also more numerically stable. (#9965).
  • The result type of a torch.float16 0-dimensional tensor and an integer is now torch.float16 (it was previously torch.float32 or torch.float64, depending on the dtype of the integer). (#11941).
  • Dirichlet and Categorical distributions no longer accept scalar parameters. (#11589).
  • CPP Extensions: Deprecated factory functions that accept a type as the first argument and a size as the second argument have been removed. Instead, use the new-style factory functions that accept the size as the first argument and TensorOptions as the last argument. For example, replace your call to at::ones(torch::CPU(at::kFloat), {2, 3}) with torch::ones({2, 3}, at::kCPU). This applies to the following functions:
    • arange, empty, eye, full, linspace, logspace, ones, rand, randint, randn, randperm, range, zeros.
  • torch.potrf is renamed to torch.cholesky, and it has a new default (upper=False). (#12699).
  • Renamed elementwise_mean to mean for loss-reduction functions. (#13419).
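
For instance, the new torch.tensor copying behavior can be checked directly (a minimal sketch):

import torch

t = torch.ones(3, requires_grad=True)
source = t * 2               # carries history: source.grad_fn is not None

copy = torch.tensor(source)  # copies data, drops history
print(copy.grad_fn)          # None
print(copy.requires_grad)    # False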

Additional New Features

N-dimensional empty tensors

  • Tensors with 0 elements can now have an arbitrary number of dimensions and support indexing and other torch operations; previously, 0-element tensors were limited to shape (0,). (#9947). Example:
    >>> torch.empty((0, 2, 4, 0), dtype=torch.float64)
    tensor([], size=(0, 2, 4, 0), dtype=torch.float64)
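
    Indexing and reductions on such tensors follow the usual shape rules; a small sketch:
    >>> x = torch.empty(0, 2, 4, 0)
    >>> x[:, 0].shape
    torch.Size([0, 4, 0])
    >>> x.sum(dim=1).shape
    torch.Size([0, 4, 0])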
    

New Operators

New Distributions

Sparse API Improvements

Additions to existing Operators and Distributions

Bug Fixes

Serious

Backwards Compatibility

  • torch.nn.Module load_from_state_dict now correctly handles 1-dimensional vs 0-dimensional tensors saved from 0.3 versions. (#9781).
  • Fix RuntimeError: storages don't support slicing when loading models saved with PyTorch 0.3. (#11314).
  • BCEWithLogitsLoss: fixed an issue with legacy reduce parameter. (#12689).

Correctness

Error checking

Miscellaneous

Other Improvements

Deprecations

CPP Extensions

  • The torch/torch.h header is deprecated in favor of torch/extension.h, which should be used in all C++ extensions going forward. Including torch/torch.h from a C++ extension will produce a warning. It is safe to batch replace torch/torch.h with torch/extension.h.
  • Usage of the following functions in C++ extensions is also deprecated:
    • torch::set_requires_grad. Replacement: at::Tensor now has a set_requires_grad method.
    • torch::requires_grad. Replacement: at::Tensor now has a requires_grad method.
    • torch::getVariableType. Replacement: None.
  • Fix version.groups() (#14505)
  • Allow building libraries with setuptools that don't have an ABI suffix (#14130)
  • Add a missing .decode() after check_output in cpp_extensions (#13935)

torch.distributed

Performance

Documentation Improvements