Wednesday, May 1, 2024

PyTorch Tensor to NumPy Array and Back


NumPy to PyTorch

PyTorch is designed to be fairly compatible with NumPy. Because of this, converting a NumPy array to a PyTorch tensor is straightforward:

import torch
import numpy as np

x = np.eye(3)

torch.from_numpy(x)

# Expected result
# tensor([[1., 0., 0.],
#         [0., 1., 0.],
#         [0., 0., 1.]], dtype=torch.float64)

All you have to do is use the torch.from_numpy() function.
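One thing to notice in the output above is the dtype=torch.float64. NumPy defaults to 64-bit floats, while PyTorch's default floating-point type is float32, so from_numpy() keeps the array's wider dtype. A quick check (worth verifying on your own install):

print(np.eye(3).dtype)                    # float64
print(torch.eye(3).dtype)                 # torch.float32
print(torch.from_numpy(np.eye(3)).dtype)  # torch.float64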

Once the tensor is in PyTorch, you may want to change the data type:

x = np.eye(3)

torch.from_numpy(x).type(torch.float32)

# Expected result
# tensor([[1., 0., 0.],
#         [0., 1., 0.],
#         [0., 0., 1.]])

All you have to do is call the .type() method. Easy enough.
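Equivalently, the .to() method also accepts a dtype, not just a device, so the following sketch should give the same result as calling .type(torch.float32):

x = np.eye(3)

# .to() with a dtype argument converts the data type, just like .type()
torch.from_numpy(x).to(torch.float32)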

Or, you may want to send the tensor to a different device, like your GPU:

x = np.eye(3)

torch.from_numpy(x).to("cuda")

# Expected result
# tensor([[1., 0., 0.],
#         [0., 1., 0.],
#         [0., 0., 1.]], device='cuda:0', dtype=torch.float64)

The .to() method sends a tensor to a different device. Note: the above only works if you're running a version of PyTorch that was compiled with CUDA and you have an Nvidia GPU in your machine. You can check whether that's true with torch.cuda.is_available().
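A common pattern is to choose the device at runtime, so the same code runs whether or not a GPU is present. A minimal sketch:

# Fall back to the CPU when CUDA isn't available
device = "cuda" if torch.cuda.is_available() else "cpu"

x = np.eye(3)
torch.from_numpy(x).to(device)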

PyTorch to NumPy

Going the other direction is slightly more involved because you'll sometimes need to deal with two differences between a PyTorch tensor and a NumPy array:

  1. PyTorch can target different devices (like GPUs).
  2. PyTorch supports automatic differentiation.

In the simplest case, if you have a PyTorch tensor without gradients on a CPU, you can simply call the .numpy() method:

x = torch.eye(3)

x.numpy()

# Expected result
# array([[1., 0., 0.],
#        [0., 1., 0.],
#        [0., 0., 1.]], dtype=float32)
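One detail worth knowing (long-standing behavior, but worth double-checking on your version): for CPU tensors, .numpy() returns a view that shares memory with the tensor, so modifying one modifies the other:

x = torch.eye(3)
a = x.numpy()

x[0, 0] = 7.0   # modify the tensor in place
print(a[0, 0])  # 7.0 -- the array sees the change because memory is shared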

However, if the tensor is part of a computation graph that requires a gradient (that is, if x.requires_grad is True), you will need to call the .detach() method first:

x = torch.eye(3)
x.requires_grad = True

x.detach().numpy()

# Expected result
# array([[1., 0., 0.],
#        [0., 1., 0.],
#        [0., 0., 1.]], dtype=float32)
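If you skip .detach(), the conversion fails outright. Roughly what to expect (the exact error message may vary between PyTorch versions):

x = torch.eye(3, requires_grad=True)

try:
    x.numpy()
except RuntimeError as e:
    # Something like: "Can't call numpy() on Tensor that requires grad.
    # Use tensor.detach().numpy() instead."
    print(e)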

And if the tensor is on a device other than "cpu", you will need to bring it back to the CPU before you can call the .numpy() method. We saw this above when sending a tensor to the GPU with .to("cuda"). Now, we simply go in reverse:

x = torch.eye(3)
x = x.to("cuda")

x.to("cpu").numpy()

# Expected result
# array([[1., 0., 0.],
#        [0., 1., 0.],
#        [0., 0., 1.]], dtype=float32)
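As an aside, Tensor.cpu() is shorthand for the same move, so this should be equivalent (again, only on a CUDA-enabled setup):

x = torch.eye(3).to("cuda")

# .cpu() copies the tensor back to host memory, equivalent to .to("cpu") here
x.cpu().numpy()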

Both the .detach() method and the .to("cpu") method are idempotent. So, if you want to, you can plan on calling them every time you convert a PyTorch tensor to a NumPy array, even when it's not strictly necessary:

x = torch.eye(3)

x.detach().to("cpu").numpy()

# Expected result
# array([[1., 0., 0.],
#        [0., 1., 0.],
#        [0., 0., 1.]], dtype=float32)
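If you convert tensors often, it can be convenient to wrap the full pattern in a small helper. The to_numpy name below is just for illustration, not part of PyTorch:

def to_numpy(t: torch.Tensor) -> np.ndarray:
    """Convert a tensor to a NumPy array, regardless of gradient state or device."""
    return t.detach().to("cpu").numpy()

to_numpy(torch.eye(3, requires_grad=True))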

By the way, if you want to perform image transforms on a NumPy array directly, you can! All you need is a transform that accepts NumPy arrays as input. Take a look at my post on TorchVision transforms.
