Difference Between torch.tensor() and torch.Tensor() in PyTorch

This is a quick reference note on the differences between torch.tensor() and torch.Tensor().

Introduction

When creating tensors in PyTorch, the torch.tensor() method is commonly used.

However, it also appears possible to create tensors by calling the class constructor directly using torch.Tensor().

I wasn't entirely clear on the differences between these two approaches:

- torch.tensor()
- torch.Tensor()

So here's a note for future reference.

Note: This article was translated from my original post.

Differences Between torch.tensor and torch.Tensor

In Short

Use torch.tensor() as your default choice.
It's convenient because it automatically infers the data type.

There's currently no compelling reason to use torch.Tensor().

In Detail

Let's start by looking at how they behave.

import torch

x = torch.tensor([1, 2, 3])
print(x)
print(x.dtype)

X = torch.Tensor([1, 2, 3])
print(X)
print(X.dtype)

# Output
# tensor([1, 2, 3])
# torch.int64
# tensor([1., 2., 3.])
# torch.float32

When creating a tensor with torch.tensor([1, 2, 3]), the data type is torch.int64, but with torch.Tensor([1, 2, 3]), the data type becomes torch.float32.

This is because torch.tensor() infers the dtype from the input data, whereas torch.Tensor() is the constructor of the default tensor type (torch.FloatTensor out of the box), so it always returns torch.float32 unless you change that default.
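
For instance, the inferred dtype simply follows the values you pass in. A minimal check, runnable with the same import as above:

print(torch.tensor([1.0, 2.0, 3.0]).dtype)  # torch.float32, inferred from floats
print(torch.tensor([True, False]).dtype)    # torch.bool, inferred from bools
print(torch.Tensor([1.0, 2.0, 3.0]).dtype)  # torch.float32, always the default float type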

Of course, when using torch.tensor(), you can also explicitly specify the data type with the dtype argument.

y = torch.tensor([1, 2, 3], dtype=torch.float32)
print(y)
print(y.dtype)

# Output
# tensor([1., 2., 3.])
# torch.float32

So in general, using torch.tensor() provides more flexibility.
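
As a small sketch of that flexibility, torch.tensor() also accepts device and requires_grad alongside dtype (the device line below is guarded so it only targets a GPU when one is available):

z = torch.tensor([1, 2, 3], dtype=torch.float32, requires_grad=True)
print(z.dtype)          # torch.float32
print(z.requires_grad)  # True

device = "cuda" if torch.cuda.is_available() else "cpu"
g = torch.tensor([4, 5, 6], device=device)
print(g.device)         # cuda:0 or cpu, depending on your machine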

The torch.Tensor documentation also states that torch.tensor() is recommended when creating a tensor from pre-existing data:

To create a tensor with pre-existing data, use torch.tensor().

Note: Creating Empty Tensors

At first glance, it seems you can't create an empty tensor with torch.tensor(): calling it with no arguments raises an error.

empty_err = torch.tensor()
print(empty_err)
print(empty_err.dtype)

# Output: Error
Traceback (most recent call last):
  File "/workspaces/python-examples/torch_tensor/main.py", line 25, in <module>
    empty_err = torch.tensor()
TypeError: tensor() missing 1 required positional arguments: "data"

# torch.Tensor() doesn't raise an error
empty = torch.Tensor()
print(empty)
print(empty.dtype)

# Output
tensor([])
torch.float32

Does this mean torch.Tensor() is better for creating empty tensors? Not necessarily.

You can create an empty tensor using torch.tensor(()).

empty = torch.tensor(())
print(empty)
print(empty.dtype)

# Output
tensor([])
torch.float32
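
If you want to double-check that the result really is empty, a quick sanity check is to look at its shape and element count:

empty = torch.tensor(())
print(empty.shape)   # torch.Size([0])
print(empty.numel()) # 0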

Conclusion

That's a quick note on the differences between torch.tensor and torch.Tensor.

I hope this is helpful to someone out there.

References