fastdev.xform.warp_transforms
=============================

.. py:module:: fastdev.xform.warp_transforms




Module Contents
---------------

.. py:function:: transform_points(pts: jaxtyping.Float[torch.Tensor, ... n 3], tf_mat: jaxtyping.Float[torch.Tensor, ... 4 4]) -> jaxtyping.Float[torch.Tensor, ... n 3]

   Apply a transformation matrix to a set of 3D points.

   :param pts: 3D points in shape [... n 3].
   :type pts: torch.Tensor
   :param tf_mat: Transformation matrix in shape [... 4 4].
   :type tf_mat: torch.Tensor

   :returns: Transformed points in shape [... n 3].
   :rtype: torch.Tensor

   .. rubric:: Examples

   >>> pts = torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
   >>> tf_mat = torch.tensor([[0.0, 1.0, 0.0, 1.0], [0.0, 0.0, 1.0, 2.0], [1.0, 0.0, 0.0, 3.0], [0.0, 0.0, 0.0, 1.0]])
   >>> transform_points(pts, tf_mat)
   tensor([[3., 5., 4.],
           [6., 8., 7.]])

   .. note::
       `pts` and `tf_mat` must have the same number of dimensions. The batch dimensions (...) are broadcasted_
       (and thus must be broadcastable). We do not adopt the shapes [... 3] and [... 4 4] because PyTorch has no
       true broadcasted vector-matrix multiplication; [... 3] would have to be expanded to [... 1 3] anyway and
       multiplied with [... 4 4] in a broadcasted matrix-matrix multiplication.

   .. _broadcasted: https://pytorch.org/docs/stable/notes/broadcasting.html
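
   As the note above describes, the batch dimensions broadcast in the usual PyTorch way. A minimal sketch of a
   batched call, assuming only the documented broadcasting behaviour (the identity transform is just a placeholder
   value):

   >>> pts = torch.rand(2, 5, 3)              # two sets of five points
   >>> tf_mat = torch.eye(4).expand(2, 4, 4)  # the same transform applied to each set
   >>> transform_points(pts, tf_mat).shape
   torch.Size([2, 5, 3])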


.. py:function:: rotate_points(pts: jaxtyping.Float[torch.Tensor, ... n 3], tf_mat: jaxtyping.Float[torch.Tensor, ... 3 3]) -> jaxtyping.Float[torch.Tensor, ... n 3]

   Apply a rotation matrix to a set of 3D points.

   :param pts: 3D points in shape [... n 3].
   :type pts: torch.Tensor
   :param tf_mat: Rotation matrix in shape [... 3 3].
   :type tf_mat: torch.Tensor

   :returns: Rotated points in shape [... n 3].
   :rtype: torch.Tensor
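
   .. rubric:: Examples

   A minimal usage sketch; the points and the rotation block match the ``transform_points`` example above, so the
   expected output assumes the same rotation convention (each point is multiplied by the rotation matrix):

   >>> pts = torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
   >>> rot_mat = torch.tensor([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
   >>> rotate_points(pts, rot_mat)
   tensor([[2., 3., 1.],
           [5., 6., 4.]])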