fastdev.utils
=============

.. py:module:: fastdev.utils


Submodules
----------

.. toctree::
   :maxdepth: 1

   /api/fastdev/utils/cuda/index
   /api/fastdev/utils/model_summary/index
   /api/fastdev/utils/profile/index
   /api/fastdev/utils/seed/index
   /api/fastdev/utils/struct/index
   /api/fastdev/utils/tensor/index
   /api/fastdev/utils/tui/index


Package Contents
----------------

.. py:function:: cuda_toolkit_available() -> bool

   Check if nvcc is available on the machine.


.. py:function:: current_cuda_arch() -> str

   Get the current CUDA architecture.


.. py:function:: summarize_model(model: torch.nn.Module, max_depth: int = 1)

   Summarize a model's modules and parameters, up to ``max_depth`` levels of nesting.


.. py:class:: cuda_timeit(print_tmpl: Optional[str] = None)

   Bases: :py:obj:`timeit`

   Measure the time of a block of code that may involve CUDA operations. We use CUDA
   events and synchronization for accurate measurements.

   :param print_tmpl: The template to print the time. Defaults to None. Can be a string
                      with a placeholder for the time, e.g., "func foo costs {:.5f} s",
                      or a string without a placeholder, e.g., "func foo".
   :type print_tmpl: str, optional

   .. py:method:: __enter__()

   .. py:method:: __exit__(exec_type, exec_value, traceback)


.. py:class:: timeit(fn_or_print_tmpl: Optional[Union[Callable, str]] = None)

   Measure the time of a block of code.

   :param fn_or_print_tmpl: The function to wrap (when used as a bare decorator) or the
                            template to print the time. Defaults to None. The template can
                            be a string with a placeholder for the time, e.g.,
                            "func foo costs {:.5f} s", or a string without a placeholder,
                            e.g., "func foo".
   :type fn_or_print_tmpl: Callable or str, optional

   .. rubric:: Examples

   >>> with timeit():
   ...     time.sleep(1)
   it costs 1.00000 s
   >>> @timeit
   ... def foo():
   ...     time.sleep(1)
   foo costs 1.00000 s
   >>> @timeit("func foo")
   ... def foo():
   ...     time.sleep(1)
   func foo costs 1.00000 s

   .. py:method:: __enter__()

   .. py:method:: __exit__(exec_type, exec_value, traceback)

   .. py:method:: __call__(func: T) -> T


.. py:function:: seed_everything(seed: int, deterministic: bool = False)

   Seed all random number generators.

   :param seed: Seed to be used.
   :type seed: int
   :param deterministic: Whether to set the deterministic option for the cuDNN backend,
                         i.e., set ``torch.backends.cudnn.deterministic`` to True and
                         ``torch.backends.cudnn.benchmark`` to False. Default: False.
   :type deterministic: bool


.. py:function:: list_to_packed(x: List[torch.Tensor]) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]

   Transforms a list of N tensors, each of shape (Mi, K, ...), into a single tensor of
   shape (sum(Mi), K, ...).

   :param x: list of tensors.
   :returns: 4-element tuple containing

             - x_packed: tensor consisting of packed input tensors along the
               1st dimension.
             - num_items: tensor of shape N containing Mi for each element in x.
             - item_packed_first_idx: tensor of shape N indicating the index of
               the first item belonging to the same element in the original list.
             - item_packed_to_list_idx: tensor of shape sum(Mi) containing the
               index of the element in the list the item belongs to.
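   The four return values are easiest to remember through a small round trip. The
   sketch below is written purely from the return-value descriptions above; the exact
   tensor contents are illustrative, not taken from the library's tests:

   .. code-block:: python

      import torch
      from fastdev.utils import list_to_packed

      # Two tensors with different first dimensions but a shared trailing shape (K = 3).
      x = [torch.zeros(2, 3), torch.ones(4, 3)]

      x_packed, num_items, first_idx, to_list_idx = list_to_packed(x)

      assert x_packed.shape == (6, 3)                     # sum(Mi) = 2 + 4
      assert num_items.tolist() == [2, 4]                 # Mi for each list element
      assert first_idx.tolist() == [0, 2]                 # where each element starts
      assert to_list_idx.tolist() == [0, 0, 1, 1, 1, 1]   # owner of each packed row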
.. py:function:: list_to_padded(x: Union[List[torch.Tensor], Tuple[torch.Tensor]], pad_size: Union[Sequence[int], None] = None, pad_value: Union[float, int] = 0.0, equisized: bool = False) -> torch.Tensor

   Transforms a list of N tensors, each of shape (Si_0, Si_1, ..., Si_D), into:

   - a single tensor of shape (N, pad_size(0), pad_size(1), ..., pad_size(D))
     if pad_size is provided,
   - or a tensor of shape (N, max(Si_0), max(Si_1), ..., max(Si_D)) if pad_size is None.

   :param x: list of tensors.
   :param pad_size: list(int) specifying the size of the padded tensor. If `None`
                    (default), the largest size of each dimension is used as the `pad_size`.
   :param pad_value: float value used to fill the padded tensor.
   :param equisized: bool indicating whether the items in x are of equal size
                     (sometimes this is known, and providing it saves computation).
   :returns: tensor consisting of padded input tensors stored in newly allocated memory.
   :rtype: x_padded


.. py:function:: packed_to_list(x: torch.Tensor, split_size: Union[Sequence[int], int]) -> List[torch.Tensor]

   Transforms a tensor of shape (sum(Mi), K, L, ...) into N tensors of shape
   (Mi, K, L, ...), where the Mi's are defined by split_size.

   :param x: tensor.
   :param split_size: list, tuple or int defining the number of items for each tensor
                      in the output list.
   :returns: a list of tensors.
   :rtype: x_list


.. py:function:: padded_to_list(x: torch.Tensor, split_size: Union[Sequence[int], None] = None, dim: int = 0) -> List[torch.Tensor]

   Transforms a padded tensor of shape (N, S_1, S_2, ..., S_D) into a list of N tensors
   of shape:

   - (Si_1, Si_2, ..., Si_D) where (Si_1, Si_2, ..., Si_D) is specified in split_size(i),
   - or (S_1, S_2, ..., S_D) if split_size is None,
   - or (Si_1, S_2, ..., S_D) if split_size(i) is an integer.

   :param x: tensor.
   :param split_size: optional 1D list/tuple of ints defining the number of items for
                      each tensor.
   :returns: a list of tensors sharing the memory with the input.
   :rtype: x_list


.. py:function:: padded_to_packed(x: torch.Tensor, split_size: Union[list, tuple], dim: int = 0)

   Transforms a padded tensor of shape (..., N, M, ...) into a packed tensor of shape:

   - (..., sum(split_size), ...) if split_size is provided,
   - (..., N * M, ...) if split_size is None.

   :param x: tensor of shape (..., N, M, ...).
   :param split_size: list or tuple defining the number of items for each tensor in the
                      output list.
   :param dim: the `N` dimension in the input tensor.
   :returns: a packed tensor.
   :rtype: x_packed


.. py:function:: atleast_nd(tensor: None, expected_ndim: int) -> None
                 atleast_nd(tensor: numpy.ndarray, expected_ndim: int) -> numpy.ndarray
                 atleast_nd(tensor: torch.Tensor, expected_ndim: int) -> torch.Tensor

   Convert input to an at-least-nD tensor.

   .. note::
      Unlike `np.atleast_nd` and `torch.atleast_nd`, this function can add dimensions
      to the front or the back of the tensor.


.. py:function:: auto_cast(fn: Optional[Callable] = None, return_type: Literal['by_input', 'by_func', 'pt', 'np'] = 'by_input') -> Callable

   Automatically cast the input and output of a function to numpy or torch tensors.

   Since the function simply converts the input and output to numpy or torch tensors,
   it may introduce overhead. It is recommended only for functions that are not
   performance-critical.

   :param fn: Function to be wrapped.
   :type fn: Callable
   :param return_type: Type of the return value. Defaults to "by_input".

                       - "by_input": determined by the type of the input arguments;
                         the first array/tensor type found is used.
                       - "by_func": determined by the original function.
                       - "pt": torch.Tensor.
                       - "np": np.ndarray.
   :type return_type: Literal["by_input", "by_func", "pt", "np"], optional
   :returns: Wrapped function.
   :rtype: Callable


.. py:function:: to_number(x: None) -> None
                 to_number(x: int) -> int
                 to_number(x: float) -> float
                 to_number(x: numpy.ndarray) -> Union[int, float]
                 to_number(x: torch.Tensor) -> Union[int, float]

   Convert input to a number.

   :param x: Input to be converted.
   :type x: Any
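   To make the padded layout concrete, here is a short round trip through
   ``list_to_padded`` and ``padded_to_list``, written purely from the shape rules
   documented above:

   .. code-block:: python

      import torch
      from fastdev.utils import list_to_padded, padded_to_list

      x = [torch.ones(2, 3), torch.ones(4, 3)]

      # pad_size is None, so pad to the per-dimension maxima:
      # (N, max(Si_0), max(Si_1)) = (2, 4, 3), filled with pad_value.
      x_padded = list_to_padded(x, pad_value=0.0)
      assert x_padded.shape == (2, 4, 3)

      # Recover the original list by passing the true first-dimension sizes.
      x_list = padded_to_list(x_padded, split_size=[2, 4])
      assert [t.shape for t in x_list] == [(2, 3), (4, 3)]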
.. py:function:: to_numpy(x: torch.Tensor, preserve_list: bool = ...) -> numpy.ndarray
                 to_numpy(x: numpy.ndarray, preserve_list: bool = ...) -> numpy.ndarray
                 to_numpy(x: numpy.typing.ArrayLike, preserve_list: bool = ...) -> numpy.ndarray
                 to_numpy(x: None, preserve_list: bool = ...) -> None
                 to_numpy(x: Dict[Any, Any], preserve_list: bool = ...) -> Dict[Any, numpy.ndarray]
                 to_numpy(x: List[Any], preserve_list: Literal[True]) -> List[numpy.ndarray]
                 to_numpy(x: List[Any], preserve_list: Literal[False] = ...) -> numpy.ndarray
                 to_numpy(x: Tuple[Any, ...], preserve_list: Literal[True]) -> Tuple[numpy.ndarray, ...]
                 to_numpy(x: Tuple[Any, ...], preserve_list: Literal[False] = ...) -> numpy.ndarray

   Convert input to a numpy array.

   :param x: Input to be converted.
   :type x: Any
   :param preserve_list: Whether to preserve lists/tuples or convert them to a single
                         numpy array. Defaults to True.
   :type preserve_list: bool, optional


.. py:function:: to_torch(x: numpy.ndarray, preserve_list: bool = ...) -> torch.Tensor
                 to_torch(x: torch.Tensor, preserve_list: bool = ...) -> torch.Tensor
                 to_torch(x: None, preserve_list: bool = ...) -> None
                 to_torch(x: Dict[Any, Any], preserve_list: bool = ...) -> Dict[Any, torch.Tensor]
                 to_torch(x: List[Any], preserve_list: Literal[True]) -> List[torch.Tensor]
                 to_torch(x: List[Any], preserve_list: Literal[False] = ...) -> torch.Tensor
                 to_torch(x: Tuple[Any, ...], preserve_list: Literal[True]) -> Tuple[torch.Tensor, ...]
                 to_torch(x: Tuple[Any, ...], preserve_list: Literal[False] = ...) -> torch.Tensor

   Convert input to a torch tensor.

   :param x: Input to be converted.
   :type x: Any
   :param preserve_list: Whether to preserve lists/tuples or convert them to a single
                         torch tensor. Defaults to True.
   :type preserve_list: bool, optional


.. py:function:: log_once(message: str, level: Union[str, int] = logging.INFO, logger: Optional[logging.Logger] = None)

   Log a message only once (based on the message content and the source code location).

   :param message: message to log.
   :type message: str
   :param level: log level; can be "critical", "error", "warning", "info", "debug", or
                 the corresponding int value (default: "info").
   :type level: str or int
   :param logger: logger to use. Defaults to None.
   :type logger: logging.Logger, optional


.. py:function:: parallel_track(func: Callable[[T], R], args: List[T], num_workers: int = 8, description: str = 'Processing') -> List[R]

   Apply ``func`` to each element of ``args`` in parallel using ``num_workers``
   workers, displaying a progress bar labeled with ``description``, and return the
   results as a list.
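   A sketch of the intended usage, inferred from the signature alone; that results
   come back in input order is an assumption here, not something the documentation
   above guarantees:

   .. code-block:: python

      from fastdev.utils import parallel_track, seed_everything

      def square(x: int) -> int:
          return x * x

      seed_everything(0)  # seed all RNGs before any stochastic work

      # Map `square` over the inputs with 8 workers and a progress bar.
      results = parallel_track(square, list(range(100)), num_workers=8, description="Squaring")
      assert results == [x * x for x in range(100)]  # assumes input order is preserved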