plenoptic.simulate.canonical_computations package

Submodules

plenoptic.simulate.canonical_computations.filters module

plenoptic.simulate.canonical_computations.filters.circular_gaussian2d(kernel_size, std, out_channels=1)[source]

Create a normalized, centered, circular 2D Gaussian tensor with which to convolve.

Parameters:
  • kernel_size (int | tuple[int, int]) – Filter kernel size. Recommended to be odd so that kernel is properly centered.

  • std (float | Tensor) – Standard deviation of 2D circular Gaussian.

  • out_channels (int) – Number of channels with same kernel repeated along channel dim.

Returns:

Circular Gaussian kernel, normalized by the total pixel sum (not by 2*pi*std). filt has Size([out_channels, in_channels=1, height, width]).

Return type:

filt
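A minimal usage sketch based on the signature above (the image, kernel size, and std are illustrative; applying the filter with torch.nn.functional.conv2d is an assumption about typical downstream use, not part of this function):

import torch
from plenoptic.simulate.canonical_computations.filters import circular_gaussian2d

# 7x7 circular Gaussian, repeated across 3 output channels
filt = circular_gaussian2d(kernel_size=7, std=2.0, out_channels=3)
print(filt.shape)               # torch.Size([3, 1, 7, 7])
print(filt.sum(dim=(-2, -1)))   # each kernel sums to 1

# blur a (batch, channel, height, width) image channel-by-channel
img = torch.rand(1, 3, 64, 64)
blurred = torch.nn.functional.conv2d(img, filt, padding=3, groups=3)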

plenoptic.simulate.canonical_computations.filters.gaussian1d(kernel_size=11, std=1.5)[source]

Normalized 1D Gaussian.

1d Gaussian of size kernel_size, centered half-way, with standard deviation std and a sum of 1.

With default values, this is the 1d Gaussian used to generate the windows for SSIM.

Parameters:
  • kernel_size (int) – Size of Gaussian. Recommended to be odd so that kernel is properly centered.

  • std (int | float | Tensor) – Standard deviation of Gaussian.

Returns:

1d Gaussian with Size([kernel_size]).

Return type:

filt
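A short sketch of the default SSIM window (the outer product to form a 2d window is an assumption about typical use, not part of this function):

import torch
from plenoptic.simulate.canonical_computations.filters import gaussian1d

# default parameters give the 1d Gaussian used for SSIM windows
g = gaussian1d(kernel_size=11, std=1.5)
print(g.shape)   # torch.Size([11])
print(g.sum())   # tensor(1.)

# a separable 2d window can be formed as an outer product
window_2d = torch.outer(g, g)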

plenoptic.simulate.canonical_computations.laplacian_pyramid module

class plenoptic.simulate.canonical_computations.laplacian_pyramid.LaplacianPyramid(n_scales=5, scale_filter=False)[source]

Bases: Module

Laplacian Pyramid in Torch.

The Laplacian pyramid [1] is a multiscale image representation. It decomposes the image by computing the local mean with Gaussian blurring filters, subtracting it from the image, and repeating this operation on the local mean itself after downsampling. This representation is overcomplete and invertible.

Parameters:
  • n_scales (int) – number of scales to compute

  • scale_filter (bool, optional) – If True, the norm of the downsampling/upsampling filter is 1. If False (default), it is 2. If the norm is 1, the image is multiplied by 4 during the upsampling operation; the net effect is that the nth scale of the pyramid is divided by 2^n.

References

[1]

Burt, P. and Adelson, E., 1983. The Laplacian pyramid as a compact image code. IEEE Transactions on communications, 31(4), pp.532-540.

Methods

add_module(name, module)

Add a child module to the current module.

apply(fn)

Apply fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Return an iterator over module buffers.

children()

Return an iterator over immediate children modules.

compile(*args, **kwargs)

Compile this Module's forward using torch.compile().

cpu()

Move all model parameters and buffers to the CPU.

cuda([device])

Move all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Set the module in evaluation mode.

extra_repr()

Set the extra representation of the module.

float()

Casts all floating point parameters and buffers to float datatype.

forward(x)

Build the Laplacian pyramid of an image.

get_buffer(target)

Return the buffer given by target if it exists, otherwise throw an error.

get_extra_state()

Return any extra state to include in the module's state_dict.

get_parameter(target)

Return the parameter given by target if it exists, otherwise throw an error.

get_submodule(target)

Return the submodule given by target if it exists, otherwise throw an error.

half()

Casts all floating point parameters and buffers to half datatype.

ipu([device])

Move all model parameters and buffers to the IPU.

load_state_dict(state_dict[, strict, assign])

Copy parameters and buffers from state_dict into this module and its descendants.

modules()

Return an iterator over all modules in the network.

mtia([device])

Move all model parameters and buffers to the MTIA.

named_buffers([prefix, recurse, ...])

Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse, ...])

Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Return an iterator over module parameters.

recon_pyr(y)

Reconstruct the image from its Laplacian pyramid.

register_backward_hook(hook)

Register a backward hook on the module.

register_buffer(name, tensor[, persistent])

Add a buffer to the module.

register_forward_hook(hook, *[, prepend, ...])

Register a forward hook on the module.

register_forward_pre_hook(hook, *[, ...])

Register a forward pre-hook on the module.

register_full_backward_hook(hook[, prepend])

Register a backward hook on the module.

register_full_backward_pre_hook(hook[, prepend])

Register a backward pre-hook on the module.

register_load_state_dict_post_hook(hook)

Register a post-hook to be run after module's load_state_dict() is called.

register_load_state_dict_pre_hook(hook)

Register a pre-hook to be run before module's load_state_dict() is called.

register_module(name, module)

Alias for add_module().

register_parameter(name, param)

Add a parameter to the module.

register_state_dict_post_hook(hook)

Register a post-hook for the state_dict() method.

register_state_dict_pre_hook(hook)

Register a pre-hook for the state_dict() method.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

Set extra state contained in the loaded state_dict.

set_submodule(target, module)

Set the submodule given by target if it exists, otherwise throw an error.

share_memory()

See torch.Tensor.share_memory_().

state_dict(*args[, destination, prefix, ...])

Return a dictionary containing references to the whole state of the module.

to(*args, **kwargs)

Move and/or cast the parameters and buffers.

to_empty(*, device[, recurse])

Move the parameters and buffers to the specified device without copying storage.

train([mode])

Set the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Move all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Reset gradients of all model parameters.

__call__

forward(x)[source]

Build the Laplacian pyramid of an image.

Parameters:

x (torch.Tensor of shape (batch, channel, height, width)) – Image, or batch of images. If there are multiple channels, the Laplacian pyramid is computed separately for each of them.

Returns:

y – Laplacian pyramid representation; each element of the list corresponds to a scale, from fine to coarse.

Return type:

list of torch.Tensor

recon_pyr(y)[source]

Reconstruct the image from its Laplacian pyramid.

Parameters:

y (list of torch.Tensor) – Laplacian pyramid representation; each element of the list corresponds to a scale, from fine to coarse.

Returns:

x – Image, or batch of images

Return type:

torch.Tensor of shape (batch, channel, height, width)
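A roundtrip sketch based on the forward and recon_pyr signatures above (image size, number of scales, and the tolerance are illustrative):

import torch
from plenoptic.simulate.canonical_computations.laplacian_pyramid import LaplacianPyramid

lpyr = LaplacianPyramid(n_scales=4)
img = torch.rand(1, 1, 256, 256)               # (batch, channel, height, width)

y = lpyr(img)                                  # list of tensors, fine to coarse
print([level.shape for level in y])

recon = lpyr.recon_pyr(y)                      # the representation is invertible
print(torch.allclose(recon, img, atol=1e-5))   # expected to be close up to floating point error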

plenoptic.simulate.canonical_computations.non_linearities module

plenoptic.simulate.canonical_computations.non_linearities.local_gain_control(x, epsilon=1e-08)[source]

Spatially local gain control.

Parameters:
  • x (torch.Tensor) – Tensor of shape (batch, channel, height, width)

  • epsilon (float, optional) – Small constant to avoid division by zero.

Returns:

  • norm (torch.Tensor) – The local energy of x. Note that it is downsampled by a factor of 2 in each spatial dimension (unlike rectangular_to_polar).

  • direction (torch.Tensor) – The local phase of x (a.k.a. local unit vector, or local state).

Notes

This function is an analogue to rectangular_to_polar for real-valued signals.

Norm and direction (analogous to complex modulus and phase) are defined using a blurring operator and division. Blurring the responses removes the high frequencies introduced by the squaring operation; in the complex case, adding the quadrature-pair response has the same effect (this is most clearly seen in the frequency domain). Computing the direction (phase) then reduces to dividing out the norm (modulus), since the signal has only one real component. This is a normalization operation (local unit vector), hence the connection to local gain control.
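A minimal sketch of the decomposition (the input shape is illustrative):

import torch
from plenoptic.simulate.canonical_computations.non_linearities import local_gain_control

x = torch.rand(1, 1, 64, 64)
norm, direction = local_gain_control(x)
# per the docstring, norm (the local energy) is downsampled by a factor of 2
print(norm.shape, direction.shape)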

plenoptic.simulate.canonical_computations.non_linearities.local_gain_control_dict(coeff_dict, residuals=True)[source]

Spatially local gain control, for each element in a dictionary.

Parameters:
  • coeff_dict (dict) – A dictionary containing tensors of shape (batch, channel, height, width)

  • residuals (bool, optional) – An option to carry around residuals in the energy dict. Note that the transformation is not applied to the residuals, that is, dictionary elements whose key starts with “residual”.

Returns:

  • energy (dict) – The dictionary of torch.Tensors containing the local energy of x.

  • state (dict) – The dictionary of torch.Tensors containing the local phase of x.

Notes

Note that energy and state are not computed on the residuals.

The inverse operation is achieved by local_gain_release_dict. This function is an analogue to rectangular_to_polar_dict for real-valued signals. For more details, see local_gain_control().
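A sketch using a toy coefficient dictionary together with its inverse, local_gain_release_dict (in practice the dictionary would come from a real-valued pyramid; the keys and shapes below are illustrative):

import torch
from plenoptic.simulate.canonical_computations.non_linearities import (
    local_gain_control_dict,
    local_gain_release_dict,
)

coeff_dict = {
    (0, 0): torch.rand(1, 1, 64, 64),
    (1, 0): torch.rand(1, 1, 64, 64),
    "residual_lowpass": torch.rand(1, 1, 64, 64),
}
energy, state = local_gain_control_dict(coeff_dict, residuals=True)
# the residual entry is carried through untouched
recovered = local_gain_release_dict(energy, state, residuals=True)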

plenoptic.simulate.canonical_computations.non_linearities.local_gain_release(norm, direction, epsilon=1e-08)[source]

Spatially local gain release.

Parameters:
  • norm (torch.Tensor) – The local energy of x. Note that it is downsampled by a factor of 2 in each spatial dimension (unlike rectangular_to_polar).

  • direction (torch.Tensor) – The local phase of x (a.k.a. local unit vector, or local state).

  • epsilon (float, optional) – Small constant to avoid division by zero.

Returns:

x – Tensor of shape (batch, channel, height, width)

Return type:

torch.Tensor

Notes

This function is an analogue to polar_to_rectangular for real-valued signals.

Norm and direction (analogous to complex modulus and phase) are defined using a blurring operator and division. Blurring the responses removes the high frequencies introduced by the squaring operation; in the complex case, adding the quadrature-pair response has the same effect (this is most clearly seen in the frequency domain). Computing the direction (phase) then reduces to dividing out the norm (modulus), since the signal has only one real component. This is a normalization operation (local unit vector), hence the connection to local gain control.
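A sketch of the release step undoing local_gain_control (shapes are illustrative):

import torch
from plenoptic.simulate.canonical_computations.non_linearities import (
    local_gain_control,
    local_gain_release,
)

x = torch.rand(1, 1, 64, 64)
norm, direction = local_gain_control(x)
x_hat = local_gain_release(norm, direction)   # intended to invert local_gain_control
print(x_hat.shape)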

plenoptic.simulate.canonical_computations.non_linearities.local_gain_release_dict(energy, state, residuals=True)[source]

Spatially local gain release, for each element in a dictionary.

Parameters:
  • energy (dict) – The dictionary of torch.Tensors containing the local energy of x.

  • state (dict) – The dictionary of torch.Tensors containing the local phase of x.

  • residuals (bool, optional) – An option to carry around residuals in the energy dict. Note that the transformation is not applied to the residuals, that is, dictionary elements whose key starts with “residual”.

Returns:

coeff_dict – A dictionary containing tensors of shape (batch, channel, height, width)

Return type:

dict

Notes

The inverse operation to local_gain_control_dict. This function is an analogue to polar_to_rectangular_dict for real-valued signals. For more details, see local_gain_release().

plenoptic.simulate.canonical_computations.non_linearities.polar_to_rectangular_dict(energy, state, residuals=True)[source]

Recombine the complex modulus and phase of each element in a dictionary into complex tensors.

Parameters:
  • energy (dict) – The dictionary of torch.Tensors containing the local complex modulus.

  • state (dict) – The dictionary of torch.Tensors containing the local phase.

  • residuals (bool, optional) – An option to carry around residuals in the energy branch.

Returns:

coeff_dict – A dictionary containing complex tensors of coefficients.

Return type:

dict

plenoptic.simulate.canonical_computations.non_linearities.rectangular_to_polar_dict(coeff_dict, residuals=False)[source]

Return the complex modulus and the phase of each complex tensor in a dictionary.

Parameters:
  • coeff_dict (dict) – A dictionary containing complex tensors.

  • residuals (bool, optional) – An option to carry around residuals in the energy branch.

Returns:

  • energy (dict) – The dictionary of torch.Tensors containing the local complex modulus of coeff_dict.

  • state (dict) – The dictionary of torch.Tensors containing the local phase of coeff_dict.
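A roundtrip sketch for the complex-valued case (the toy dictionary stands in for coefficients from a complex steerable pyramid; keys and shapes are illustrative):

import torch
from plenoptic.simulate.canonical_computations.non_linearities import (
    polar_to_rectangular_dict,
    rectangular_to_polar_dict,
)

coeff_dict = {
    (0, 0): torch.randn(1, 1, 32, 32, dtype=torch.complex64),
    (0, 1): torch.randn(1, 1, 32, 32, dtype=torch.complex64),
    "residual_lowpass": torch.randn(1, 1, 32, 32),
}
energy, state = rectangular_to_polar_dict(coeff_dict, residuals=True)
recovered = polar_to_rectangular_dict(energy, state, residuals=True)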

plenoptic.simulate.canonical_computations.steerable_pyramid_freq module

Steerable frequency pyramid

Construct a steerable pyramid on two-dimensional signals, in the Fourier domain.

class plenoptic.simulate.canonical_computations.steerable_pyramid_freq.SteerablePyramidFreq(image_shape, height='auto', order=3, twidth=1, is_complex=False, downsample=True, tight_frame=False)[source]

Bases: Module

Steerable frequency pyramid in Torch

Construct a steerable pyramid on two-dimensional signals, in the Fourier domain. Boundary-handling is circular. Reconstruction is exact (within floating point errors). However, if the image has an odd shape, the reconstruction will not be exact due to boundary-handling issues that have not been resolved.

The squared radial functions tile the Fourier plane with a raised-cosine falloff. Angular functions are cos(theta - k*pi/(order+1))^order.

Notes

Transform described in [1], filter kernel design described in [2]. For further information, see the project webpage: https://www.cns.nyu.edu/~eero/steerpyr/

Parameters:
  • image_shape (list or tuple) – shape of input image

  • height (‘auto’ or int) – The height of the pyramid. If ‘auto’, will automatically determine based on the size of image. If an int, must be non-negative and less than log2(min(image_shape[0], image_shape[1]))-2. If height=0, this only returns the residuals.

  • order (int) – The Gaussian derivative order used for the steerable filters, in the range [1, 15]. Note that to achieve steerability, the minimum number of orientations is order + 1, and that is the number used here. To get more orientations at the same order, use the method steer_coeffs.

  • twidth (int) – The width of the transition region of the radial lowpass function, in octaves

  • is_complex (bool) – Whether the pyramid coefficients should be complex or not. If True, the real and imaginary parts correspond to a pair of even- and odd-symmetric filters. If False, the coefficients only include the real part (even-symmetric filters).

  • downsample (bool) – Whether to downsample each scale in the pyramid or keep the output pyramid coefficients in fixed bands of the same size as the input image. When downsample is False, the forward method returns a tensor.

  • tight_frame (bool, default: False) – Whether the pyramid obeys the generalized Parseval theorem (i.e., is a tight frame). If True, the energy of the pyr_coeffs equals the energy of the image; if False, it does not. In order to match the matlabPyrTools or pyrtools pyramids, this must be set to False.

image_shape

shape of input image

Type:

list or tuple

pyr_size

Dictionary containing the sizes of the pyramid coefficients. Keys are (level, band) tuples and values are tuples.

Type:

dict

fft_norm

The way the ffts are normalized, see pytorch documentation for more details.

Type:

str

is_complex

Whether the coefficients are complex- or real-valued.

Type:

bool

References

[1]

E P Simoncelli and W T Freeman, “The Steerable Pyramid: A Flexible Architecture for Multi-Scale Derivative Computation,” Second Int’l Conf on Image Processing, Washington, DC, Oct 1995.

[2]

A Karasaridis and E P Simoncelli, “A Filter Design Technique for Steerable Pyramid Image Transforms”, ICASSP, Atlanta, GA, May 1996.

Methods

add_module(name, module)

Add a child module to the current module.

apply(fn)

Apply fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Return an iterator over module buffers.

children()

Return an iterator over immediate children modules.

compile(*args, **kwargs)

Compile this Module's forward using torch.compile().

convert_pyr_to_tensor(pyr_coeffs[, ...])

Convert coefficient dictionary to a tensor.

convert_tensor_to_pyr(pyr_tensor, ...)

Convert pyramid coefficient tensor to dictionary format.

cpu()

Move all model parameters and buffers to the CPU.

cuda([device])

Move all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Set the module in evaluation mode.

extra_repr()

Set the extra representation of the module.

float()

Casts all floating point parameters and buffers to float datatype.

forward(x[, scales])

Generate the steerable pyramid coefficients for an image

get_buffer(target)

Return the buffer given by target if it exists, otherwise throw an error.

get_extra_state()

Return any extra state to include in the module's state_dict.

get_parameter(target)

Return the parameter given by target if it exists, otherwise throw an error.

get_submodule(target)

Return the submodule given by target if it exists, otherwise throw an error.

half()

Casts all floating point parameters and buffers to half datatype.

ipu([device])

Move all model parameters and buffers to the IPU.

load_state_dict(state_dict[, strict, assign])

Copy parameters and buffers from state_dict into this module and its descendants.

modules()

Return an iterator over all modules in the network.

mtia([device])

Move all model parameters and buffers to the MTIA.

named_buffers([prefix, recurse, ...])

Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse, ...])

Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Return an iterator over module parameters.

recon_pyr(pyr_coeffs[, levels, bands])

Reconstruct the image or batch of images, optionally using subset of pyramid coefficients.

register_backward_hook(hook)

Register a backward hook on the module.

register_buffer(name, tensor[, persistent])

Add a buffer to the module.

register_forward_hook(hook, *[, prepend, ...])

Register a forward hook on the module.

register_forward_pre_hook(hook, *[, ...])

Register a forward pre-hook on the module.

register_full_backward_hook(hook[, prepend])

Register a backward hook on the module.

register_full_backward_pre_hook(hook[, prepend])

Register a backward pre-hook on the module.

register_load_state_dict_post_hook(hook)

Register a post-hook to be run after module's load_state_dict() is called.

register_load_state_dict_pre_hook(hook)

Register a pre-hook to be run before module's load_state_dict() is called.

register_module(name, module)

Alias for add_module().

register_parameter(name, param)

Add a parameter to the module.

register_state_dict_post_hook(hook)

Register a post-hook for the state_dict() method.

register_state_dict_pre_hook(hook)

Register a pre-hook for the state_dict() method.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

Set extra state contained in the loaded state_dict.

set_submodule(target, module)

Set the submodule given by target if it exists, otherwise throw an error.

share_memory()

See torch.Tensor.share_memory_().

state_dict(*args[, destination, prefix, ...])

Return a dictionary containing references to the whole state of the module.

steer_coeffs(pyr_coeffs, angles[, even_phase])

Steer pyramid coefficients to the specified angles

to(*args, **kwargs)

Move and/or cast the parameters and buffers.

to_empty(*, device[, recurse])

Move the parameters and buffers to the specified device without copying storage.

train([mode])

Set the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Move all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Reset gradients of all model parameters.

__call__

static convert_pyr_to_tensor(pyr_coeffs, split_complex=False)[source]

Convert coefficient dictionary to a tensor.

The output tensor has shape (batch, channel, height, width) and is intended to be used in a torch.nn.Module downstream. In the multichannel case, all bands for each channel will be stacked together (i.e., if there are 2 channels and 18 bands per channel, pyr_tensor[:, 0:18, …] will contain the pyramid responses for channel 1 and pyr_tensor[:, 18:36, …] will contain the responses for channel 2). In the case of a complex, multichannel pyramid with split_complex=True, the real/imaginary bands will be interleaved so that they appear as pairs with neighboring indices in the channel dimension of the tensor. (Note: the residual bands are always real, so they will only ever have a single band even when split_complex=True.)

This only works if pyr_coeffs was created by a pyramid with downsample=False.

Parameters:
  • pyr_coeffs (OrderedDict) – the pyramid coefficients

  • split_complex (bool) – indicates whether the output should split complex bands into real/imag channels or keep them as a single channel. This should be True if you intend to use a convolutional layer on top of the output.

Return type:

tuple[Tensor, tuple[int, bool, list[Union[tuple[int, int], Literal['residual_lowpass', 'residual_highpass']]]]]

Returns:

  • pyr_tensor – shape (batch, channel, height, width). pyramid coefficients reshaped into tensor. The first channel will be the residual highpass and the last will be the residual lowpass. Each band is then a separate channel.

  • pyr_info – Information required to recreate the dictionary, containing the number of channels, if split_complex was used in this function call, and the list of pyramid keys for the dictionary

See also

convert_tensor_to_pyr

Convert tensor representation to pyramid dictionary.
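A conversion sketch (this assumes, consistent with the forward() documentation below, that the pyramid's forward call returns the coefficient dictionary; image size and pyramid parameters are illustrative):

import torch
from plenoptic.simulate.canonical_computations.steerable_pyramid_freq import SteerablePyramidFreq

img = torch.rand(1, 1, 256, 256)
# downsample=False is required for the tensor conversion
pyr = SteerablePyramidFreq(image_shape=[256, 256], height=3, order=3, downsample=False)
pyr_coeffs = pyr(img)

pyr_tensor, pyr_info = SteerablePyramidFreq.convert_pyr_to_tensor(pyr_coeffs, split_complex=False)
print(pyr_tensor.shape)   # (batch, n_bands, height, width)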

static convert_tensor_to_pyr(pyr_tensor, num_channels, split_complex, pyr_keys)[source]

Convert pyramid coefficient tensor to dictionary format.

num_channels, split_complex, and pyr_keys are elements of the pyr_info tuple returned by convert_pyr_to_tensor. You should always unpack the arguments for this function from that pyr_info tuple. Example Usage:

pyr_tensor, pyr_info = convert_pyr_to_tensor(pyr_coeffs, split_complex=True)
pyr_dict = convert_tensor_to_pyr(pyr_tensor, *pyr_info)
Parameters:
  • pyr_tensor (Tensor) – Shape (batch, channel, height, width). The pyramid coefficients

  • num_channels (int) – number of channels in the original input tensor the pyramid was created for (i.e. if the input was an RGB image, this would be 3)

  • split_complex (bool) – Whether pyr_tensor was created with complex channels split into real/imaginary parts (only relevant if the pyramid is complex).

  • pyr_keys (list[Union[tuple[int, int], Literal['residual_lowpass', 'residual_highpass']]]) – The list of keys for the original pyramid dictionary.

Returns:

pyramid coefficients in dictionary format

Return type:

pyr_coeffs

See also

convert_pyr_to_tensor

Convert pyramid dictionary representation to tensor.

forward(x, scales=None)[source]

Generate the steerable pyramid coefficients for an image

Parameters:
  • x (Tensor) – A tensor containing the image to analyze, as a 4d tensor of shape (batch, channel, height, width).

  • scales (list[Union[int, Literal['residual_lowpass', 'residual_highpass']]] | None) – Which scales to include in the returned representation. If None, we include all scales. Otherwise, can contain a subset of values present in this model’s scales attribute (ints from 0 up to self.num_scales-1 and the strs ‘residual_highpass’ and ‘residual_lowpass’). Can contain a single value or multiple values. If it’s an int, we include all orientations from that scale. Order within the list does not matter.

Returns:

Pyramid coefficients

Return type:

representation
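A minimal sketch of building a pyramid and inspecting its coefficients (image size and parameters are illustrative):

import torch
from plenoptic.simulate.canonical_computations.steerable_pyramid_freq import SteerablePyramidFreq

img = torch.rand(1, 1, 256, 256)
pyr = SteerablePyramidFreq(image_shape=[256, 256], height=4, order=3)
coeffs = pyr(img)
# keys are (scale, orientation) tuples plus 'residual_highpass' and 'residual_lowpass'
print(list(coeffs.keys()))

# restrict the representation to the two finest scales only
fine_coeffs = pyr(img, scales=[0, 1])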

recon_pyr(pyr_coeffs, levels='all', bands='all')[source]

Reconstruct the image or batch of images, optionally using subset of pyramid coefficients.

NOTE: in order to call this function, you need to have previously called self.forward(x), where x is the tensor you wish to reconstruct. This will fail if you called forward() with a subset of scales.

Parameters:
  • pyr_coeffs (OrderedDict) – pyramid coefficients to reconstruct from

  • levels (Union[Literal['all'], list[Union[int, Literal['residual_lowpass', 'residual_highpass']]]]) – If a list, should contain some subset of integers from 0 to self.num_scales-1 (inclusive), ‘residual_lowpass’, and ‘residual_highpass’. If ‘all’, the returned value will contain all valid levels. Otherwise, must be one of the valid levels.

  • bands (Union[Literal['all'], list[int]]) – If a list, should contain some subset of integers from 0 to self.num_orientations-1. If ‘all’, the returned value will contain all valid orientations. Otherwise, must be one of the valid orientations.

Returns:

The reconstructed image, of shape (batch, channel, height, width)

Return type:

recon
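A reconstruction sketch (self-contained; the tolerance and the partial-reconstruction levels are illustrative):

import torch
from plenoptic.simulate.canonical_computations.steerable_pyramid_freq import SteerablePyramidFreq

img = torch.rand(1, 1, 256, 256)
pyr = SteerablePyramidFreq(image_shape=[256, 256], height=4, order=3)
coeffs = pyr(img)                     # forward() must be called with all scales first

recon = pyr.recon_pyr(coeffs)         # full reconstruction
print(torch.allclose(recon, img, atol=1e-4))

# reconstruct only from the two coarsest scales and the lowpass residual
coarse = pyr.recon_pyr(coeffs, levels=[2, 3, "residual_lowpass"])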

steer_coeffs(pyr_coeffs, angles, even_phase=True)[source]

Steer pyramid coefficients to the specified angles

This allows you to have filters that have the Gaussian derivative order specified in construction, but arbitrary angles or number of orientations.

Parameters:
  • pyr_coeffs (OrderedDict) – the pyramid coefficients to steer

  • angles (list[float]) – list of angles (in radians) to steer the pyramid coefficients to

  • even_phase (bool) – specifies whether the harmonics are cosine or sine phase aligned about those positions.

Return type:

tuple[dict, dict]

Returns:

  • resteered_coeffs – Dictionary of re-steered pyramid coefficients. It will have the same number of scales as the original pyramid (though it will not contain the residual highpass or lowpass). Like pyr_coeffs, keys are 2-tuples of ints indexing the scale and orientation, but the orientation index now refers to the requested angles rather than self.num_orientations.

  • resteering_weights – Dictionary of weights used to re-steer the pyramid coefficients. It will have the same keys as resteered_coeffs.
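A sketch of re-steering to eight evenly spaced orientations (the angles and pyramid parameters are illustrative):

import math
import torch
from plenoptic.simulate.canonical_computations.steerable_pyramid_freq import SteerablePyramidFreq

img = torch.rand(1, 1, 128, 128)
pyr = SteerablePyramidFreq(image_shape=[128, 128], height=3, order=3)
coeffs = pyr(img)

angles = [i * math.pi / 8 for i in range(8)]
resteered, weights = pyr.steer_coeffs(coeffs, angles)
# keys of resteered are (scale, angle-index) tuples; residual bands are not included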

Module contents