mixmo.core.loss.WrapperLoss

class mixmo.core.loss.WrapperLoss(config_loss, config_args, device)[source]

Bases: mixmo.core.loss.AbstractLoss

Wrapper around multiple losses. Initialized from the listloss configuration.
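A minimal sketch of what wrapping multiple losses looks like, assuming hypothetical listloss entries with `name`, `coeff`, and `criterion` keys; the actual mixmo config schema may differ:

```python
# Hypothetical sketch: combine several sublosses as a weighted sum.
# The listloss entry keys ("name", "coeff", "criterion") are
# assumptions for illustration, not mixmo's exact schema.

def wrapper_forward(listloss, input, target):
    """Weighted sum of each subloss applied to (input, target)."""
    total = 0.0
    for entry in listloss:
        value = entry["criterion"](input, target)
        total += entry["coeff"] * value
    return total

# Two toy sublosses on scalar predictions.
listloss = [
    {"name": "mse", "coeff": 1.0,
     "criterion": lambda x, y: (x - y) ** 2},
    {"name": "l1", "coeff": 0.5,
     "criterion": lambda x, y: abs(x - y)},
]
loss = wrapper_forward(listloss, input=3.0, target=1.0)
print(loss)  # 1.0 * 4.0 + 0.5 * 2.0 = 5.0
```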

__init__(config_loss, config_args, device)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

Methods

__init__(config_loss, config_args, device)

Initializes internal Module state, shared by both nn.Module and ScriptModule.

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Sets the extra representation of the module.

float()

Casts all floating point parameters and buffers to float datatype.

forward(input, target)

Defines the computation performed at every call.

get_accumulator_stats([format, split])

Gathers tracked stats into a dictionary of formatted strings.

half()

Casts all floating point parameters and buffers to half datatype.

l2_reg()

Computes L2 regularization (weight decay) over the non-excluded parameters.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

print_details()

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor)

Adds a persistent buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Changes whether autograd records operations on parameters in this module.

set_regularized_network(network)

share_memory()

start_accumulator()

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

zero_grad()

Sets gradients of all model parameters to zero.

Attributes

dump_patches

This allows better BC support for load_state_dict().

_forward(input, target)[source]

Performs the loss forward for each subloss and adds L2 regularization.

_forward_subloss(loss, input, target)[source]

Standard loss forward for a single subloss.

_init_get_losses()[source]

Initializes and gathers the losses defined in listloss.

get_accumulator_stats(format='short', split=None)[source]

Gathers tracked stats into a dictionary of formatted strings.
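A sketch of what gathering accumulated stats into formatted strings could look like. The running-sum bookkeeping, the `split` key prefix, and the `'short'` format precision are assumptions, not mixmo's exact behavior:

```python
# Hypothetical sketch: turn accumulated running sums into a dict of
# formatted mean-value strings. Key naming and precision choices are
# assumptions for illustration.

def get_accumulator_stats(running_sums, counts, format="short", split=None):
    """Return {name: formatted mean} for each tracked stat."""
    stats = {}
    for name, total in running_sums.items():
        mean = total / max(counts[name], 1)
        key = f"{split}/{name}" if split else name
        stats[key] = f"{mean:.3f}" if format == "short" else f"{mean:.6f}"
    return stats

stats = get_accumulator_stats({"loss": 4.5}, {"loss": 3}, split="train")
print(stats)  # {'train/loss': '1.500'}
```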

l2_reg()[source]

Computes L2 regularization (weight decay) over the non-excluded parameters.
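A self-contained sketch of L2 regularization over non-excluded parameters. Skipping parameters by name substring (e.g. biases and batch-norm weights) is a common convention and an assumption here, not necessarily mixmo's exact exclusion criterion:

```python
# Hypothetical sketch: sum of squared parameter values, skipping
# parameters whose name matches an exclusion substring. The exclusion
# rule ("bias", "bn") is an assumed convention for illustration.

def l2_reg(named_params, excluded_substrings=("bias", "bn")):
    """Sum of squares over parameters not matching any excluded name."""
    total = 0.0
    for name, values in named_params:
        if any(s in name for s in excluded_substrings):
            continue
        total += sum(v * v for v in values)
    return total

params = [
    ("layer1.weight", [1.0, 2.0]),  # included: 1 + 4 = 5
    ("layer1.bias", [10.0]),        # excluded by name
    ("bn1.weight", [3.0]),          # excluded by name
]
print(l2_reg(params))  # 5.0
```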