espaloma.nn.readout.janossy.JanossyPoolingImproper
- class espaloma.nn.readout.janossy.JanossyPoolingImproper(config, in_features, out_features={'k': 6}, out_features_dimensions=-1)[source]
- Bases: torch.nn.modules.module.Module
- Janossy pooling (arXiv:1811.01900) to average node representations for improper torsions.
- __init__(config, in_features, out_features={'k': 6}, out_features_dimensions=-1)[source]
- Initializes internal Module state, shared by both nn.Module and ScriptModule. 
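- A minimal construction sketch follows. It is not taken from the espaloma docs: the layer-spec config, the feature width, and the graph construction are illustrative assumptions; consult the espaloma examples for the project's actual conventions. The default out_features={'k': 6} requests a six-dimensional 'k' output per improper node.

    # Hypothetical usage sketch; config format and graph construction are assumptions.
    import espaloma as esp

    readout = esp.nn.readout.janossy.JanossyPoolingImproper(
        config=[128, "relu", 128, "relu"],  # assumed hidden-layer spec
        in_features=128,                    # width of incoming node representations
    )

    # g is assumed to be a DGL heterograph with node representations already attached,
    # e.g. the output of an upstream espaloma representation stage:
    #     g = esp.Graph("c1ccccc1").heterograph
    #     g = representation(g)
    # g = readout(g)  # forward(g) writes the pooled improper parameters onto the graph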
- Methods
- __init__(config, in_features[, ...]): Initializes internal Module state, shared by both nn.Module and ScriptModule.
- add_module(name, module): Adds a child module to the current module.
- apply(fn): Applies fn recursively to every submodule (as returned by .children()) as well as self.
- bfloat16(): Casts all floating point parameters and buffers to bfloat16 datatype.
- buffers([recurse]): Returns an iterator over module buffers.
- children(): Returns an iterator over immediate children modules.
- cpu(): Moves all model parameters and buffers to the CPU.
- cuda([device]): Moves all model parameters and buffers to the GPU.
- double(): Casts all floating point parameters and buffers to double datatype.
- eval(): Sets the module in evaluation mode.
- extra_repr(): Sets the extra representation of the module.
- float(): Casts all floating point parameters and buffers to float datatype.
- forward(g): Forward pass.
- get_buffer(target): Returns the buffer given by target if it exists, otherwise throws an error.
- get_extra_state(): Returns any extra state to include in the module's state_dict.
- get_parameter(target): Returns the parameter given by target if it exists, otherwise throws an error.
- get_submodule(target): Returns the submodule given by target if it exists, otherwise throws an error.
- half(): Casts all floating point parameters and buffers to half datatype.
- load_state_dict(state_dict[, strict]): Copies parameters and buffers from state_dict into this module and its descendants.
- modules(): Returns an iterator over all modules in the network.
- named_buffers([prefix, recurse]): Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
- named_children(): Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
- named_modules([memo, prefix, remove_duplicate]): Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
- named_parameters([prefix, recurse]): Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
- parameters([recurse]): Returns an iterator over module parameters.
- register_backward_hook(hook): Registers a backward hook on the module.
- register_buffer(name, tensor[, persistent]): Adds a buffer to the module.
- register_forward_hook(hook): Registers a forward hook on the module.
- register_forward_pre_hook(hook): Registers a forward pre-hook on the module.
- register_full_backward_hook(hook): Registers a backward hook on the module.
- register_module(name, module): Alias for add_module().
- register_parameter(name, param): Adds a parameter to the module.
- requires_grad_([requires_grad]): Change if autograd should record operations on parameters in this module.
- set_extra_state(state): This function is called from load_state_dict() to handle any extra state found within the state_dict.
- share_memory(): See torch.Tensor.share_memory_().
- state_dict([destination, prefix, keep_vars]): Returns a dictionary containing a whole state of the module.
- to(*args, **kwargs): Moves and/or casts the parameters and buffers.
- to_empty(*, device): Moves the parameters and buffers to the specified device without copying storage.
- train([mode]): Sets the module in training mode.
- type(dst_type): Casts all parameters and buffers to dst_type.
- xpu([device]): Moves all model parameters and buffers to the XPU.
- zero_grad([set_to_none]): Sets gradients of all model parameters to zero.
- Attributes
- T_destination: alias of TypeVar('T_destination', bound=Mapping[str, torch.Tensor])
- dump_patches: This allows better BC support for load_state_dict().
- add_module(name: str, module: Optional[torch.nn.modules.module.Module]) None
- Adds a child module to the current module. - The module can be accessed as an attribute using the given name. - Args:
- name (string): name of the child module. The child module can be
- accessed from this module using the given name 
 - module (Module): child module to be added to the module. 
 
 - apply(fn: Callable[[torch.nn.modules.module.Module], None]) torch.nn.modules.module.T
- Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also nn-init-doc). - Args:
- fn (Module -> None): function to be applied to each submodule
- Returns:
- Module: self 
- Example:
    >>> @torch.no_grad()
    >>> def init_weights(m):
    >>>     print(m)
    >>>     if type(m) == nn.Linear:
    >>>         m.weight.fill_(1.0)
    >>>         print(m.weight)
    >>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
    >>> net.apply(init_weights)
    Linear(in_features=2, out_features=2, bias=True)
    Parameter containing:
    tensor([[ 1.,  1.],
            [ 1.,  1.]])
    Linear(in_features=2, out_features=2, bias=True)
    Parameter containing:
    tensor([[ 1.,  1.],
            [ 1.,  1.]])
    Sequential(
      (0): Linear(in_features=2, out_features=2, bias=True)
      (1): Linear(in_features=2, out_features=2, bias=True)
    )
    Sequential(
      (0): Linear(in_features=2, out_features=2, bias=True)
      (1): Linear(in_features=2, out_features=2, bias=True)
    )
 - bfloat16() torch.nn.modules.module.T
- Casts all floating point parameters and buffers to bfloat16 datatype. - Note - This method modifies the module in-place. - Returns:
- Module: self 
 
 - buffers(recurse: bool = True) Iterator[torch.Tensor]
- Returns an iterator over module buffers. - Args:
- recurse (bool): if True, then yields buffers of this module
- and all submodules. Otherwise, yields only buffers that are direct members of this module. 
 
- Yields:
- torch.Tensor: module buffer 
- Example:
    >>> for buf in model.buffers():
    >>>     print(type(buf), buf.size())
    <class 'torch.Tensor'> (20L,)
    <class 'torch.Tensor'> (20L, 1L, 5L, 5L)
 - children() Iterator[torch.nn.modules.module.Module]
- Returns an iterator over immediate children modules. - Yields:
- Module: a child module 
 
 - cpu() torch.nn.modules.module.T
- Moves all model parameters and buffers to the CPU. - Note - This method modifies the module in-place. - Returns:
- Module: self 
 
 - cuda(device: Optional[Union[int, torch.device]] = None) torch.nn.modules.module.T
- Moves all model parameters and buffers to the GPU. - This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized. - Note - This method modifies the module in-place. - Args:
- device (int, optional): if specified, all parameters will be
- copied to that device 
 
- Returns:
- Module: self 
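- To make the "call cuda() before constructing the optimizer" advice above concrete, a short sketch (assumes a CUDA device is available; the model and optimizer are placeholders):

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 2)
    model.cuda()  # move parameters first, so the optimizer sees the GPU tensors

    # Constructing the optimizer after .cuda() ensures it holds references to the
    # CUDA parameters rather than the old CPU copies.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)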
 
 - double() torch.nn.modules.module.T
- Casts all floating point parameters and buffers to double datatype. - Note - This method modifies the module in-place. - Returns:
- Module: self 
 
 - eval() torch.nn.modules.module.T
- Sets the module in evaluation mode. - This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. - This is equivalent to self.train(False). - See locally-disable-grad-doc for a comparison between .eval() and several similar mechanisms that may be confused with it. - Returns:
- Module: self 
 
 - extra_repr() str
- Sets the extra representation of the module. - To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
 - float() torch.nn.modules.module.T
- Casts all floating point parameters and buffers to float datatype. - Note - This method modifies the module in-place. - Returns:
- Module: self 
 
 - get_buffer(target: str) torch.Tensor
- Returns the buffer given by target if it exists, otherwise throws an error. - See the docstring for get_submodule for a more detailed explanation of this method's functionality as well as how to correctly specify target. - Args:
- target: The fully-qualified string name of the buffer
- to look for. (See get_submodule for how to specify a fully-qualified string.)
 
- Returns:
- torch.Tensor: The buffer referenced by - target
- Raises:
- AttributeError: If the target string references an invalid
- path or resolves to something that is not a buffer 
 
 
 - get_extra_state() Any
- Returns any extra state to include in the module's state_dict. Implement this and a corresponding set_extra_state() for your module if you need to store extra state. This function is called when building the module's state_dict(). - Note that extra state should be picklable to ensure working serialization of the state_dict. We only provide backwards compatibility guarantees for serializing Tensors; other objects may break backwards compatibility if their serialized pickled form changes. - Returns:
- object: Any extra state to store in the module’s state_dict 
 
 - get_parameter(target: str) torch.nn.parameter.Parameter
- Returns the parameter given by target if it exists, otherwise throws an error. - See the docstring for get_submodule for a more detailed explanation of this method's functionality as well as how to correctly specify target. - Args:
- target: The fully-qualified string name of the Parameter
- to look for. (See get_submodule for how to specify a fully-qualified string.)
 
- Returns:
- torch.nn.Parameter: The Parameter referenced by - target
- Raises:
- AttributeError: If the target string references an invalid
- path or resolves to something that is not an - nn.Parameter
 
 
 - get_submodule(target: str) torch.nn.modules.module.Module
- Returns the submodule given by target if it exists, otherwise throws an error. - For example, let's say you have an nn.Module A that looks like this (see the code sketch below): - (The diagram shows an nn.Module A. A has a nested submodule net_b, which itself has two submodules net_c and linear. net_c then has a submodule conv.) - To check whether or not we have the linear submodule, we would call get_submodule("net_b.linear"). To check whether we have the conv submodule, we would call get_submodule("net_b.net_c.conv"). - The runtime of get_submodule is bounded by the degree of module nesting in target. A query against named_modules achieves the same result, but it is O(N) in the number of transitive modules. So, for a simple check to see if some submodule exists, get_submodule should always be used. - Args:
- target: The fully-qualified string name of the submodule
- to look for. (See above example for how to specify a fully-qualified string.) 
 
- Returns:
- torch.nn.Module: The submodule referenced by - target
- Raises:
- AttributeError: If the target string references an invalid
- path or resolves to something that is not an - nn.Module
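- As a concrete illustration of the fully-qualified names described above, a small sketch (the module tree here is made up to match the diagram):

    import torch.nn as nn

    class NetC(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(16, 33, kernel_size=3)

    class NetB(nn.Module):
        def __init__(self):
            super().__init__()
            self.net_c = NetC()
            self.linear = nn.Linear(100, 200)

    class A(nn.Module):
        def __init__(self):
            super().__init__()
            self.net_b = NetB()

    a = A()
    a.get_submodule("net_b.linear")      # the nn.Linear instance
    a.get_submodule("net_b.net_c.conv")  # the nn.Conv2d instance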
 
 
 - half() torch.nn.modules.module.T
- Casts all floating point parameters and buffers to half datatype. - Note - This method modifies the module in-place. - Returns:
- Module: self 
 
 - load_state_dict(state_dict: collections.OrderedDict[str, torch.Tensor], strict: bool = True)
- Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module's state_dict() function. - Args:
- state_dict (dict): a dict containing parameters and
- persistent buffers. 
- strict (bool, optional): whether to strictly enforce that the keys
- in state_dict match the keys returned by this module's state_dict() function. Default: True
 
- Returns:
- NamedTuple with missing_keys and unexpected_keys fields:
- missing_keys is a list of str containing the missing keys 
- unexpected_keys is a list of str containing the unexpected keys 
 
 
- Note:
- If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.
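- A small sketch of the strict=False behavior and the returned keys (the module and the partial state dict here are illustrative):

    import torch
    import torch.nn as nn

    model = nn.Linear(4, 2)
    partial_state = {"weight": torch.zeros(2, 4)}  # deliberately missing "bias"

    result = model.load_state_dict(partial_state, strict=False)
    print(result.missing_keys)     # ['bias']
    print(result.unexpected_keys)  # []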
 
 - modules() Iterator[torch.nn.modules.module.Module]
- Returns an iterator over all modules in the network. - Yields:
- Module: a module in the network 
- Note:
- Duplicate modules are returned only once. In the following example, l will be returned only once.
- Example:
    >>> l = nn.Linear(2, 2)
    >>> net = nn.Sequential(l, l)
    >>> for idx, m in enumerate(net.modules()):
            print(idx, '->', m)
    0 -> Sequential(
      (0): Linear(in_features=2, out_features=2, bias=True)
      (1): Linear(in_features=2, out_features=2, bias=True)
    )
    1 -> Linear(in_features=2, out_features=2, bias=True)
 - named_buffers(prefix: str = '', recurse: bool = True) Iterator[Tuple[str, torch.Tensor]]
- Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself. - Args:
- prefix (str): prefix to prepend to all buffer names.
- recurse (bool): if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.
- Yields:
- (string, torch.Tensor): Tuple containing the name and buffer 
- Example:
    >>> for name, buf in self.named_buffers():
    >>>     if name in ['running_var']:
    >>>         print(buf.size())
 - named_children() Iterator[Tuple[str, torch.nn.modules.module.Module]]
- Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself. - Yields:
- (string, Module): Tuple containing a name and child module 
- Example:
    >>> for name, module in model.named_children():
    >>>     if name in ['conv4', 'conv5']:
    >>>         print(module)
 - named_modules(memo: Optional[Set[torch.nn.modules.module.Module]] = None, prefix: str = '', remove_duplicate: bool = True)
- Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself. - Args:
- memo: a memo to store the set of modules already added to the result
- prefix: a prefix that will be added to the name of the module
- remove_duplicate: whether to remove the duplicated module instances in the result or not
- Yields:
- (string, Module): Tuple of name and module 
- Note:
- Duplicate modules are returned only once. In the following example, l will be returned only once.
- Example:
    >>> l = nn.Linear(2, 2)
    >>> net = nn.Sequential(l, l)
    >>> for idx, m in enumerate(net.named_modules()):
            print(idx, '->', m)
    0 -> ('', Sequential(
      (0): Linear(in_features=2, out_features=2, bias=True)
      (1): Linear(in_features=2, out_features=2, bias=True)
    ))
    1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
 - named_parameters(prefix: str = '', recurse: bool = True) Iterator[Tuple[str, torch.nn.parameter.Parameter]]
- Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself. - Args:
- prefix (str): prefix to prepend to all parameter names.
- recurse (bool): if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.
- Yields:
- (string, Parameter): Tuple containing the name and parameter 
- Example:
    >>> for name, param in self.named_parameters():
    >>>     if name in ['bias']:
    >>>         print(param.size())
 - parameters(recurse: bool = True) Iterator[torch.nn.parameter.Parameter]
- Returns an iterator over module parameters. - This is typically passed to an optimizer. - Args:
- recurse (bool): if True, then yields parameters of this module
- and all submodules. Otherwise, yields only parameters that are direct members of this module. 
 
- Yields:
- Parameter: module parameter 
- Example:
    >>> for param in model.parameters():
    >>>     print(type(param), param.size())
    <class 'torch.Tensor'> (20L,)
    <class 'torch.Tensor'> (20L, 1L, 5L, 5L)
 - register_backward_hook(hook: Callable[[torch.nn.modules.module.Module, Union[Tuple[torch.Tensor, ...], torch.Tensor], Union[Tuple[torch.Tensor, ...], torch.Tensor]], Union[None, torch.Tensor]]) torch.utils.hooks.RemovableHandle
- Registers a backward hook on the module. - This function is deprecated in favor of register_full_backward_hook() and the behavior of this function will change in future versions. - Returns:
- torch.utils.hooks.RemovableHandle:
- a handle that can be used to remove the added hook by calling - handle.remove()
 
 
 - register_buffer(name: str, tensor: Optional[torch.Tensor], persistent: bool = True) None
- Adds a buffer to the module. - This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm's running_mean is not a parameter, but is part of the module's state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module's state_dict. - Buffers can be accessed as attributes using given names. - Args:
- name (string): name of the buffer. The buffer can be accessed
- from this module using the given name 
- tensor (Tensor or None): buffer to be registered. If None, then operations
- that run on buffers, such as cuda, are ignored. If None, the buffer is not included in the module's state_dict.
- persistent (bool): whether the buffer is part of this module's state_dict.
 
 - Example: - >>> self.register_buffer('running_mean', torch.zeros(num_features)) 
 - register_forward_hook(hook: Callable[[...], None]) torch.utils.hooks.RemovableHandle
- Registers a forward hook on the module. - The hook will be called every time after forward() has computed an output. It should have the following signature: - hook(module, input, output) -> None or modified output - The input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the forward. The hook can modify the output. It can modify the input in-place, but that will have no effect on the forward pass, since the hook is called after forward() has run. - Returns:
- torch.utils.hooks.RemovableHandle:
- a handle that can be used to remove the added hook by calling - handle.remove()
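- A short sketch of the hook signature above (the hook and layer are illustrative):

    import torch
    import torch.nn as nn

    def log_output_shape(module, inputs, output):
        # Runs after forward(); returning None keeps the output unchanged.
        print(type(module).__name__, tuple(output.shape))

    layer = nn.Linear(3, 5)
    handle = layer.register_forward_hook(log_output_shape)
    layer(torch.randn(2, 3))  # prints: Linear (2, 5)
    handle.remove()           # detach the hook when done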
 
 
 - register_forward_pre_hook(hook: Callable[[...], None]) torch.utils.hooks.RemovableHandle
- Registers a forward pre-hook on the module. - The hook will be called every time before forward() is invoked. It should have the following signature: - hook(module, input) -> None or modified input - The input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the forward. The hook can modify the input. The user can either return a tuple or a single modified value from the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple). - Returns:
- torch.utils.hooks.RemovableHandle:
- a handle that can be used to remove the added hook by calling - handle.remove()
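- A sketch of a pre-hook that modifies the positional input (the hook and layer are illustrative):

    import torch
    import torch.nn as nn

    def clamp_input(module, inputs):
        # inputs is the tuple of positional arguments; return a tuple (or a single
        # value, which will be wrapped into a tuple) to replace them.
        (x,) = inputs
        return (x.clamp(min=0.0),)

    layer = nn.Linear(3, 5)
    handle = layer.register_forward_pre_hook(clamp_input)
    out = layer(torch.randn(2, 3))  # forward() now sees the clamped input
    handle.remove()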
 
 
 - register_full_backward_hook(hook: Callable[[torch.nn.modules.module.Module, Union[Tuple[torch.Tensor, ...], torch.Tensor], Union[Tuple[torch.Tensor, ...], torch.Tensor]], Union[None, torch.Tensor]]) torch.utils.hooks.RemovableHandle
- Registers a backward hook on the module. - The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature: - hook(module, grad_input, grad_output) -> tuple(Tensor) or None - The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments. - For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module's forward function. - Warning - Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error. - Returns:
- torch.utils.hooks.RemovableHandle:
- a handle that can be used to remove the added hook by calling - handle.remove()
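- A sketch of a full backward hook that inspects gradients without modifying them (illustrative):

    import torch
    import torch.nn as nn

    def report_grad_norms(module, grad_input, grad_output):
        # grad_input / grad_output are tuples; entries may be None for non-Tensor args.
        norms = [g.norm().item() for g in grad_output if g is not None]
        print("grad_output norms:", norms)
        return None  # do not replace grad_input

    layer = nn.Linear(3, 5)
    handle = layer.register_full_backward_hook(report_grad_norms)
    layer(torch.randn(2, 3)).sum().backward()
    handle.remove()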
 
 
 - register_module(name: str, module: Optional[torch.nn.modules.module.Module]) None
- Alias for - add_module().
 - register_parameter(name: str, param: Optional[torch.nn.parameter.Parameter]) None
- Adds a parameter to the module. - The parameter can be accessed as an attribute using given name. - Args:
- name (string): name of the parameter. The parameter can be accessed
- from this module using the given name 
- param (Parameter or None): parameter to be added to the module. If
- None, then operations that run on parameters, such as cuda, are ignored. If None, the parameter is not included in the module's state_dict.
 
 
 - requires_grad_(requires_grad: bool = True) torch.nn.modules.module.T
- Change if autograd should record operations on parameters in this module. - This method sets the parameters' requires_grad attributes in-place. - This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training). - See locally-disable-grad-doc for a comparison between .requires_grad_() and several similar mechanisms that may be confused with it. - Args:
- requires_grad (bool): whether autograd should record operations on
- parameters in this module. Default: - True.
 
- Returns:
- Module: self 
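- A sketch of the freezing use case mentioned above (the model names are illustrative):

    import torch.nn as nn

    backbone = nn.Linear(8, 8)
    head = nn.Linear(8, 2)

    backbone.requires_grad_(False)  # freeze: autograd stops recording for these params
    # Only the head's parameters will accumulate gradients during training.
    trainable = [p for p in head.parameters() if p.requires_grad]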
 
 - set_extra_state(state: Any)
- This function is called from load_state_dict() to handle any extra state found within the state_dict. Implement this function and a corresponding get_extra_state() for your module if you need to store extra state within its state_dict. - Args:
- state (dict): Extra state from the state_dict 
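- A sketch of a module implementing the get_extra_state()/set_extra_state() pair (the extra state here is illustrative):

    import torch.nn as nn

    class Counter(nn.Module):
        def __init__(self):
            super().__init__()
            self.num_calls = 0  # plain Python state, not a parameter or buffer

        def get_extra_state(self):
            # Included in state_dict(); must be picklable.
            return {"num_calls": self.num_calls}

        def set_extra_state(self, state):
            # Called by load_state_dict() when extra state is present.
            self.num_calls = state["num_calls"]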
 
- share_memory() torch.nn.modules.module.T
- See torch.Tensor.share_memory_()
 - state_dict(destination=None, prefix='', keep_vars=False)
- Returns a dictionary containing a whole state of the module. - Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included. - Returns:
- dict:
- a dictionary containing a whole state of the module 
 
- Example:
    >>> module.state_dict().keys()
    ['bias', 'weight']
 - to(*args, **kwargs)
- Moves and/or casts the parameters and buffers. - This can be called as - to(device=None, dtype=None, non_blocking=False)
 - to(dtype, non_blocking=False)
 - to(tensor, non_blocking=False)
 - to(memory_format=torch.channels_last)
- Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices. - See below for examples. - Note - This method modifies the module in-place. - Args:
- device (torch.device): the desired device of the parameters
- and buffers in this module 
- dtype (torch.dtype): the desired floating point or complex dtype of
- the parameters and buffers in this module 
- tensor (torch.Tensor): Tensor whose dtype and device are the desired
- dtype and device for all parameters and buffers in this module 
- memory_format (torch.memory_format): the desired memory
- format for 4D parameters and buffers in this module (keyword only argument) 
 
- Returns:
- Module: self 
- Examples:
    >>> linear = nn.Linear(2, 2)
    >>> linear.weight
    Parameter containing:
    tensor([[ 0.1913, -0.3420],
            [-0.5113, -0.2325]])
    >>> linear.to(torch.double)
    Linear(in_features=2, out_features=2, bias=True)
    >>> linear.weight
    Parameter containing:
    tensor([[ 0.1913, -0.3420],
            [-0.5113, -0.2325]], dtype=torch.float64)
    >>> gpu1 = torch.device("cuda:1")
    >>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
    Linear(in_features=2, out_features=2, bias=True)
    >>> linear.weight
    Parameter containing:
    tensor([[ 0.1914, -0.3420],
            [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
    >>> cpu = torch.device("cpu")
    >>> linear.to(cpu)
    Linear(in_features=2, out_features=2, bias=True)
    >>> linear.weight
    Parameter containing:
    tensor([[ 0.1914, -0.3420],
            [-0.5112, -0.2324]], dtype=torch.float16)
    >>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
    >>> linear.weight
    Parameter containing:
    tensor([[ 0.3741+0.j,  0.2382+0.j],
            [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
    >>> linear(torch.ones(3, 2, dtype=torch.cdouble))
    tensor([[0.6122+0.j, 0.1150+0.j],
            [0.6122+0.j, 0.1150+0.j],
            [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
 - to_empty(*, device: Union[str, torch.device]) torch.nn.modules.module.T
- Moves the parameters and buffers to the specified device without copying storage. - Args:
- device (torch.device): The desired device of the parameters
- and buffers in this module. 
 
- Returns:
- Module: self 
 
 - train(mode: bool = True) torch.nn.modules.module.T
- Sets the module in training mode. - This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. - Args:
- mode (bool): whether to set training mode (True) or evaluation
- mode (False). Default: True.
 
- Returns:
- Module: self 
 
 - type(dst_type: Union[torch.dtype, str]) torch.nn.modules.module.T
- Casts all parameters and buffers to - dst_type.- Note - This method modifies the module in-place. - Args:
- dst_type (type or string): the desired type 
- Returns:
- Module: self 
 
 - xpu(device: Optional[Union[int, torch.device]] = None) torch.nn.modules.module.T
- Moves all model parameters and buffers to the XPU. - This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized. - Note - This method modifies the module in-place. - Arguments:
- device (int, optional): if specified, all parameters will be
- copied to that device 
 
- Returns:
- Module: self 
 
 - zero_grad(set_to_none: bool = False) None
- Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context. - Args:
- set_to_none (bool): instead of setting to zero, set the grads to None.
- See torch.optim.Optimizer.zero_grad() for details.
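- A short sketch of a training step using zero_grad() (the model, loss, and optimizer are illustrative):

    import torch
    import torch.nn as nn

    model = nn.Linear(3, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    loss = model(torch.randn(4, 3)).pow(2).mean()
    loss.backward()
    optimizer.step()
    model.zero_grad(set_to_none=True)  # drop the grads instead of filling them with zeros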