@Namespace(value="torch::nn") @NoOffset @Properties(inherit=torch.class) public class LayerNormImpl extends LayerNormImplCloneable

Applies Layer Normalization over a mini-batch of inputs as described in the paper Layer Normalization: https://arxiv.org/abs/1607.06450.

See https://pytorch.org/docs/master/nn.html#torch.nn.LayerNorm to learn about the exact behavior of this module.

See the documentation for the torch::nn::LayerNormOptions class to learn what constructor arguments are supported for this module.

Example:

LayerNorm model(LayerNormOptions({2, 2}).elementwise_affine(false).eps(2e-5));
Nested classes/interfaces inherited from class org.bytedeco.javacpp.Pointer: Pointer.CustomDeallocator, Pointer.Deallocator, Pointer.NativeDeallocator, Pointer.ReferenceCounter
| Constructor | Description |
|---|---|
| `LayerNormImpl(LayerNormOptions options_)` | |
| `LayerNormImpl(LongVector normalized_shape)` | |
| `LayerNormImpl(Module pointer)` | Downcast constructor. |
| `LayerNormImpl(Pointer p)` | Pointer cast constructor. |
| Modifier and Type | Method and Description |
|---|---|
| `Tensor` | `bias()` The learned bias. |
| `LayerNormImpl` | `bias(Tensor setter)` |
| `Tensor` | `forward(Tensor input)` Applies layer normalization over a mini-batch of inputs as described in the paper Layer Normalization. |
| `LayerNormOptions` | `options()` The options with which this module was constructed. |
| `LayerNormImpl` | `options(LayerNormOptions setter)` |
| `void` | `pretty_print(Pointer stream)` Pretty prints the LayerNorm module into the given stream. |
| `void` | `reset_parameters()` |
| `void` | `reset()` reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules. |
| `Tensor` | `weight()` The learned weight. |
| `LayerNormImpl` | `weight(Tensor setter)` |
Methods inherited from class org.bytedeco.pytorch.LayerNormImplCloneable: asModule, asModule, clone, clone

Methods inherited from class org.bytedeco.pytorch.Module: apply, apply, apply, apply, apply, apply, apply, apply, buffers, buffers, children, eval, is_serializable, is_training, load, modules, modules, name, named_buffers, named_buffers, named_children, named_modules, named_modules, named_modules, named_parameters, named_parameters, parameters, parameters, register_buffer, register_buffer, register_module, register_module, register_parameter, register_parameter, register_parameter, register_parameter, save, shiftLeft, to, to, to, train, unregister_module, unregister_module, zero_grad

Methods inherited from class org.bytedeco.javacpp.Pointer: address, asBuffer, asByteBuffer, availablePhysicalBytes, calloc, capacity, capacity, close, deallocate, deallocate, deallocateReferences, deallocator, deallocator, equals, fill, formatBytes, free, getDirectBufferAddress, getPointer, getPointer, getPointer, getPointer, hashCode, interruptDeallocatorThread, isNull, isNull, limit, limit, malloc, maxBytes, maxPhysicalBytes, memchr, memcmp, memcpy, memmove, memset, offsetAddress, offsetof, offsetof, parseBytes, physicalBytes, physicalBytesInaccurate, position, position, put, realloc, referenceCount, releaseReference, retainReference, setNull, sizeof, sizeof, toString, totalBytes, totalCount, totalPhysicalBytes, withDeallocator, zero
public LayerNormImpl(Pointer p)
Pointer cast constructor. See also: Pointer(Pointer).

public LayerNormImpl(Module pointer)
Downcast constructor.

public LayerNormImpl(@ByVal @Cast(value="std::vector<int64_t>*") LongVector normalized_shape)

public LayerNormImpl(@ByVal LayerNormOptions options_)
public void reset()
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
Overrides: reset in class LayerNormImplCloneable

public void reset_parameters()

public void pretty_print(@Cast(value="std::ostream*") @ByRef Pointer stream)
Pretty prints the LayerNorm module into the given stream.
Overrides: pretty_print in class Module
@ByVal public Tensor forward(@Const @ByRef Tensor input)
Applies layer normalization over a mini-batch of inputs as described in the paper Layer Normalization: https://arxiv.org/abs/1607.06450. The mean and standard deviation are calculated separately over the last certain number of dimensions, which have to be of the shape specified by normalized_shape.

@ByRef public LayerNormOptions options()
The options with which this module was constructed.

public LayerNormImpl options(LayerNormOptions setter)
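For intuition about the computation forward() performs, here is a minimal sketch of the layer-normalization math in plain Java, independent of the native bindings. The class and method names below are illustrative, not part of this API; for simplicity it normalizes a single 1-D slice with scalar weight and bias.

```java
// Sketch of layer normalization over one normalized_shape slice:
// y = (x - mean) / sqrt(var + eps) * weight + bias.
// Illustrates the math only; not part of the bindings.
public class LayerNormSketch {
    static double[] layerNorm(double[] x, double eps, double weight, double bias) {
        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= x.length;

        double var = 0.0;                      // biased variance, as PyTorch uses
        for (double v : x) var += (v - mean) * (v - mean);
        var /= x.length;

        double inv = 1.0 / Math.sqrt(var + eps);
        double[] y = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            y[i] = (x[i] - mean) * inv * weight + bias;
        }
        return y;
    }

    public static void main(String[] args) {
        // Normalized output has (approximately) zero mean and unit variance.
        double[] y = layerNorm(new double[]{1.0, 2.0, 3.0, 4.0}, 1e-5, 1.0, 0.0);
        for (double v : y) System.out.printf("%.4f ", v);
        System.out.println();
    }
}
```

In the module itself, weight and bias are per-element tensors of shape normalized_shape rather than scalars, and they participate only when the elementwise_affine option is set.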
@ByRef public Tensor weight()
The learned weight. Initialized to ones if the elementwise_affine option is set to true upon construction.

public LayerNormImpl weight(Tensor setter)

@ByRef public Tensor bias()
The learned bias. Initialized to zeros if the elementwise_affine option is set to true upon construction.

public LayerNormImpl bias(Tensor setter)
Copyright © 2024. All rights reserved.