@Namespace(value="torch::nn") @NoOffset @Properties(inherit=torch.class) public class SequentialImpl extends SequentialImplCloneable
A list of Modules that acts as a Module itself.

A Sequential is fundamentally a list of Modules, each with a forward() method.
Sequential provides a forward() method of its own, which accepts any input and
forwards it to the first module it stores. It then "chains" outputs to inputs
sequentially for each subsequent module, finally returning the output of the
last module. For example:
\rst
.. code-block:: cpp

   torch::nn::Sequential seq(
     torch::nn::Linear(3, 4),
     torch::nn::BatchNorm1d(4),
     torch::nn::Dropout(0.5)
   );

   auto output = seq->forward(torch::ones(3));
\endrst
This can conceptually be thought of as the following loop (using Python as
pseudocode):

\rst
.. code-block:: python

   def forward(sequential, input):
     for module in sequential:
       input = module(input)
     return input
\endrst
Why should you use Sequential instead of a simple std::vector? The value a
Sequential provides over manually calling a sequence of modules is that it
allows treating the whole container *as a single module*, such that performing
a transformation on the Sequential applies to each of the modules it stores
(which are each a registered submodule of the Sequential). For example,
calling .to(torch::kCUDA) on a Sequential will move each module in the list
to CUDA memory:
\rst
.. code-block:: cpp

   torch::nn::Sequential seq(
     torch::nn::Linear(3, 4),
     torch::nn::BatchNorm1d(4),
     torch::nn::Dropout(0.5)
   );

   // Convert all modules to CUDA.
   seq->to(torch::kCUDA);
\endrst
Finally, Sequential provides a lightweight container API, such as allowing
iteration over submodules, positional access, adding a new module after
construction via push_back, as well as joining two Sequentials via extend.
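Conceptually, this container API behaves like the following Python pseudocode. The class below is an illustrative sketch only, not the actual binding; the modules are plain callables:

```python
# Illustrative sketch of the Sequential container API
# (not the real torch::nn::Sequential; modules are plain callables here).

class SequentialSketch:
    def __init__(self, *modules):
        self._modules = list(modules)

    def __iter__(self):                # iteration over submodules
        return iter(self._modules)

    def __getitem__(self, index):      # positional access
        return self._modules[index]

    def push_back(self, module):       # add a new module after construction
        self._modules.append(module)

    def extend(self, other):           # join two Sequentials
        self._modules.extend(other)

    def forward(self, x):              # chain outputs to inputs
        for module in self._modules:
            x = module(x)
        return x

seq = SequentialSketch(lambda x: x + 1, lambda x: x * 2)
seq.push_back(lambda x: x - 3)
print(seq.forward(1))  # (1 + 1) * 2 - 3 = 1
```

Because the container is itself a module, transformations applied to it reach every stored submodule, which is exactly what the real implementation exploits for operations like to().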
\rst
.. attention::
   One current limitation of Sequential is that all except the first module
   must accept a single argument. If your modules need to take multiple
   arguments, you should define them to take and return tuples.
\endrst
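In Python pseudocode, the tuple workaround looks like this (a conceptual sketch, not the actual API):

```python
# Sketch of the tuple workaround: every module after the first takes and
# returns a single tuple value, so the single-argument chaining still works.

def split(x):
    return (x, x + 1)      # the first module may take a single argument

def swap(pair):
    a, b = pair            # later modules unpack the tuple...
    return (b, a)          # ...and return a tuple again

def add(pair):
    a, b = pair
    return a + b           # the last module may return any type

modules = [split, swap, add]

def forward(x):
    for module in modules:
        x = module(x)      # each call passes exactly one value
    return x

print(forward(1))  # split -> (1, 2), swap -> (2, 1), add -> 3
```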
Constructors:

SequentialImpl()

SequentialImpl(Module pointer)
Downcast constructor.

SequentialImpl(Pointer p)
Pointer cast constructor.

SequentialImpl(StringAnyModuleDict ordered_dict)
Constructs the Sequential from an OrderedDict of named AnyModules.
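Conceptually, constructing from an OrderedDict of named modules registers each submodule under its name and preserves insertion order, as in this Python sketch (names and callables are illustrative only):

```python
from collections import OrderedDict

# Sketch: a sequential-like container built from named modules.
# Each entry's name would become the registered submodule name.
named = OrderedDict([
    ("inc", lambda x: x + 1),
    ("dbl", lambda x: x * 2),
])

def forward(x):
    # Chaining still runs in insertion order, regardless of names.
    for name, module in named.items():
        x = module(x)
    return x
```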
Methods inherited from class SequentialImplCloneable:
asModule, asModule

Methods inherited from class Module:
apply, apply, apply, apply, apply, apply, apply, apply, buffers, buffers, children, eval, is_serializable, is_training, load, modules, modules, name, named_buffers, named_buffers, named_children, named_modules, named_modules, named_modules, named_parameters, named_parameters, parameters, parameters, register_buffer, register_buffer, register_module, register_module, register_parameter, register_parameter, register_parameter, register_parameter, save, shiftLeft, to, to, to, train, unregister_module, unregister_module, zero_grad

Methods inherited from class org.bytedeco.javacpp.Pointer:
address, asBuffer, asByteBuffer, availablePhysicalBytes, calloc, capacity, capacity, close, deallocate, deallocate, deallocateReferences, deallocator, deallocator, equals, fill, formatBytes, free, getDirectBufferAddress, getPointer, getPointer, getPointer, getPointer, hashCode, interruptDeallocatorThread, isNull, isNull, limit, limit, malloc, maxBytes, maxPhysicalBytes, memchr, memcmp, memcpy, memmove, memset, offsetAddress, offsetof, offsetof, parseBytes, physicalBytes, physicalBytesInaccurate, position, position, put, realloc, referenceCount, releaseReference, retainReference, setNull, sizeof, sizeof, toString, totalBytes, totalCount, totalPhysicalBytes, withDeallocator, zero
public SequentialImpl(Pointer p)
Pointer cast constructor. Invokes Pointer(Pointer).

public SequentialImpl(Module pointer)
Downcast constructor.
public SequentialImpl()
public SequentialImpl(@ByRef(value=true) StringAnyModuleDict ordered_dict)
Constructs the Sequential from an OrderedDict of named AnyModules.

@SharedPtr(value="torch::nn::Module") @ByVal public Module clone(@Const @ByRef(nullValue="c10::optional<torch::Device>(c10::nullopt)") DeviceOptional device)
Special cloning function for Sequential because it does not use reset().
Overrides:
clone in class SequentialImplCloneable
@SharedPtr(value="torch::nn::Module") @ByVal public Module clone()
Overrides:
clone in class SequentialImplCloneable
public void reset()
reset() is empty for Sequential, since it does not have parameters of its own.
Overrides:
reset in class SequentialImplCloneable
public void pretty_print(@Cast(value="std::ostream*") @ByRef Pointer stream)
Pretty prints the Sequential module into the given stream.
Overrides:
pretty_print in class Module
@ByVal public Tensor forward(@Const @ByRef Tensor input)
Feeds inputs to the first module and then chains outputs to inputs, returning
the last output.
Conceptually the following loop in Python:

\rst
.. code-block:: python

   def forward(sequential, input):
     for module in sequential:
       input = module(input)
     return input
\endrst
The return type is taken as the first template parameter. It defaults to
Tensor. If the last module in the Sequential returns another type T, you
should call forward<T>(inputs) instead of just forward(inputs):

\rst
.. code-block:: cpp

   torch::Tensor tensor = sequential1->forward(inputs);
   int integer = sequential2->forward<int>(inputs);
\endrst

@ByVal public Tensor forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3)
@ByVal public Tensor forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4)
@ByVal public Tensor forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4, @Const @ByRef Tensor input5, @Const @ByRef Tensor input6)
@ByVal public Tensor forward(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3, @Const @ByRef Tensor input4, @Const @ByRef Tensor input5, @Const @ByRef Tensor input6, @Const @ByRef Tensor input7, @Const @ByRef Tensor input8)
@ByVal public Tensor forward(@Const @ByRef Tensor input, @ByRef(nullValue="c10::optional<at::IntArrayRef>(c10::nullopt)") @Cast(value={"int64_t*","c10::ArrayRef<int64_t>","std::vector<int64_t>&"}) @StdVector long... output_size)
@ByVal public Tensor forward(@Const @ByRef Tensor input, @Const @ByRef(nullValue="c10::optional<at::IntArrayRef>(c10::nullopt)") LongArrayRefOptional output_size)
@ByVal public Tensor forward(@Const @ByRef Tensor input, @Const @ByRef Tensor indices, @Const @ByRef(nullValue="c10::optional<std::vector<int64_t> >(c10::nullopt)") LongVectorOptional output_size)
@ByVal @Name(value="forward<std::tuple<torch::Tensor,std::tuple<torch::Tensor,torch::Tensor>>>") public T_TensorT_TensorTensor_T_T forwardT_TensorT_TensorTensor_T_T(@Const @ByRef Tensor input)
@ByVal @Name(value="forward<std::tuple<torch::Tensor,std::tuple<torch::Tensor,torch::Tensor>>>") public T_TensorT_TensorTensor_T_T forwardT_TensorT_TensorTensor_T_T(@Const @ByRef Tensor input, @ByVal(nullValue="torch::optional<std::tuple<torch::Tensor,torch::Tensor> >{}") T_TensorTensor_TOptional hx_opt)
@ByVal @Name(value="forward<std::tuple<torch::Tensor,torch::Tensor>>") public T_TensorTensor_T forwardT_TensorTensor_T(@Const @ByRef Tensor input)
@ByVal @Name(value="forward<std::tuple<torch::Tensor,torch::Tensor>>") public T_TensorTensor_T forwardT_TensorTensor_T(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2)
@ByVal @Name(value="forward<std::tuple<torch::Tensor,torch::Tensor>>") public T_TensorTensor_T forwardT_TensorTensor_T(@Const @ByRef Tensor input1, @Const @ByRef Tensor input2, @Const @ByRef Tensor input3)
@ByVal @Name(value="forward<std::tuple<torch::Tensor,torch::Tensor>>") public T_TensorTensor_T forwardT_TensorTensor_T(@Const @ByRef Tensor input, @ByVal(nullValue="torch::optional<std::tuple<torch::Tensor,torch::Tensor> >{}") T_TensorTensor_TOptional hx_opt)
@ByVal @Name(value="forward<std::tuple<torch::Tensor,torch::Tensor>>") public T_TensorTensor_T forwardT_TensorTensor_T(@Const @ByRef Tensor query, @Const @ByRef Tensor key, @Const @ByRef Tensor value, @Const @ByRef(nullValue="torch::Tensor{}") Tensor key_padding_mask, @Cast(value="bool") boolean need_weights, @Const @ByRef(nullValue="torch::Tensor{}") Tensor attn_mask, @Cast(value="bool") boolean average_attn_weights)
@ByVal @Name(value="forward<torch::nn::ASMoutput>") public ASMoutput forwardASMoutput(@Const @ByRef Tensor input, @Const @ByRef Tensor target)
@Name(value="push_back<torch::nn::AdaptiveLogSoftmaxWithLossImpl>") public void push_back(@SharedPtr AdaptiveLogSoftmaxWithLossImpl module_ptr)
Adds a new Module to the Sequential container.

@Name(value="push_back<torch::nn::BatchNorm1dImpl>") public void push_back(@SharedPtr BatchNorm1dImpl module_ptr)
@Name(value="push_back<torch::nn::InstanceNorm1dImpl>") public void push_back(@SharedPtr InstanceNorm1dImpl module_ptr)
@Name(value="push_back<torch::nn::Conv1dImpl>") public void push_back(@SharedPtr Conv1dImpl module_ptr)
@Name(value="push_back<torch::nn::ConvTranspose1dImpl>") public void push_back(@SharedPtr ConvTranspose1dImpl module_ptr)
@Name(value="push_back<torch::nn::DropoutImpl>") public void push_back(@SharedPtr DropoutImpl module_ptr)
@Name(value="push_back<torch::nn::BatchNorm2dImpl>") public void push_back(@SharedPtr BatchNorm2dImpl module_ptr)
@Name(value="push_back<torch::nn::InstanceNorm2dImpl>") public void push_back(@SharedPtr InstanceNorm2dImpl module_ptr)
@Name(value="push_back<torch::nn::Conv2dImpl>") public void push_back(@SharedPtr Conv2dImpl module_ptr)
@Name(value="push_back<torch::nn::ConvTranspose2dImpl>") public void push_back(@SharedPtr ConvTranspose2dImpl module_ptr)
@Name(value="push_back<torch::nn::Dropout2dImpl>") public void push_back(@SharedPtr Dropout2dImpl module_ptr)
@Name(value="push_back<torch::nn::BatchNorm3dImpl>") public void push_back(@SharedPtr BatchNorm3dImpl module_ptr)
@Name(value="push_back<torch::nn::InstanceNorm3dImpl>") public void push_back(@SharedPtr InstanceNorm3dImpl module_ptr)
@Name(value="push_back<torch::nn::Conv3dImpl>") public void push_back(@SharedPtr Conv3dImpl module_ptr)
@Name(value="push_back<torch::nn::ConvTranspose3dImpl>") public void push_back(@SharedPtr ConvTranspose3dImpl module_ptr)
@Name(value="push_back<torch::nn::Dropout3dImpl>") public void push_back(@SharedPtr Dropout3dImpl module_ptr)
@Name(value="push_back<torch::nn::AlphaDropoutImpl>") public void push_back(@SharedPtr AlphaDropoutImpl module_ptr)
@Name(value="push_back<torch::nn::FeatureAlphaDropoutImpl>") public void push_back(@SharedPtr FeatureAlphaDropoutImpl module_ptr)
@Name(value="push_back<torch::nn::CosineSimilarityImpl>") public void push_back(@SharedPtr CosineSimilarityImpl module_ptr)
@Name(value="push_back<torch::nn::PairwiseDistanceImpl>") public void push_back(@SharedPtr PairwiseDistanceImpl module_ptr)
@Name(value="push_back<torch::nn::EmbeddingImpl>") public void push_back(@SharedPtr EmbeddingImpl module_ptr)
@Name(value="push_back<torch::nn::EmbeddingBagImpl>") public void push_back(@SharedPtr EmbeddingBagImpl module_ptr)
@Name(value="push_back<torch::nn::FoldImpl>") public void push_back(@SharedPtr FoldImpl module_ptr)
@Name(value="push_back<torch::nn::UnfoldImpl>") public void push_back(@SharedPtr UnfoldImpl module_ptr)
@Name(value="push_back<torch::nn::IdentityImpl>") public void push_back(@SharedPtr IdentityImpl module_ptr)
@Name(value="push_back<torch::nn::LinearImpl>") public void push_back(@SharedPtr LinearImpl module_ptr)
@Name(value="push_back<torch::nn::BilinearImpl>") public void push_back(@SharedPtr BilinearImpl module_ptr)
@Name(value="push_back<torch::nn::FlattenImpl>") public void push_back(@SharedPtr FlattenImpl module_ptr)
@Name(value="push_back<torch::nn::UnflattenImpl>") public void push_back(@SharedPtr UnflattenImpl module_ptr)
@Name(value="push_back<torch::nn::L1LossImpl>") public void push_back(@SharedPtr L1LossImpl module_ptr)
@Name(value="push_back<torch::nn::KLDivLossImpl>") public void push_back(@SharedPtr KLDivLossImpl module_ptr)
@Name(value="push_back<torch::nn::MSELossImpl>") public void push_back(@SharedPtr MSELossImpl module_ptr)
@Name(value="push_back<torch::nn::BCELossImpl>") public void push_back(@SharedPtr BCELossImpl module_ptr)
@Name(value="push_back<torch::nn::HingeEmbeddingLossImpl>") public void push_back(@SharedPtr HingeEmbeddingLossImpl module_ptr)
@Name(value="push_back<torch::nn::MultiMarginLossImpl>") public void push_back(@SharedPtr MultiMarginLossImpl module_ptr)
@Name(value="push_back<torch::nn::CosineEmbeddingLossImpl>") public void push_back(@SharedPtr CosineEmbeddingLossImpl module_ptr)
@Name(value="push_back<torch::nn::SmoothL1LossImpl>") public void push_back(@SharedPtr SmoothL1LossImpl module_ptr)
@Name(value="push_back<torch::nn::HuberLossImpl>") public void push_back(@SharedPtr HuberLossImpl module_ptr)
@Name(value="push_back<torch::nn::MultiLabelMarginLossImpl>") public void push_back(@SharedPtr MultiLabelMarginLossImpl module_ptr)
@Name(value="push_back<torch::nn::SoftMarginLossImpl>") public void push_back(@SharedPtr SoftMarginLossImpl module_ptr)
@Name(value="push_back<torch::nn::MultiLabelSoftMarginLossImpl>") public void push_back(@SharedPtr MultiLabelSoftMarginLossImpl module_ptr)
@Name(value="push_back<torch::nn::TripletMarginLossImpl>") public void push_back(@SharedPtr TripletMarginLossImpl module_ptr)
@Name(value="push_back<torch::nn::TripletMarginWithDistanceLossImpl>") public void push_back(@SharedPtr TripletMarginWithDistanceLossImpl module_ptr)
@Name(value="push_back<torch::nn::CTCLossImpl>") public void push_back(@SharedPtr CTCLossImpl module_ptr)
@Name(value="push_back<torch::nn::PoissonNLLLossImpl>") public void push_back(@SharedPtr PoissonNLLLossImpl module_ptr)
@Name(value="push_back<torch::nn::MarginRankingLossImpl>") public void push_back(@SharedPtr MarginRankingLossImpl module_ptr)
@Name(value="push_back<torch::nn::NLLLossImpl>") public void push_back(@SharedPtr NLLLossImpl module_ptr)
@Name(value="push_back<torch::nn::CrossEntropyLossImpl>") public void push_back(@SharedPtr CrossEntropyLossImpl module_ptr)
@Name(value="push_back<torch::nn::BCEWithLogitsLossImpl>") public void push_back(@SharedPtr BCEWithLogitsLossImpl module_ptr)
@Name(value="push_back<torch::nn::ReflectionPad1dImpl>") public void push_back(@SharedPtr ReflectionPad1dImpl module_ptr)
@Name(value="push_back<torch::nn::ReplicationPad1dImpl>") public void push_back(@SharedPtr ReplicationPad1dImpl module_ptr)
@Name(value="push_back<torch::nn::ConstantPad1dImpl>") public void push_back(@SharedPtr ConstantPad1dImpl module_ptr)
@Name(value="push_back<torch::nn::ZeroPad1dImpl>") public void push_back(@SharedPtr ZeroPad1dImpl module_ptr)
@Name(value="push_back<torch::nn::AvgPool1dImpl>") public void push_back(@SharedPtr AvgPool1dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxPool1dImpl>") public void push_back(@SharedPtr MaxPool1dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveAvgPool1dImpl>") public void push_back(@SharedPtr AdaptiveAvgPool1dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveMaxPool1dImpl>") public void push_back(@SharedPtr AdaptiveMaxPool1dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxUnpool1dImpl>") public void push_back(@SharedPtr MaxUnpool1dImpl module_ptr)
@Name(value="push_back<torch::nn::LPPool1dImpl>") public void push_back(@SharedPtr LPPool1dImpl module_ptr)
@Name(value="push_back<torch::nn::ReflectionPad2dImpl>") public void push_back(@SharedPtr ReflectionPad2dImpl module_ptr)
@Name(value="push_back<torch::nn::ReplicationPad2dImpl>") public void push_back(@SharedPtr ReplicationPad2dImpl module_ptr)
@Name(value="push_back<torch::nn::ConstantPad2dImpl>") public void push_back(@SharedPtr ConstantPad2dImpl module_ptr)
@Name(value="push_back<torch::nn::ZeroPad2dImpl>") public void push_back(@SharedPtr ZeroPad2dImpl module_ptr)
@Name(value="push_back<torch::nn::AvgPool2dImpl>") public void push_back(@SharedPtr AvgPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxPool2dImpl>") public void push_back(@SharedPtr MaxPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveAvgPool2dImpl>") public void push_back(@SharedPtr AdaptiveAvgPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveMaxPool2dImpl>") public void push_back(@SharedPtr AdaptiveMaxPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxUnpool2dImpl>") public void push_back(@SharedPtr MaxUnpool2dImpl module_ptr)
@Name(value="push_back<torch::nn::FractionalMaxPool2dImpl>") public void push_back(@SharedPtr FractionalMaxPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::LPPool2dImpl>") public void push_back(@SharedPtr LPPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::ReflectionPad3dImpl>") public void push_back(@SharedPtr ReflectionPad3dImpl module_ptr)
@Name(value="push_back<torch::nn::ReplicationPad3dImpl>") public void push_back(@SharedPtr ReplicationPad3dImpl module_ptr)
@Name(value="push_back<torch::nn::ConstantPad3dImpl>") public void push_back(@SharedPtr ConstantPad3dImpl module_ptr)
@Name(value="push_back<torch::nn::ZeroPad3dImpl>") public void push_back(@SharedPtr ZeroPad3dImpl module_ptr)
@Name(value="push_back<torch::nn::AvgPool3dImpl>") public void push_back(@SharedPtr AvgPool3dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxPool3dImpl>") public void push_back(@SharedPtr MaxPool3dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveAvgPool3dImpl>") public void push_back(@SharedPtr AdaptiveAvgPool3dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveMaxPool3dImpl>") public void push_back(@SharedPtr AdaptiveMaxPool3dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxUnpool3dImpl>") public void push_back(@SharedPtr MaxUnpool3dImpl module_ptr)
@Name(value="push_back<torch::nn::FractionalMaxPool3dImpl>") public void push_back(@SharedPtr FractionalMaxPool3dImpl module_ptr)
@Name(value="push_back<torch::nn::RNNImpl>") public void push_back(@SharedPtr RNNImpl module_ptr)
@Name(value="push_back<torch::nn::LSTMImpl>") public void push_back(@SharedPtr LSTMImpl module_ptr)
@Name(value="push_back<torch::nn::GRUImpl>") public void push_back(@SharedPtr GRUImpl module_ptr)
@Name(value="push_back<torch::nn::RNNCellImpl>") public void push_back(@SharedPtr RNNCellImpl module_ptr)
@Name(value="push_back<torch::nn::LSTMCellImpl>") public void push_back(@SharedPtr LSTMCellImpl module_ptr)
@Name(value="push_back<torch::nn::GRUCellImpl>") public void push_back(@SharedPtr GRUCellImpl module_ptr)
@Name(value="push_back<torch::nn::PixelShuffleImpl>") public void push_back(@SharedPtr PixelShuffleImpl module_ptr)
@Name(value="push_back<torch::nn::PixelUnshuffleImpl>") public void push_back(@SharedPtr PixelUnshuffleImpl module_ptr)
@Name(value="push_back<torch::nn::UpsampleImpl>") public void push_back(@SharedPtr UpsampleImpl module_ptr)
@Name(value="push_back<torch::nn::ELUImpl>") public void push_back(@SharedPtr ELUImpl module_ptr)
@Name(value="push_back<torch::nn::SELUImpl>") public void push_back(@SharedPtr SELUImpl module_ptr)
@Name(value="push_back<torch::nn::HardshrinkImpl>") public void push_back(@SharedPtr HardshrinkImpl module_ptr)
@Name(value="push_back<torch::nn::HardtanhImpl>") public void push_back(@SharedPtr HardtanhImpl module_ptr)
@Name(value="push_back<torch::nn::LeakyReLUImpl>") public void push_back(@SharedPtr LeakyReLUImpl module_ptr)
@Name(value="push_back<torch::nn::LogSigmoidImpl>") public void push_back(@SharedPtr LogSigmoidImpl module_ptr)
@Name(value="push_back<torch::nn::SoftmaxImpl>") public void push_back(@SharedPtr SoftmaxImpl module_ptr)
@Name(value="push_back<torch::nn::SoftminImpl>") public void push_back(@SharedPtr SoftminImpl module_ptr)
@Name(value="push_back<torch::nn::LogSoftmaxImpl>") public void push_back(@SharedPtr LogSoftmaxImpl module_ptr)
@Name(value="push_back<torch::nn::Softmax2dImpl>") public void push_back(@SharedPtr Softmax2dImpl module_ptr)
@Name(value="push_back<torch::nn::PReLUImpl>") public void push_back(@SharedPtr PReLUImpl module_ptr)
@Name(value="push_back<torch::nn::ReLUImpl>") public void push_back(@SharedPtr ReLUImpl module_ptr)
@Name(value="push_back<torch::nn::ReLU6Impl>") public void push_back(@SharedPtr ReLU6Impl module_ptr)
@Name(value="push_back<torch::nn::RReLUImpl>") public void push_back(@SharedPtr RReLUImpl module_ptr)
@Name(value="push_back<torch::nn::CELUImpl>") public void push_back(@SharedPtr CELUImpl module_ptr)
@Name(value="push_back<torch::nn::GLUImpl>") public void push_back(@SharedPtr GLUImpl module_ptr)
@Name(value="push_back<torch::nn::GELUImpl>") public void push_back(@SharedPtr GELUImpl module_ptr)
@Name(value="push_back<torch::nn::SiLUImpl>") public void push_back(@SharedPtr SiLUImpl module_ptr)
@Name(value="push_back<torch::nn::MishImpl>") public void push_back(@SharedPtr MishImpl module_ptr)
@Name(value="push_back<torch::nn::SigmoidImpl>") public void push_back(@SharedPtr SigmoidImpl module_ptr)
@Name(value="push_back<torch::nn::SoftplusImpl>") public void push_back(@SharedPtr SoftplusImpl module_ptr)
@Name(value="push_back<torch::nn::SoftshrinkImpl>") public void push_back(@SharedPtr SoftshrinkImpl module_ptr)
@Name(value="push_back<torch::nn::SoftsignImpl>") public void push_back(@SharedPtr SoftsignImpl module_ptr)
@Name(value="push_back<torch::nn::TanhImpl>") public void push_back(@SharedPtr TanhImpl module_ptr)
@Name(value="push_back<torch::nn::TanhshrinkImpl>") public void push_back(@SharedPtr TanhshrinkImpl module_ptr)
@Name(value="push_back<torch::nn::ThresholdImpl>") public void push_back(@SharedPtr ThresholdImpl module_ptr)
@Name(value="push_back<torch::nn::MultiheadAttentionImpl>") public void push_back(@SharedPtr MultiheadAttentionImpl module_ptr)
@Name(value="push_back<torch::nn::LayerNormImpl>") public void push_back(@SharedPtr LayerNormImpl module_ptr)
@Name(value="push_back<torch::nn::LocalResponseNormImpl>") public void push_back(@SharedPtr LocalResponseNormImpl module_ptr)
@Name(value="push_back<torch::nn::CrossMapLRN2dImpl>") public void push_back(@SharedPtr CrossMapLRN2dImpl module_ptr)
@Name(value="push_back<torch::nn::GroupNormImpl>") public void push_back(@SharedPtr GroupNormImpl module_ptr)
@Name(value="push_back<torch::nn::TransformerEncoderLayerImpl>") public void push_back(@SharedPtr TransformerEncoderLayerImpl module_ptr)
@Name(value="push_back<torch::nn::TransformerDecoderLayerImpl>") public void push_back(@SharedPtr TransformerDecoderLayerImpl module_ptr)
@Name(value="push_back<torch::nn::TransformerEncoderImpl>") public void push_back(@SharedPtr TransformerEncoderImpl module_ptr)
@Name(value="push_back<torch::nn::TransformerDecoderImpl>") public void push_back(@SharedPtr TransformerDecoderImpl module_ptr)
@Name(value="push_back<torch::nn::TransformerImpl>") public void push_back(@SharedPtr TransformerImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveLogSoftmaxWithLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr AdaptiveLogSoftmaxWithLossImpl module_ptr)
Adds a new named Module to the Sequential container.

@Name(value="push_back<torch::nn::AdaptiveLogSoftmaxWithLossImpl>") public void push_back(@StdString String name, @SharedPtr AdaptiveLogSoftmaxWithLossImpl module_ptr)
@Name(value="push_back<torch::nn::BatchNorm1dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr BatchNorm1dImpl module_ptr)
@Name(value="push_back<torch::nn::BatchNorm1dImpl>") public void push_back(@StdString String name, @SharedPtr BatchNorm1dImpl module_ptr)
@Name(value="push_back<torch::nn::InstanceNorm1dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr InstanceNorm1dImpl module_ptr)
@Name(value="push_back<torch::nn::InstanceNorm1dImpl>") public void push_back(@StdString String name, @SharedPtr InstanceNorm1dImpl module_ptr)
@Name(value="push_back<torch::nn::Conv1dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr Conv1dImpl module_ptr)
@Name(value="push_back<torch::nn::Conv1dImpl>") public void push_back(@StdString String name, @SharedPtr Conv1dImpl module_ptr)
@Name(value="push_back<torch::nn::ConvTranspose1dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ConvTranspose1dImpl module_ptr)
@Name(value="push_back<torch::nn::ConvTranspose1dImpl>") public void push_back(@StdString String name, @SharedPtr ConvTranspose1dImpl module_ptr)
@Name(value="push_back<torch::nn::DropoutImpl>") public void push_back(@StdString BytePointer name, @SharedPtr DropoutImpl module_ptr)
@Name(value="push_back<torch::nn::DropoutImpl>") public void push_back(@StdString String name, @SharedPtr DropoutImpl module_ptr)
@Name(value="push_back<torch::nn::BatchNorm2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr BatchNorm2dImpl module_ptr)
@Name(value="push_back<torch::nn::BatchNorm2dImpl>") public void push_back(@StdString String name, @SharedPtr BatchNorm2dImpl module_ptr)
@Name(value="push_back<torch::nn::InstanceNorm2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr InstanceNorm2dImpl module_ptr)
@Name(value="push_back<torch::nn::InstanceNorm2dImpl>") public void push_back(@StdString String name, @SharedPtr InstanceNorm2dImpl module_ptr)
@Name(value="push_back<torch::nn::Conv2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr Conv2dImpl module_ptr)
@Name(value="push_back<torch::nn::Conv2dImpl>") public void push_back(@StdString String name, @SharedPtr Conv2dImpl module_ptr)
@Name(value="push_back<torch::nn::ConvTranspose2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ConvTranspose2dImpl module_ptr)
@Name(value="push_back<torch::nn::ConvTranspose2dImpl>") public void push_back(@StdString String name, @SharedPtr ConvTranspose2dImpl module_ptr)
@Name(value="push_back<torch::nn::Dropout2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr Dropout2dImpl module_ptr)
@Name(value="push_back<torch::nn::Dropout2dImpl>") public void push_back(@StdString String name, @SharedPtr Dropout2dImpl module_ptr)
@Name(value="push_back<torch::nn::BatchNorm3dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr BatchNorm3dImpl module_ptr)
@Name(value="push_back<torch::nn::BatchNorm3dImpl>") public void push_back(@StdString String name, @SharedPtr BatchNorm3dImpl module_ptr)
@Name(value="push_back<torch::nn::InstanceNorm3dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr InstanceNorm3dImpl module_ptr)
@Name(value="push_back<torch::nn::InstanceNorm3dImpl>") public void push_back(@StdString String name, @SharedPtr InstanceNorm3dImpl module_ptr)
@Name(value="push_back<torch::nn::Conv3dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr Conv3dImpl module_ptr)
@Name(value="push_back<torch::nn::Conv3dImpl>") public void push_back(@StdString String name, @SharedPtr Conv3dImpl module_ptr)
@Name(value="push_back<torch::nn::ConvTranspose3dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ConvTranspose3dImpl module_ptr)
@Name(value="push_back<torch::nn::ConvTranspose3dImpl>") public void push_back(@StdString String name, @SharedPtr ConvTranspose3dImpl module_ptr)
@Name(value="push_back<torch::nn::Dropout3dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr Dropout3dImpl module_ptr)
@Name(value="push_back<torch::nn::Dropout3dImpl>") public void push_back(@StdString String name, @SharedPtr Dropout3dImpl module_ptr)
@Name(value="push_back<torch::nn::AlphaDropoutImpl>") public void push_back(@StdString BytePointer name, @SharedPtr AlphaDropoutImpl module_ptr)
@Name(value="push_back<torch::nn::AlphaDropoutImpl>") public void push_back(@StdString String name, @SharedPtr AlphaDropoutImpl module_ptr)
@Name(value="push_back<torch::nn::FeatureAlphaDropoutImpl>") public void push_back(@StdString BytePointer name, @SharedPtr FeatureAlphaDropoutImpl module_ptr)
@Name(value="push_back<torch::nn::FeatureAlphaDropoutImpl>") public void push_back(@StdString String name, @SharedPtr FeatureAlphaDropoutImpl module_ptr)
@Name(value="push_back<torch::nn::CosineSimilarityImpl>") public void push_back(@StdString BytePointer name, @SharedPtr CosineSimilarityImpl module_ptr)
@Name(value="push_back<torch::nn::CosineSimilarityImpl>") public void push_back(@StdString String name, @SharedPtr CosineSimilarityImpl module_ptr)
@Name(value="push_back<torch::nn::PairwiseDistanceImpl>") public void push_back(@StdString BytePointer name, @SharedPtr PairwiseDistanceImpl module_ptr)
@Name(value="push_back<torch::nn::PairwiseDistanceImpl>") public void push_back(@StdString String name, @SharedPtr PairwiseDistanceImpl module_ptr)
@Name(value="push_back<torch::nn::EmbeddingImpl>") public void push_back(@StdString BytePointer name, @SharedPtr EmbeddingImpl module_ptr)
@Name(value="push_back<torch::nn::EmbeddingImpl>") public void push_back(@StdString String name, @SharedPtr EmbeddingImpl module_ptr)
@Name(value="push_back<torch::nn::EmbeddingBagImpl>") public void push_back(@StdString BytePointer name, @SharedPtr EmbeddingBagImpl module_ptr)
@Name(value="push_back<torch::nn::EmbeddingBagImpl>") public void push_back(@StdString String name, @SharedPtr EmbeddingBagImpl module_ptr)
@Name(value="push_back<torch::nn::FoldImpl>") public void push_back(@StdString BytePointer name, @SharedPtr FoldImpl module_ptr)
@Name(value="push_back<torch::nn::FoldImpl>") public void push_back(@StdString String name, @SharedPtr FoldImpl module_ptr)
@Name(value="push_back<torch::nn::UnfoldImpl>") public void push_back(@StdString BytePointer name, @SharedPtr UnfoldImpl module_ptr)
@Name(value="push_back<torch::nn::UnfoldImpl>") public void push_back(@StdString String name, @SharedPtr UnfoldImpl module_ptr)
@Name(value="push_back<torch::nn::IdentityImpl>") public void push_back(@StdString BytePointer name, @SharedPtr IdentityImpl module_ptr)
@Name(value="push_back<torch::nn::IdentityImpl>") public void push_back(@StdString String name, @SharedPtr IdentityImpl module_ptr)
@Name(value="push_back<torch::nn::LinearImpl>") public void push_back(@StdString BytePointer name, @SharedPtr LinearImpl module_ptr)
@Name(value="push_back<torch::nn::LinearImpl>") public void push_back(@StdString String name, @SharedPtr LinearImpl module_ptr)
@Name(value="push_back<torch::nn::BilinearImpl>") public void push_back(@StdString BytePointer name, @SharedPtr BilinearImpl module_ptr)
@Name(value="push_back<torch::nn::BilinearImpl>") public void push_back(@StdString String name, @SharedPtr BilinearImpl module_ptr)
@Name(value="push_back<torch::nn::FlattenImpl>") public void push_back(@StdString BytePointer name, @SharedPtr FlattenImpl module_ptr)
@Name(value="push_back<torch::nn::FlattenImpl>") public void push_back(@StdString String name, @SharedPtr FlattenImpl module_ptr)
@Name(value="push_back<torch::nn::UnflattenImpl>") public void push_back(@StdString BytePointer name, @SharedPtr UnflattenImpl module_ptr)
@Name(value="push_back<torch::nn::UnflattenImpl>") public void push_back(@StdString String name, @SharedPtr UnflattenImpl module_ptr)
@Name(value="push_back<torch::nn::L1LossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr L1LossImpl module_ptr)
@Name(value="push_back<torch::nn::L1LossImpl>") public void push_back(@StdString String name, @SharedPtr L1LossImpl module_ptr)
@Name(value="push_back<torch::nn::KLDivLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr KLDivLossImpl module_ptr)
@Name(value="push_back<torch::nn::KLDivLossImpl>") public void push_back(@StdString String name, @SharedPtr KLDivLossImpl module_ptr)
@Name(value="push_back<torch::nn::MSELossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr MSELossImpl module_ptr)
@Name(value="push_back<torch::nn::MSELossImpl>") public void push_back(@StdString String name, @SharedPtr MSELossImpl module_ptr)
@Name(value="push_back<torch::nn::BCELossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr BCELossImpl module_ptr)
@Name(value="push_back<torch::nn::BCELossImpl>") public void push_back(@StdString String name, @SharedPtr BCELossImpl module_ptr)
@Name(value="push_back<torch::nn::HingeEmbeddingLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr HingeEmbeddingLossImpl module_ptr)
@Name(value="push_back<torch::nn::HingeEmbeddingLossImpl>") public void push_back(@StdString String name, @SharedPtr HingeEmbeddingLossImpl module_ptr)
@Name(value="push_back<torch::nn::MultiMarginLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr MultiMarginLossImpl module_ptr)
@Name(value="push_back<torch::nn::MultiMarginLossImpl>") public void push_back(@StdString String name, @SharedPtr MultiMarginLossImpl module_ptr)
@Name(value="push_back<torch::nn::CosineEmbeddingLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr CosineEmbeddingLossImpl module_ptr)
@Name(value="push_back<torch::nn::CosineEmbeddingLossImpl>") public void push_back(@StdString String name, @SharedPtr CosineEmbeddingLossImpl module_ptr)
@Name(value="push_back<torch::nn::SmoothL1LossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr SmoothL1LossImpl module_ptr)
@Name(value="push_back<torch::nn::SmoothL1LossImpl>") public void push_back(@StdString String name, @SharedPtr SmoothL1LossImpl module_ptr)
@Name(value="push_back<torch::nn::HuberLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr HuberLossImpl module_ptr)
@Name(value="push_back<torch::nn::HuberLossImpl>") public void push_back(@StdString String name, @SharedPtr HuberLossImpl module_ptr)
@Name(value="push_back<torch::nn::MultiLabelMarginLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr MultiLabelMarginLossImpl module_ptr)
@Name(value="push_back<torch::nn::MultiLabelMarginLossImpl>") public void push_back(@StdString String name, @SharedPtr MultiLabelMarginLossImpl module_ptr)
@Name(value="push_back<torch::nn::SoftMarginLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr SoftMarginLossImpl module_ptr)
@Name(value="push_back<torch::nn::SoftMarginLossImpl>") public void push_back(@StdString String name, @SharedPtr SoftMarginLossImpl module_ptr)
@Name(value="push_back<torch::nn::MultiLabelSoftMarginLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr MultiLabelSoftMarginLossImpl module_ptr)
@Name(value="push_back<torch::nn::MultiLabelSoftMarginLossImpl>") public void push_back(@StdString String name, @SharedPtr MultiLabelSoftMarginLossImpl module_ptr)
@Name(value="push_back<torch::nn::TripletMarginLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr TripletMarginLossImpl module_ptr)
@Name(value="push_back<torch::nn::TripletMarginLossImpl>") public void push_back(@StdString String name, @SharedPtr TripletMarginLossImpl module_ptr)
@Name(value="push_back<torch::nn::TripletMarginWithDistanceLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr TripletMarginWithDistanceLossImpl module_ptr)
@Name(value="push_back<torch::nn::TripletMarginWithDistanceLossImpl>") public void push_back(@StdString String name, @SharedPtr TripletMarginWithDistanceLossImpl module_ptr)
@Name(value="push_back<torch::nn::CTCLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr CTCLossImpl module_ptr)
@Name(value="push_back<torch::nn::CTCLossImpl>") public void push_back(@StdString String name, @SharedPtr CTCLossImpl module_ptr)
@Name(value="push_back<torch::nn::PoissonNLLLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr PoissonNLLLossImpl module_ptr)
@Name(value="push_back<torch::nn::PoissonNLLLossImpl>") public void push_back(@StdString String name, @SharedPtr PoissonNLLLossImpl module_ptr)
@Name(value="push_back<torch::nn::MarginRankingLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr MarginRankingLossImpl module_ptr)
@Name(value="push_back<torch::nn::MarginRankingLossImpl>") public void push_back(@StdString String name, @SharedPtr MarginRankingLossImpl module_ptr)
@Name(value="push_back<torch::nn::NLLLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr NLLLossImpl module_ptr)
@Name(value="push_back<torch::nn::NLLLossImpl>") public void push_back(@StdString String name, @SharedPtr NLLLossImpl module_ptr)
@Name(value="push_back<torch::nn::CrossEntropyLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr CrossEntropyLossImpl module_ptr)
@Name(value="push_back<torch::nn::CrossEntropyLossImpl>") public void push_back(@StdString String name, @SharedPtr CrossEntropyLossImpl module_ptr)
@Name(value="push_back<torch::nn::BCEWithLogitsLossImpl>") public void push_back(@StdString BytePointer name, @SharedPtr BCEWithLogitsLossImpl module_ptr)
@Name(value="push_back<torch::nn::BCEWithLogitsLossImpl>") public void push_back(@StdString String name, @SharedPtr BCEWithLogitsLossImpl module_ptr)
@Name(value="push_back<torch::nn::ReflectionPad1dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ReflectionPad1dImpl module_ptr)
@Name(value="push_back<torch::nn::ReflectionPad1dImpl>") public void push_back(@StdString String name, @SharedPtr ReflectionPad1dImpl module_ptr)
@Name(value="push_back<torch::nn::ReplicationPad1dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ReplicationPad1dImpl module_ptr)
@Name(value="push_back<torch::nn::ReplicationPad1dImpl>") public void push_back(@StdString String name, @SharedPtr ReplicationPad1dImpl module_ptr)
@Name(value="push_back<torch::nn::ConstantPad1dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ConstantPad1dImpl module_ptr)
@Name(value="push_back<torch::nn::ConstantPad1dImpl>") public void push_back(@StdString String name, @SharedPtr ConstantPad1dImpl module_ptr)
@Name(value="push_back<torch::nn::ZeroPad1dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ZeroPad1dImpl module_ptr)
@Name(value="push_back<torch::nn::ZeroPad1dImpl>") public void push_back(@StdString String name, @SharedPtr ZeroPad1dImpl module_ptr)
@Name(value="push_back<torch::nn::AvgPool1dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr AvgPool1dImpl module_ptr)
@Name(value="push_back<torch::nn::AvgPool1dImpl>") public void push_back(@StdString String name, @SharedPtr AvgPool1dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxPool1dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr MaxPool1dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxPool1dImpl>") public void push_back(@StdString String name, @SharedPtr MaxPool1dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveAvgPool1dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr AdaptiveAvgPool1dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveAvgPool1dImpl>") public void push_back(@StdString String name, @SharedPtr AdaptiveAvgPool1dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveMaxPool1dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr AdaptiveMaxPool1dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveMaxPool1dImpl>") public void push_back(@StdString String name, @SharedPtr AdaptiveMaxPool1dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxUnpool1dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr MaxUnpool1dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxUnpool1dImpl>") public void push_back(@StdString String name, @SharedPtr MaxUnpool1dImpl module_ptr)
@Name(value="push_back<torch::nn::LPPool1dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr LPPool1dImpl module_ptr)
@Name(value="push_back<torch::nn::LPPool1dImpl>") public void push_back(@StdString String name, @SharedPtr LPPool1dImpl module_ptr)
@Name(value="push_back<torch::nn::ReflectionPad2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ReflectionPad2dImpl module_ptr)
@Name(value="push_back<torch::nn::ReflectionPad2dImpl>") public void push_back(@StdString String name, @SharedPtr ReflectionPad2dImpl module_ptr)
@Name(value="push_back<torch::nn::ReplicationPad2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ReplicationPad2dImpl module_ptr)
@Name(value="push_back<torch::nn::ReplicationPad2dImpl>") public void push_back(@StdString String name, @SharedPtr ReplicationPad2dImpl module_ptr)
@Name(value="push_back<torch::nn::ConstantPad2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ConstantPad2dImpl module_ptr)
@Name(value="push_back<torch::nn::ConstantPad2dImpl>") public void push_back(@StdString String name, @SharedPtr ConstantPad2dImpl module_ptr)
@Name(value="push_back<torch::nn::ZeroPad2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ZeroPad2dImpl module_ptr)
@Name(value="push_back<torch::nn::ZeroPad2dImpl>") public void push_back(@StdString String name, @SharedPtr ZeroPad2dImpl module_ptr)
@Name(value="push_back<torch::nn::AvgPool2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr AvgPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::AvgPool2dImpl>") public void push_back(@StdString String name, @SharedPtr AvgPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxPool2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr MaxPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxPool2dImpl>") public void push_back(@StdString String name, @SharedPtr MaxPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveAvgPool2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr AdaptiveAvgPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveAvgPool2dImpl>") public void push_back(@StdString String name, @SharedPtr AdaptiveAvgPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveMaxPool2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr AdaptiveMaxPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveMaxPool2dImpl>") public void push_back(@StdString String name, @SharedPtr AdaptiveMaxPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxUnpool2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr MaxUnpool2dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxUnpool2dImpl>") public void push_back(@StdString String name, @SharedPtr MaxUnpool2dImpl module_ptr)
@Name(value="push_back<torch::nn::FractionalMaxPool2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr FractionalMaxPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::FractionalMaxPool2dImpl>") public void push_back(@StdString String name, @SharedPtr FractionalMaxPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::LPPool2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr LPPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::LPPool2dImpl>") public void push_back(@StdString String name, @SharedPtr LPPool2dImpl module_ptr)
@Name(value="push_back<torch::nn::ReflectionPad3dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ReflectionPad3dImpl module_ptr)
@Name(value="push_back<torch::nn::ReflectionPad3dImpl>") public void push_back(@StdString String name, @SharedPtr ReflectionPad3dImpl module_ptr)
@Name(value="push_back<torch::nn::ReplicationPad3dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ReplicationPad3dImpl module_ptr)
@Name(value="push_back<torch::nn::ReplicationPad3dImpl>") public void push_back(@StdString String name, @SharedPtr ReplicationPad3dImpl module_ptr)
@Name(value="push_back<torch::nn::ConstantPad3dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ConstantPad3dImpl module_ptr)
@Name(value="push_back<torch::nn::ConstantPad3dImpl>") public void push_back(@StdString String name, @SharedPtr ConstantPad3dImpl module_ptr)
@Name(value="push_back<torch::nn::ZeroPad3dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ZeroPad3dImpl module_ptr)
@Name(value="push_back<torch::nn::ZeroPad3dImpl>") public void push_back(@StdString String name, @SharedPtr ZeroPad3dImpl module_ptr)
@Name(value="push_back<torch::nn::AvgPool3dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr AvgPool3dImpl module_ptr)
@Name(value="push_back<torch::nn::AvgPool3dImpl>") public void push_back(@StdString String name, @SharedPtr AvgPool3dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxPool3dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr MaxPool3dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxPool3dImpl>") public void push_back(@StdString String name, @SharedPtr MaxPool3dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveAvgPool3dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr AdaptiveAvgPool3dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveAvgPool3dImpl>") public void push_back(@StdString String name, @SharedPtr AdaptiveAvgPool3dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveMaxPool3dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr AdaptiveMaxPool3dImpl module_ptr)
@Name(value="push_back<torch::nn::AdaptiveMaxPool3dImpl>") public void push_back(@StdString String name, @SharedPtr AdaptiveMaxPool3dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxUnpool3dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr MaxUnpool3dImpl module_ptr)
@Name(value="push_back<torch::nn::MaxUnpool3dImpl>") public void push_back(@StdString String name, @SharedPtr MaxUnpool3dImpl module_ptr)
@Name(value="push_back<torch::nn::FractionalMaxPool3dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr FractionalMaxPool3dImpl module_ptr)
@Name(value="push_back<torch::nn::FractionalMaxPool3dImpl>") public void push_back(@StdString String name, @SharedPtr FractionalMaxPool3dImpl module_ptr)
@Name(value="push_back<torch::nn::RNNImpl>") public void push_back(@StdString BytePointer name, @SharedPtr RNNImpl module_ptr)
@Name(value="push_back<torch::nn::RNNImpl>") public void push_back(@StdString String name, @SharedPtr RNNImpl module_ptr)
@Name(value="push_back<torch::nn::LSTMImpl>") public void push_back(@StdString BytePointer name, @SharedPtr LSTMImpl module_ptr)
@Name(value="push_back<torch::nn::LSTMImpl>") public void push_back(@StdString String name, @SharedPtr LSTMImpl module_ptr)
@Name(value="push_back<torch::nn::GRUImpl>") public void push_back(@StdString BytePointer name, @SharedPtr GRUImpl module_ptr)
@Name(value="push_back<torch::nn::GRUImpl>") public void push_back(@StdString String name, @SharedPtr GRUImpl module_ptr)
@Name(value="push_back<torch::nn::RNNCellImpl>") public void push_back(@StdString BytePointer name, @SharedPtr RNNCellImpl module_ptr)
@Name(value="push_back<torch::nn::RNNCellImpl>") public void push_back(@StdString String name, @SharedPtr RNNCellImpl module_ptr)
@Name(value="push_back<torch::nn::LSTMCellImpl>") public void push_back(@StdString BytePointer name, @SharedPtr LSTMCellImpl module_ptr)
@Name(value="push_back<torch::nn::LSTMCellImpl>") public void push_back(@StdString String name, @SharedPtr LSTMCellImpl module_ptr)
@Name(value="push_back<torch::nn::GRUCellImpl>") public void push_back(@StdString BytePointer name, @SharedPtr GRUCellImpl module_ptr)
@Name(value="push_back<torch::nn::GRUCellImpl>") public void push_back(@StdString String name, @SharedPtr GRUCellImpl module_ptr)
@Name(value="push_back<torch::nn::PixelShuffleImpl>") public void push_back(@StdString BytePointer name, @SharedPtr PixelShuffleImpl module_ptr)
@Name(value="push_back<torch::nn::PixelShuffleImpl>") public void push_back(@StdString String name, @SharedPtr PixelShuffleImpl module_ptr)
@Name(value="push_back<torch::nn::PixelUnshuffleImpl>") public void push_back(@StdString BytePointer name, @SharedPtr PixelUnshuffleImpl module_ptr)
@Name(value="push_back<torch::nn::PixelUnshuffleImpl>") public void push_back(@StdString String name, @SharedPtr PixelUnshuffleImpl module_ptr)
@Name(value="push_back<torch::nn::UpsampleImpl>") public void push_back(@StdString BytePointer name, @SharedPtr UpsampleImpl module_ptr)
@Name(value="push_back<torch::nn::UpsampleImpl>") public void push_back(@StdString String name, @SharedPtr UpsampleImpl module_ptr)
@Name(value="push_back<torch::nn::ELUImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ELUImpl module_ptr)
@Name(value="push_back<torch::nn::ELUImpl>") public void push_back(@StdString String name, @SharedPtr ELUImpl module_ptr)
@Name(value="push_back<torch::nn::SELUImpl>") public void push_back(@StdString BytePointer name, @SharedPtr SELUImpl module_ptr)
@Name(value="push_back<torch::nn::SELUImpl>") public void push_back(@StdString String name, @SharedPtr SELUImpl module_ptr)
@Name(value="push_back<torch::nn::HardshrinkImpl>") public void push_back(@StdString BytePointer name, @SharedPtr HardshrinkImpl module_ptr)
@Name(value="push_back<torch::nn::HardshrinkImpl>") public void push_back(@StdString String name, @SharedPtr HardshrinkImpl module_ptr)
@Name(value="push_back<torch::nn::HardtanhImpl>") public void push_back(@StdString BytePointer name, @SharedPtr HardtanhImpl module_ptr)
@Name(value="push_back<torch::nn::HardtanhImpl>") public void push_back(@StdString String name, @SharedPtr HardtanhImpl module_ptr)
@Name(value="push_back<torch::nn::LeakyReLUImpl>") public void push_back(@StdString BytePointer name, @SharedPtr LeakyReLUImpl module_ptr)
@Name(value="push_back<torch::nn::LeakyReLUImpl>") public void push_back(@StdString String name, @SharedPtr LeakyReLUImpl module_ptr)
@Name(value="push_back<torch::nn::LogSigmoidImpl>") public void push_back(@StdString BytePointer name, @SharedPtr LogSigmoidImpl module_ptr)
@Name(value="push_back<torch::nn::LogSigmoidImpl>") public void push_back(@StdString String name, @SharedPtr LogSigmoidImpl module_ptr)
@Name(value="push_back<torch::nn::SoftmaxImpl>") public void push_back(@StdString BytePointer name, @SharedPtr SoftmaxImpl module_ptr)
@Name(value="push_back<torch::nn::SoftmaxImpl>") public void push_back(@StdString String name, @SharedPtr SoftmaxImpl module_ptr)
@Name(value="push_back<torch::nn::SoftminImpl>") public void push_back(@StdString BytePointer name, @SharedPtr SoftminImpl module_ptr)
@Name(value="push_back<torch::nn::SoftminImpl>") public void push_back(@StdString String name, @SharedPtr SoftminImpl module_ptr)
@Name(value="push_back<torch::nn::LogSoftmaxImpl>") public void push_back(@StdString BytePointer name, @SharedPtr LogSoftmaxImpl module_ptr)
@Name(value="push_back<torch::nn::LogSoftmaxImpl>") public void push_back(@StdString String name, @SharedPtr LogSoftmaxImpl module_ptr)
@Name(value="push_back<torch::nn::Softmax2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr Softmax2dImpl module_ptr)
@Name(value="push_back<torch::nn::Softmax2dImpl>") public void push_back(@StdString String name, @SharedPtr Softmax2dImpl module_ptr)
@Name(value="push_back<torch::nn::PReLUImpl>") public void push_back(@StdString BytePointer name, @SharedPtr PReLUImpl module_ptr)
@Name(value="push_back<torch::nn::PReLUImpl>") public void push_back(@StdString String name, @SharedPtr PReLUImpl module_ptr)
@Name(value="push_back<torch::nn::ReLUImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ReLUImpl module_ptr)
@Name(value="push_back<torch::nn::ReLUImpl>") public void push_back(@StdString String name, @SharedPtr ReLUImpl module_ptr)
@Name(value="push_back<torch::nn::ReLU6Impl>") public void push_back(@StdString BytePointer name, @SharedPtr ReLU6Impl module_ptr)
@Name(value="push_back<torch::nn::ReLU6Impl>") public void push_back(@StdString String name, @SharedPtr ReLU6Impl module_ptr)
@Name(value="push_back<torch::nn::RReLUImpl>") public void push_back(@StdString BytePointer name, @SharedPtr RReLUImpl module_ptr)
@Name(value="push_back<torch::nn::RReLUImpl>") public void push_back(@StdString String name, @SharedPtr RReLUImpl module_ptr)
@Name(value="push_back<torch::nn::CELUImpl>") public void push_back(@StdString BytePointer name, @SharedPtr CELUImpl module_ptr)
@Name(value="push_back<torch::nn::CELUImpl>") public void push_back(@StdString String name, @SharedPtr CELUImpl module_ptr)
@Name(value="push_back<torch::nn::GLUImpl>") public void push_back(@StdString BytePointer name, @SharedPtr GLUImpl module_ptr)
@Name(value="push_back<torch::nn::GLUImpl>") public void push_back(@StdString String name, @SharedPtr GLUImpl module_ptr)
@Name(value="push_back<torch::nn::GELUImpl>") public void push_back(@StdString BytePointer name, @SharedPtr GELUImpl module_ptr)
@Name(value="push_back<torch::nn::GELUImpl>") public void push_back(@StdString String name, @SharedPtr GELUImpl module_ptr)
@Name(value="push_back<torch::nn::SiLUImpl>") public void push_back(@StdString BytePointer name, @SharedPtr SiLUImpl module_ptr)
@Name(value="push_back<torch::nn::SiLUImpl>") public void push_back(@StdString String name, @SharedPtr SiLUImpl module_ptr)
@Name(value="push_back<torch::nn::MishImpl>") public void push_back(@StdString BytePointer name, @SharedPtr MishImpl module_ptr)
@Name(value="push_back<torch::nn::MishImpl>") public void push_back(@StdString String name, @SharedPtr MishImpl module_ptr)
@Name(value="push_back<torch::nn::SigmoidImpl>") public void push_back(@StdString BytePointer name, @SharedPtr SigmoidImpl module_ptr)
@Name(value="push_back<torch::nn::SigmoidImpl>") public void push_back(@StdString String name, @SharedPtr SigmoidImpl module_ptr)
@Name(value="push_back<torch::nn::SoftplusImpl>") public void push_back(@StdString BytePointer name, @SharedPtr SoftplusImpl module_ptr)
@Name(value="push_back<torch::nn::SoftplusImpl>") public void push_back(@StdString String name, @SharedPtr SoftplusImpl module_ptr)
@Name(value="push_back<torch::nn::SoftshrinkImpl>") public void push_back(@StdString BytePointer name, @SharedPtr SoftshrinkImpl module_ptr)
@Name(value="push_back<torch::nn::SoftshrinkImpl>") public void push_back(@StdString String name, @SharedPtr SoftshrinkImpl module_ptr)
@Name(value="push_back<torch::nn::SoftsignImpl>") public void push_back(@StdString BytePointer name, @SharedPtr SoftsignImpl module_ptr)
@Name(value="push_back<torch::nn::SoftsignImpl>") public void push_back(@StdString String name, @SharedPtr SoftsignImpl module_ptr)
@Name(value="push_back<torch::nn::TanhImpl>") public void push_back(@StdString BytePointer name, @SharedPtr TanhImpl module_ptr)
@Name(value="push_back<torch::nn::TanhImpl>") public void push_back(@StdString String name, @SharedPtr TanhImpl module_ptr)
@Name(value="push_back<torch::nn::TanhshrinkImpl>") public void push_back(@StdString BytePointer name, @SharedPtr TanhshrinkImpl module_ptr)
@Name(value="push_back<torch::nn::TanhshrinkImpl>") public void push_back(@StdString String name, @SharedPtr TanhshrinkImpl module_ptr)
@Name(value="push_back<torch::nn::ThresholdImpl>") public void push_back(@StdString BytePointer name, @SharedPtr ThresholdImpl module_ptr)
@Name(value="push_back<torch::nn::ThresholdImpl>") public void push_back(@StdString String name, @SharedPtr ThresholdImpl module_ptr)
@Name(value="push_back<torch::nn::MultiheadAttentionImpl>") public void push_back(@StdString BytePointer name, @SharedPtr MultiheadAttentionImpl module_ptr)
@Name(value="push_back<torch::nn::MultiheadAttentionImpl>") public void push_back(@StdString String name, @SharedPtr MultiheadAttentionImpl module_ptr)
@Name(value="push_back<torch::nn::LayerNormImpl>") public void push_back(@StdString BytePointer name, @SharedPtr LayerNormImpl module_ptr)
@Name(value="push_back<torch::nn::LayerNormImpl>") public void push_back(@StdString String name, @SharedPtr LayerNormImpl module_ptr)
@Name(value="push_back<torch::nn::LocalResponseNormImpl>") public void push_back(@StdString BytePointer name, @SharedPtr LocalResponseNormImpl module_ptr)
@Name(value="push_back<torch::nn::LocalResponseNormImpl>") public void push_back(@StdString String name, @SharedPtr LocalResponseNormImpl module_ptr)
@Name(value="push_back<torch::nn::CrossMapLRN2dImpl>") public void push_back(@StdString BytePointer name, @SharedPtr CrossMapLRN2dImpl module_ptr)
@Name(value="push_back<torch::nn::CrossMapLRN2dImpl>") public void push_back(@StdString String name, @SharedPtr CrossMapLRN2dImpl module_ptr)
@Name(value="push_back<torch::nn::GroupNormImpl>") public void push_back(@StdString BytePointer name, @SharedPtr GroupNormImpl module_ptr)
@Name(value="push_back<torch::nn::GroupNormImpl>") public void push_back(@StdString String name, @SharedPtr GroupNormImpl module_ptr)
@Name(value="push_back<torch::nn::TransformerEncoderLayerImpl>") public void push_back(@StdString BytePointer name, @SharedPtr TransformerEncoderLayerImpl module_ptr)
@Name(value="push_back<torch::nn::TransformerEncoderLayerImpl>") public void push_back(@StdString String name, @SharedPtr TransformerEncoderLayerImpl module_ptr)
@Name(value="push_back<torch::nn::TransformerDecoderLayerImpl>") public void push_back(@StdString BytePointer name, @SharedPtr TransformerDecoderLayerImpl module_ptr)
@Name(value="push_back<torch::nn::TransformerDecoderLayerImpl>") public void push_back(@StdString String name, @SharedPtr TransformerDecoderLayerImpl module_ptr)
@Name(value="push_back<torch::nn::TransformerEncoderImpl>") public void push_back(@StdString BytePointer name, @SharedPtr TransformerEncoderImpl module_ptr)
@Name(value="push_back<torch::nn::TransformerEncoderImpl>") public void push_back(@StdString String name, @SharedPtr TransformerEncoderImpl module_ptr)
@Name(value="push_back<torch::nn::TransformerDecoderImpl>") public void push_back(@StdString BytePointer name, @SharedPtr TransformerDecoderImpl module_ptr)
@Name(value="push_back<torch::nn::TransformerDecoderImpl>") public void push_back(@StdString String name, @SharedPtr TransformerDecoderImpl module_ptr)
@Name(value="push_back<torch::nn::TransformerImpl>") public void push_back(@StdString BytePointer name, @SharedPtr TransformerImpl module_ptr)
@Name(value="push_back<torch::nn::TransformerImpl>") public void push_back(@StdString String name, @SharedPtr TransformerImpl module_ptr)
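The typed `push_back` overloads above all do the same two things: append the module to the `Sequential`'s ordered list and register it as a named submodule. A minimal, dependency-free Java sketch of that bookkeeping and of the chained `forward()` it enables (`MiniSequential` is a hypothetical illustration, not part of the generated bindings; each "module" is reduced to a tensor-to-tensor function):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

class MiniSequential {
    // Ordered module list, as Sequential stores its submodules.
    private final List<UnaryOperator<double[]>> modules = new ArrayList<>();
    // Name -> module registry, as push_back(name, module) maintains.
    private final Map<String, UnaryOperator<double[]>> named = new LinkedHashMap<>();

    // Mirrors push_back(name, module): append and register under a name.
    void pushBack(String name, UnaryOperator<double[]> module) {
        modules.add(module);
        named.put(name, module);
    }

    // Mirrors Sequential::forward: chain each module's output to the next's input.
    double[] forward(double[] input) {
        double[] x = input;
        for (UnaryOperator<double[]> m : modules) {
            x = m.apply(x);
        }
        return x;
    }

    public static void main(String[] args) {
        MiniSequential seq = new MiniSequential();
        seq.pushBack("scale", v -> new double[]{v[0] * 2.0});
        seq.pushBack("shift", v -> new double[]{v[0] + 1.0});
        System.out.println(seq.forward(new double[]{3.0})[0]);
    }
}
```

The ordered list drives `forward()`; the name registry is what lets transformations such as `to(torch::kCUDA)` reach every registered submodule.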
public void push_back(@ByVal AnyModule any_module)
Adds a type-erased AnyModule to the Sequential.
public void push_back(@StdString BytePointer name, @ByVal AnyModule any_module)
public void push_back(@StdString String name, @ByVal AnyModule any_module)
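The AnyModule overloads accept a type-erased wrapper, which is how one Sequential can hold modules of unrelated concrete types behind a single forward-callable interface. A rough illustration of the erasure idea in plain Java (`AnyBox`, `Relu`, and `Negate` are hypothetical stand-ins, not binding classes):

```java
import java.util.ArrayList;
import java.util.List;

class AnyModuleDemo {
    // Hypothetical type-erased wrapper, analogous to torch::nn::AnyModule:
    // callers see only forward(), not the concrete module type.
    interface AnyBox { double[] forward(double[] input); }

    // Two unrelated "typed" modules with different internals.
    static final class Relu {
        double[] run(double[] v) {
            double[] out = new double[v.length];
            for (int i = 0; i < v.length; i++) out[i] = Math.max(0.0, v[i]);
            return out;
        }
    }
    static final class Negate {
        double[] run(double[] v) {
            double[] out = new double[v.length];
            for (int i = 0; i < v.length; i++) out[i] = -v[i];
            return out;
        }
    }

    // Chain a heterogeneous list through the common interface.
    static double[] runAll(List<AnyBox> seq, double[] x) {
        for (AnyBox m : seq) x = m.forward(x);
        return x;
    }

    public static void main(String[] args) {
        List<AnyBox> seq = new ArrayList<>();
        // Erase the concrete types, as push_back(AnyModule) does.
        seq.add(new Relu()::run);
        seq.add(new Negate()::run);
        double[] y = runAll(seq, new double[]{-2.0, 3.0});
        System.out.println(y[0] + ", " + y[1]);
    }
}
```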
@ByVal @Cast(value="torch::nn::SequentialImpl::Iterator*") public AnyModuleVector.Iterator begin()
Returns an iterator to the start of the Sequential.
@ByVal @Cast(value="torch::nn::SequentialImpl::Iterator*") public AnyModuleVector.Iterator end()
Returns an iterator to the end of the Sequential.
@SharedPtr(value="torch::nn::Module") @ByVal public Module ptr(@Cast(value="size_t") long index)
Attempts to return a std::shared_ptr whose dynamic type is that of the underlying module at the given index. Throws an exception if the index is out of bounds.
@SharedPtr(value="torch::nn::Module") @ByVal @Name(value="operator []") public Module get(@Cast(value="size_t") long index)
Like ptr(index).
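The positional accessors behave like bounds-checked list access: `ptr(index)` (and `get`, which binds C++ `operator[]`) returns the submodule at that position and throws when the index is out of range. A dependency-free sketch of those semantics (`PositionalAccessDemo` and its string stand-ins for modules are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

class PositionalAccessDemo {
    // Stand-ins for registered submodules, in insertion order.
    static final List<Object> modules =
            new ArrayList<>(List.of("Linear(3, 4)", "Dropout(0.5)"));

    // Mirrors SequentialImpl::ptr(index): bounds-checked positional access.
    static Object ptr(long index) {
        if (index < 0 || index >= modules.size()) {
            // ptr() throws on an out-of-bounds index rather than returning null.
            throw new IndexOutOfBoundsException(
                    "index " + index + " out of range for size " + modules.size());
        }
        return modules.get((int) index);
    }

    public static void main(String[] args) {
        System.out.println(ptr(0));
        System.out.println(modules.size());    // size(): current module count
        System.out.println(modules.isEmpty()); // is_empty(): no modules registered?
        try {
            ptr(5); // out of bounds: throws, as ptr() does
        } catch (IndexOutOfBoundsException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```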
@Cast(value="size_t") @NoException(value=true) public long size()
The current size of the Sequential container.
@Cast(value="bool") @NoException(value=true) public boolean is_empty()
True if there are no modules in the Sequential.
Copyright © 2024. All rights reserved.