Package | Description |
---|---|
org.bytedeco.caffe | |

Modifier and Type | Class and Description |
---|---|
class | FloatAbsValLayer: Computes y = \|x\|. |
class | FloatAccuracyLayer: Computes the classification accuracy for a one-of-many classification task. |
class | FloatArgMaxLayer: Computes the index of the K max values for each datum across all dimensions (C \times H \times W). |
class | FloatBaseConvolutionLayer: Abstract base class that factors out the BLAS code common to ConvolutionLayer and DeconvolutionLayer. |
class | FloatBaseDataLayer: Provides a base for data layers that feed blobs to the Net. |
class | FloatBasePrefetchingDataLayer |
class | FloatBatchNormLayer: Normalizes the input to have 0-mean and/or unit (1) variance across the batch. |
class | FloatBatchReindexLayer: Indexes into the input blob along its first axis. |
class | FloatBNLLLayer: Computes y = x + \log(1 + \exp(-x)) if x > 0; y = \log(1 + \exp(x)) otherwise. |
class | FloatConcatLayer: Takes at least two Blobs and concatenates them along either the num or channel dimension, outputting the result. |
class | FloatContrastiveLossLayer: Computes the contrastive loss E = \frac{1}{2N} \sum\limits_{n=1}^N \left(y\right) d^2 + \left(1-y\right) \max\left(margin-d, 0\right)^2, where d = \|a_n - b_n\|_2. |
class | FloatConvolutionLayer: Convolves the input image with a bank of learned filters, and (optionally) adds biases. |
class | FloatCropLayer: Takes a Blob and crops it to the shape specified by the second input Blob, across all dimensions after the specified axis. |
class | FloatCuDNNConvolutionLayer |
class | FloatCuDNNLCNLayer |
class | FloatCuDNNLRNLayer |
class | FloatCuDNNPoolingLayer |
class | FloatCuDNNReLULayer: cuDNN acceleration of ReLULayer. |
class | FloatCuDNNSigmoidLayer: cuDNN acceleration of SigmoidLayer. |
class | FloatCuDNNSoftmaxLayer: cuDNN implementation of SoftmaxLayer. |
class | FloatCuDNNTanHLayer: cuDNN acceleration of TanHLayer. |
class | FloatDataLayer |
class | FloatDeconvolutionLayer: Convolves the input with a bank of learned filters, and (optionally) adds biases, treating filters and convolution parameters in the opposite sense to ConvolutionLayer. |
class | FloatDropoutLayer: During training only, sets a random portion of x to 0, adjusting the rest of the vector magnitude accordingly. |
class | FloatDummyDataLayer: Provides data to the Net generated by a Filler. |
class | FloatEltwiseLayer: Computes elementwise operations, such as product and sum, along multiple input Blobs. |
class | FloatEmbedLayer: A layer for learning "embeddings" of one-hot vector input. |
class | FloatEuclideanLossLayer: Computes the Euclidean (L2) loss E = \frac{1}{2N} \sum\limits_{n=1}^N \|\hat{y}_n - y_n\|_2^2 for real-valued regression tasks. |
class | FloatExpLayer: Computes y = \gamma^{\alpha x + \beta}, as specified by the scale \alpha, shift \beta, and base \gamma. |
class | FloatFilterLayer: Takes two or more Blobs, interprets the last Blob as a selector, and filters the remaining Blobs accordingly with the selector data (0 means the corresponding item is filtered out; non-zero means it stays). |
class | FloatFlattenLayer: Reshapes the input Blob into flat vectors. |
class | FloatHDF5DataLayer: Provides data to the Net from HDF5 files. |
class | FloatHDF5OutputLayer: Writes blobs to disk as HDF5 files. |
class | FloatHingeLossLayer: Computes the hinge loss for a one-of-many classification task. |
class | FloatIm2colLayer: A helper for image operations that rearranges image regions into column vectors. |
class | FloatImageDataLayer: Provides data to the Net from image files. |
class | FloatInfogainLossLayer: A generalization of MultinomialLogisticLossLayer that takes an "information gain" (infogain) matrix specifying the "value" of all label pairs. |
class | FloatInnerProductLayer: Also known as a "fully-connected" layer; computes an inner product with a set of learned weights, and (optionally) adds biases. |
class | FloatInputLayer: Provides data to the Net by assigning tops directly. |
class | FloatLossLayer: An interface for Layers that take two Blobs as input, usually (1) predictions and (2) ground-truth labels, and output a singleton Blob representing the loss. |
class | FloatLRNLayer: Normalizes the input in a local region across or within feature maps. |
class | FloatLSTMLayer: Processes sequential inputs using a "Long Short-Term Memory" (LSTM) [1] style recurrent neural network (RNN). |
class | FloatMemoryDataLayer: Provides data to the Net from memory. |
class | FloatMultinomialLogisticLossLayer: Computes the multinomial logistic loss for a one-of-many classification task, directly taking a predicted probability distribution as input. |
class | FloatMVNLayer: Normalizes the input to have 0-mean and/or unit (1) variance. |
class | FloatNeuronLayer: An interface for layers that take one blob as input (x) and produce one equally-sized blob as output (y), where each element of the output depends only on the corresponding input element. |
class | FloatParameterLayer |
class | FloatPoolingLayer: Pools the input image by taking the max, average, etc. |
class | FloatPowerLayer: Computes y = (\alpha x + \beta)^\gamma, as specified by the scale \alpha, shift \beta, and power \gamma. |
class | FloatPReLULayer: Parameterized Rectified Linear Unit non-linearity y_i = \max(0, x_i) + a_i \min(0, x_i). |
class | FloatRecurrentLayer: An abstract class for implementing recurrent behavior inside of an unrolled network. |
class | FloatReductionLayer: Computes "reductions", operations that return a scalar output Blob for an input Blob of arbitrary size, such as the sum, absolute sum, and sum of squares. |
class | FloatReLULayer: Rectified Linear Unit non-linearity y = \max(0, x). |
class | FloatReshapeLayer |
class | FloatRNNLayer: Processes time-varying inputs using a simple recurrent neural network (RNN). |
class | FloatSigmoidCrossEntropyLossLayer: Computes the cross-entropy (logistic) loss E = \frac{-1}{N} \sum\limits_{n=1}^N \left[ p_n \log \hat{p}_n + (1 - p_n) \log(1 - \hat{p}_n) \right], often used for predicting targets interpreted as probabilities. |
class | FloatSigmoidLayer: Sigmoid function non-linearity y = (1 + \exp(-x))^{-1}, a classic choice in neural networks. |
class | FloatSilenceLayer: Ignores bottom blobs while producing no top blobs. |
class | FloatSliceLayer: Takes a Blob and slices it along either the num or channel dimension, outputting multiple sliced Blob results. |
class | FloatSoftmaxLayer: Computes the softmax function. |
class | FloatSoftmaxWithLossLayer: Computes the multinomial logistic loss for a one-of-many classification task, passing real-valued predictions through a softmax to get a probability distribution over classes. |
class | FloatSplitLayer: Creates a "split" path in the network by copying the bottom Blob into multiple top Blobs to be used by multiple consuming layers. |
class | FloatSPPLayer: Does spatial pyramid pooling on the input image by taking the max, average, etc. |
class | FloatTanHLayer: TanH hyperbolic tangent non-linearity y = \frac{\exp(2x) - 1}{\exp(2x) + 1}, popular in auto-encoders. |
class | FloatThresholdLayer: Tests whether the input exceeds a threshold: outputs 1 for inputs above the threshold; 0 otherwise. |
class | FloatTileLayer: Copies a Blob along specified dimensions. |
class | FloatWindowDataLayer: Provides data to the Net from windows of image files, specified by a window data file. |

Modifier and Type | Method and Description |
---|---|
`<L extends FloatLayer>` | `FloatNet.layer_by_name(Class<L> cls, BytePointer layer_name)` |
`<L extends FloatLayer>` | `FloatNet.layer_by_name(Class<L> cls, String layer_name)` |

Modifier and Type | Method and Description |
---|---|
`FloatLayer` | `FloatLayerRegistry.Creator.call(LayerParameter arg0)` |
`FloatLayer` | `FloatLayerRegisterer.Creator_LayerParameter.call(LayerParameter arg0)` |
`static FloatLayer` | `FloatLayerRegistry.CreateLayer(LayerParameter param)` |
`FloatLayer[]` | `FloatLayerSharedVector.get()` |
`FloatLayer` | `FloatLayerSharedVector.Iterator.get()` |
`FloatLayer` | `FloatLayerSharedVector.get(long i)` |
`FloatLayer` | `FloatNet.layer_by_name(BytePointer layer_name)` |
`FloatLayer` | `FloatNet.layer_by_name(String layer_name)` |
`FloatLayer` | `FloatLayerSharedVector.pop_back()` |

Modifier and Type | Method and Description |
---|---|
`FloatLayerSharedVector.Iterator` | `FloatLayerSharedVector.insert(FloatLayerSharedVector.Iterator pos, FloatLayer value)` |
`FloatLayerSharedVector` | `FloatLayerSharedVector.push_back(FloatLayer value)` |
`FloatLayerSharedVector` | `FloatLayerSharedVector.put(FloatLayer... array)` |
`FloatLayerSharedVector` | `FloatLayerSharedVector.put(FloatLayer value)` |
`FloatLayerSharedVector` | `FloatLayerSharedVector.put(long i, FloatLayer value)` |

Constructor and Description |
---|
`FloatLayerSharedVector(FloatLayer... array)` |
`FloatLayerSharedVector(FloatLayer value)` |
Copyright © 2022. All rights reserved.