Package | Description
---|---
org.bytedeco.caffe |
Modifier and Type | Class and Description
---|---
class | FloatAbsValLayer<br>Computes y = \|x\|.
class | FloatBNLLLayer<br>Computes y = x + \log(1 + \exp(-x)) if x > 0; y = \log(1 + \exp(x)) otherwise.
class | FloatCuDNNReLULayer<br>CuDNN acceleration of ReLULayer.
class | FloatCuDNNSigmoidLayer<br>CuDNN acceleration of SigmoidLayer.
class | FloatCuDNNTanHLayer<br>CuDNN acceleration of TanHLayer.
class | FloatDropoutLayer<br>During training only, sets a random portion of x to 0, adjusting the rest of the vector magnitude accordingly.
class | FloatExpLayer<br>Computes y = \gamma^{\alpha x + \beta}, as specified by the scale \alpha, shift \beta, and base \gamma.
class | FloatPowerLayer<br>Computes y = (\alpha x + \beta)^\gamma, as specified by the scale \alpha, shift \beta, and power \gamma.
class | FloatPReLULayer<br>Parameterized Rectified Linear Unit non-linearity y_i = \max(0, x_i) + a_i \min(0, x_i).
class | FloatReLULayer<br>Rectified Linear Unit non-linearity y = \max(0, x).
class | FloatSigmoidLayer<br>Sigmoid function non-linearity y = (1 + \exp(-x))^{-1}, a classic choice in neural networks.
class | FloatTanHLayer<br>TanH hyperbolic tangent non-linearity y = \frac{\exp(2x) - 1}{\exp(2x) + 1}, popular in auto-encoders.
class | FloatThresholdLayer<br>Tests whether the input exceeds a threshold: outputs 1 for inputs above threshold; 0 otherwise.
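The formulas above are all simple element-wise transforms. As a rough illustration of what each layer computes per input value, here is a plain-Java sketch of a few of them. This is not the org.bytedeco.caffe API (the actual layers run inside native Caffe and operate on blobs); the class and method names below are our own, chosen for illustration only.

```java
// Illustrative element-wise versions of some formulas from the table above.
// NOT the org.bytedeco.caffe layer API; names here are hypothetical.
public class NeuronFormulas {
    // ReLULayer: y = max(0, x)
    static float relu(float x) { return Math.max(0f, x); }

    // PReLULayer: y = max(0, x) + a * min(0, x), where a is the learned slope
    static float prelu(float x, float a) {
        return Math.max(0f, x) + a * Math.min(0f, x);
    }

    // SigmoidLayer: y = (1 + exp(-x))^(-1)
    static float sigmoid(float x) {
        return 1f / (1f + (float) Math.exp(-x));
    }

    // BNLLLayer: y = x + log(1 + exp(-x)) if x > 0; log(1 + exp(x)) otherwise
    static float bnll(float x) {
        return x > 0 ? x + (float) Math.log1p(Math.exp(-x))
                     : (float) Math.log1p(Math.exp(x));
    }

    // PowerLayer: y = (alpha * x + beta)^gamma
    static float power(float x, float alpha, float beta, float gamma) {
        return (float) Math.pow(alpha * x + beta, gamma);
    }

    public static void main(String[] args) {
        System.out.println(relu(-3f));             // 0.0
        System.out.println(prelu(-2f, 0.25f));     // -0.5
        System.out.println(sigmoid(0f));           // 0.5
        System.out.println(bnll(0f));              // log(2), about 0.6931
        System.out.println(power(2f, 1f, 0f, 2f)); // 4.0
    }
}
```

Note how PReLULayer reduces to ReLULayer when the slope a is 0, matching the table's definitions.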
Copyright © 2022. All rights reserved.