Package | Description
---|---
org.bytedeco.caffe |
Modifier and Type | Class and Description
---|---
`class` | `FloatContrastiveLossLayer` — Computes the contrastive loss $E = \frac{1}{2N} \sum\limits_{n=1}^N y\, d^2 + \left(1-y\right) \max\left(margin-d, 0\right)^2$ where $d = \left\lVert a_n - b_n \right\rVert_2$.
`class` | `FloatEuclideanLossLayer` — Computes the Euclidean (L2) loss $E = \frac{1}{2N} \sum\limits_{n=1}^N \left\lVert \hat{y}_n - y_n \right\rVert_2^2$ for real-valued regression tasks.
`class` | `FloatHingeLossLayer` — Computes the hinge loss for a one-of-many classification task.
`class` | `FloatInfogainLossLayer` — A generalization of MultinomialLogisticLossLayer that takes an "information gain" (infogain) matrix specifying the "value" of all label pairs.
`class` | `FloatMultinomialLogisticLossLayer` — Computes the multinomial logistic loss for a one-of-many classification task, directly taking a predicted probability distribution as input.
`class` | `FloatSigmoidCrossEntropyLossLayer` — Computes the cross-entropy (logistic) loss $E = \frac{-1}{N} \sum\limits_{n=1}^N \left[ p_n \log \hat{p}_n + (1 - p_n) \log(1 - \hat{p}_n) \right]$, often used for predicting targets interpreted as probabilities.
`class` | `FloatSoftmaxWithLossLayer` — Computes the multinomial logistic loss for a one-of-many classification task, passing real-valued predictions through a softmax to get a probability distribution over classes.
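The loss formulas above can be checked numerically. The sketch below is a minimal pure-Python illustration of three of them, not part of the org.bytedeco.caffe API; the function names and the `margin=1.0` default are assumptions for illustration (Caffe's actual layers operate on blobs and are configured via layer parameters).

```python
import math

def contrastive_loss(a, b, y, margin=1.0):
    # E = 1/(2N) * sum( y_n * d_n^2 + (1 - y_n) * max(margin - d_n, 0)^2 ),
    # with d_n = ||a_n - b_n||_2  (illustrative helper, not the Caffe API)
    N = len(a)
    total = 0.0
    for an, bn, yn in zip(a, b, y):
        d = math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(an, bn)))
        total += yn * d ** 2 + (1 - yn) * max(margin - d, 0.0) ** 2
    return total / (2 * N)

def euclidean_loss(pred, target):
    # E = 1/(2N) * sum ||y_hat_n - y_n||_2^2
    N = len(pred)
    return sum(sum((p - t) ** 2 for p, t in zip(pn, tn))
               for pn, tn in zip(pred, target)) / (2 * N)

def sigmoid_cross_entropy_loss(p, p_hat):
    # E = -1/N * sum( p_n * log(p_hat_n) + (1 - p_n) * log(1 - p_hat_n) )
    N = len(p)
    return -sum(pn * math.log(qn) + (1 - pn) * math.log(1 - qn)
                for pn, qn in zip(p, p_hat)) / N

# Similar pair at distance d = 5 with margin 1: E = 5^2 / 2 = 12.5
print(contrastive_loss([[0.0, 0.0]], [[3.0, 4.0]], [1]))    # 12.5
# Single prediction (1, 2) vs. target (0, 0): E = (1 + 4) / 2 = 2.5
print(euclidean_loss([[1.0, 2.0]], [[0.0, 0.0]]))           # 2.5
# Target 1 predicted at 0.5: E = -log(0.5) = log 2 ≈ 0.6931
print(sigmoid_cross_entropy_loss([1.0], [0.5]))
```

Caffe's layers compute the same quantities over CPU/GPU blobs during the forward pass; this sketch only mirrors the arithmetic in the table.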
Copyright © 2022. All rights reserved.