Articles in the permanent category


Binary Cross-Entropy Loss
Binary Cross-Entropy (BCE), also known as log loss, is a loss function used in binary or multi-label classification model training.
It's nearly identical to Negative …
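A minimal sketch of BCE in plain Python (the function name, clamping epsilon, and mean reduction are illustrative choices, not from the article):

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy over a batch of predicted probabilities."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)
```

Each example contributes `-log(p)` when the label is 1 and `-log(1 - p)` when it is 0, so confident wrong predictions are penalised heavily.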


Categorical Cross-Entropy Loss
Categorical Cross-Entropy Loss, also known as Softmax Loss, is a loss function used in multiclass classification model training. It applies the Softmax Activation Function …
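A self-contained sketch of softmax loss in plain Python (the helper names are illustrative; the stability trick of subtracting the max logit is a standard assumption, not stated in the excerpt):

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max logit for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def categorical_cross_entropy(logits, true_index):
    """Softmax loss: -log of the predicted probability of the true class."""
    probs = softmax(logits)
    return -math.log(probs[true_index])
```

With three equal logits the model assigns probability 1/3 to each class, giving a loss of log 3 regardless of which class is correct.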

Cross-Entropy
Cross-entropy measures the average number of bits required to identify an event if you had a coding scheme optimised for one probability distribution $q$ …
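A minimal sketch of that definition in plain Python (the function name is illustrative): encoding events drawn from $p$ with a code optimised for $q$ costs $-\sum_i p_i \log_2 q_i$ bits on average.

```python
import math

def cross_entropy_bits(p, q):
    """Average bits to encode events from p using a code optimised for q."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)
```

When $q = p$ this reduces to the entropy of $p$; any mismatched $q$ costs strictly more bits.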

Information Entropy
Entropy is a measure of uncertainty of a random variable's possible outcomes.
It's highest when there are many equally likely outcomes. As you introduce more …
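A short sketch of Shannon entropy in plain Python (the function name is illustrative; terms with zero probability are skipped since they contribute nothing):

```python
import math

def entropy_bits(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

A fair coin has 1 bit of entropy, four equally likely outcomes have 2 bits, and a certain outcome has 0, matching the claim that entropy is highest when outcomes are many and equally likely.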

Negative Log-Likelihood
Negative log-likelihood is a loss function used in multiclass classification.
Calculated as $-\log(\mathbf{y})$, where $\mathbf{y}$ …
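A minimal sketch in plain Python (the function name and the assumption that inputs are already probabilities are illustrative):

```python
import math

def negative_log_likelihood(probs, true_index):
    """NLL of the true class given a vector of predicted probabilities."""
    return -math.log(probs[true_index])
```

The loss is 0 when the model assigns probability 1 to the true class and grows without bound as that probability approaches 0.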

Softmax Activation Function
The Softmax function converts a vector of numbers into a vector of probabilities that sum to 1. It's applied to a model's outputs (or Logits …
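A sketch of softmax in plain Python (subtracting the max logit before exponentiating is a standard numerical-stability assumption, not stated in the excerpt):

```python
import math

def softmax(logits):
    """Convert a vector of real numbers into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

The outputs preserve the ordering of the inputs, so the largest logit gets the largest probability.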

Sigmoid Activation Function
The Sigmoid function squeezes numbers into a probability-like range between 0 and 1. Used in Binary Classification model architectures to compute loss on discrete …
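A one-line sketch of the sigmoid $\sigma(x) = 1 / (1 + e^{-x})$ in plain Python:

```python
import math

def sigmoid(x):
    """Map any real number into the open interval (0, 1)."""
    return 1 / (1 + math.exp(-x))
```

It is symmetric about 0.5, so `sigmoid(-x)` equals `1 - sigmoid(x)`, and large-magnitude inputs saturate near 0 or 1.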
