Articles in the permanent category
Binary Cross-Entropy Loss
Binary Cross-Entropy (BCE), also known as log loss, is a loss function used to train binary and multi-label classification models.
It's nearly identical to Negative …
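A minimal NumPy sketch of the formula the excerpt describes; the function name and the epsilon clipping are illustrative assumptions, not taken from the article:
```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean BCE over a batch of independent binary (or multi-label) targets."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # keep log() away from 0
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Two samples: predicted probabilities against 0/1 labels
print(binary_cross_entropy(np.array([1.0, 0.0]), np.array([0.9, 0.2])))  # ≈ 0.164
```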
-
Categorical Cross-Entropy Loss
Categorical Cross-Entropy, also known as Softmax Loss, is a loss function used to train multiclass classification models. It applies the Softmax Activation Function …
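A hedged sketch of the pipeline the excerpt names (softmax over the logits, then cross-entropy against one-hot labels); the function names are my own, not the article's:
```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def categorical_cross_entropy(y_true_onehot, logits):
    """Mean loss: softmax the logits, then -log probability of the true class."""
    probs = softmax(logits)
    return -np.mean(np.sum(y_true_onehot * np.log(probs), axis=-1))

logits = np.array([[2.0, 0.5, -1.0]])
y_true = np.array([[1.0, 0.0, 0.0]])  # one-hot: class 0 is correct
print(categorical_cross_entropy(y_true, logits))  # ≈ 0.24
```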
-
Cross-Entropy
Cross-entropy measures the average number of bits required to identify an event if you had a coding scheme optimised for one probability distribution while the events actually follow another.
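A small worked example of that definition (the helper name is illustrative): coding events from p with a code optimised for p costs exactly the entropy of p, while optimising for the wrong distribution q costs extra bits.
```python
import numpy as np

def cross_entropy_bits(p, q):
    """H(p, q) = -sum_i p_i * log2(q_i): average bits to identify an event
    drawn from p using a code optimised for q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return -np.sum(p * np.log2(q))

p = [0.5, 0.25, 0.25]   # distribution the events actually follow
q = [0.25, 0.25, 0.5]   # distribution the code was optimised for
print(cross_entropy_bits(p, p))  # 1.5 bits: equals the entropy of p
print(cross_entropy_bits(p, q))  # 1.75 bits: the mismatch costs 0.25 extra bits
```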
-
Information Entropy
Entropy is a measure of the uncertainty of a random variable's possible outcomes.
It's highest when there are many equally likely outcomes. As you introduce more …
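A quick numeric illustration of those claims, as a sketch (the function name is mine): a fair coin maximises uncertainty for two outcomes, a lopsided coin has almost none, and adding more equally likely outcomes raises entropy further.
```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy H(p) = -sum_i p_i * log2(p_i), in bits."""
    p = np.asarray(p, float)
    p = p[p > 0]  # treat 0 * log(0) as 0
    return -np.sum(p * np.log2(p))

print(entropy_bits([0.5, 0.5]))                # 1.0 bit: fair coin, maximal for 2 outcomes
print(entropy_bits([0.99, 0.01]))              # ≈ 0.08 bits: near-certain outcome
print(entropy_bits([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits: more equally likely outcomes
```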
-
Negative Log-Likelihood
Negative log-likelihood is a loss function used in multi-class classification.
Calculated as −log(ŷ_c), where ŷ_c is the probability the model assigns to the true class c …
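A one-sample sketch of that formula (the function name is assumed, not from the article):
```python
import numpy as np

def negative_log_likelihood(probs, true_class):
    """NLL for one sample: -log of the probability assigned to the true class."""
    return -np.log(probs[true_class])

probs = np.array([0.7, 0.2, 0.1])  # model's predicted class probabilities
print(negative_log_likelihood(probs, 0))  # ≈ 0.36: confident and correct, low loss
print(negative_log_likelihood(probs, 2))  # ≈ 2.30: true class got little mass, high loss
```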
-
Softmax Activation Function
The Softmax function converts a vector of numbers into a vector of probabilities that sum to 1. It's applied to a model's outputs (or Logits …
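A minimal sketch of the conversion the excerpt describes; the max-subtraction trick is a standard stability choice I'm assuming, not necessarily the article's:
```python
import numpy as np

def softmax(logits):
    z = logits - np.max(logits)  # subtracting the max avoids overflow in exp()
    e = np.exp(z)
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs)        # ≈ [0.659, 0.242, 0.099]
print(probs.sum())  # 1.0: the outputs form a probability distribution
```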
-
Sigmoid Activation Function
The Sigmoid function squeezes numbers into a probability-like range between 0 and 1. Used in Binary Classification model architectures to compute loss on discrete …
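A minimal sketch of σ(x) = 1 / (1 + e^(−x)), illustrating the squeeze into (0, 1):
```python
import numpy as np

def sigmoid(x):
    """Map any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(0.0))   # 0.5: no evidence either way
print(sigmoid(4.0))   # ≈ 0.982: large positive input, probability near 1
print(sigmoid(-4.0))  # ≈ 0.018: large negative input, probability near 0
```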
-