For any machine learning model, loss functions are a key component: they give us an objective for measuring the model's performance, and the model's weights are learned by minimizing the loss function. There are various loss types, from cross-entropy to mean squared error, Huber loss, and the hinge loss (commonly paired with L2 regularization, as in SVMs).

Cross-entropy loss (logarithmic loss) leads to well-behaved probabilistic outputs, allowing us to find the maximum likelihood estimate of our model's parameters. Hinge loss, by contrast, does not help with probabilistic estimation. Instead, it punishes misclassification, which makes it good at estimating margins. The margin in hinge loss is the smallest distance between the separating line (or hyperplane) and the data points it divides into classes. As a result, the hinge loss leads to better accuracy and sparsity, at the cost of less sensitivity to probabilities.
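To make the contrast concrete, here is a minimal NumPy sketch comparing the two losses on a few toy examples. The labels, scores, and the sigmoid mapping to probabilities are illustrative assumptions, not something from the original post:

```python
import numpy as np

# Labels in {-1, +1} for hinge loss; raw model scores f(x).
y = np.array([1, -1, 1, -1])
scores = np.array([2.3, -0.4, 0.1, 1.5])  # last example is misclassified

# Hinge loss: max(0, 1 - y * f(x)). It is exactly zero once an example
# lies beyond the margin, so well-classified points contribute nothing.
hinge = np.maximum(0.0, 1.0 - y * scores)

# Cross-entropy (log) loss on sigmoid probabilities, labels mapped to {0, 1}.
# Every example contributes a nonzero loss, which keeps the outputs
# behaving like calibrated probabilities.
probs = 1.0 / (1.0 + np.exp(-scores))
y01 = (y + 1) // 2
cross_entropy = -(y01 * np.log(probs) + (1 - y01) * np.log(1 - probs))

print("hinge:        ", hinge)          # [0.  0.6 0.9 2.5]
print("cross-entropy:", cross_entropy)  # nonzero even for confident correct points
```

Note how the hinge loss is exactly zero for the confidently correct first example, while cross-entropy still assigns it a small penalty: this is the trade-off between sparsity and probability estimation described above.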