LogSumExp
Smooth approximation to the maximum function
The LogSumExp (LSE) (also called RealSoftMax[1] or multivariable softplus) function is a smooth maximum – a smooth approximation to the maximum function, mainly used by machine learning algorithms.[2] It is defined as the logarithm of the sum of the exponentials of the arguments:

$$\mathrm{LSE}(x_1, \dots, x_n) = \log\left(\exp(x_1) + \cdots + \exp(x_n)\right).$$
The LogSumExp function domain is $\mathbb{R}^n$, the real coordinate space, and its codomain is $\mathbb{R}$, the real line. It is an approximation to the maximum with the following bounds:

$$\max\{x_1, \dots, x_n\} \le \mathrm{LSE}(x_1, \dots, x_n) \le \max\{x_1, \dots, x_n\} + \log(n).$$
The first inequality is strict unless $n = 1$. The second inequality is strict unless all arguments are equal. (Proof: Let $x^* = \max_i x_i$. Then $\exp(x^*) \le \sum_{i=1}^n \exp(x_i) \le n \exp(x^*)$. Applying the logarithm to the inequality gives the result.)
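As a quick numerical illustration (a minimal sketch, not part of the original article; the helper name `lse` is ours), the bounds can be checked directly:

```python
import math

def lse(xs):
    # Naive log-sum-exp: the logarithm of the sum of exponentials.
    return math.log(sum(math.exp(x) for x in xs))

xs = [1.0, 2.0, 3.0]
m = max(xs)
# max(x) <= LSE(x) <= max(x) + log(n)
print(m, lse(xs), m + math.log(len(xs)))  # 3.0  3.4076...  4.0986...
```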
In addition, we can scale the function to make the bounds tighter. Consider the function $\frac{1}{t}\mathrm{LSE}(t x_1, \dots, t x_n)$ for $t > 0$. Then

$$\max\{x_1, \dots, x_n\} \le \frac{1}{t}\mathrm{LSE}(t x_1, \dots, t x_n) \le \max\{x_1, \dots, x_n\} + \frac{\log(n)}{t}.$$

(Proof: Replace each $x_i$ with $t x_i$ for some $t > 0$ in the inequalities above, to give

$$\max\{t x_1, \dots, t x_n\} \le \mathrm{LSE}(t x_1, \dots, t x_n) \le \max\{t x_1, \dots, t x_n\} + \log(n)$$

and, since $t > 0$,

$$t \max\{x_1, \dots, x_n\} \le \mathrm{LSE}(t x_1, \dots, t x_n) \le t \max\{x_1, \dots, x_n\} + \log(n);$$

finally, dividing by $t$ gives the result.)
Also, if we multiply by a negative number instead, we of course find a comparison to the $\min$ function:

$$\min\{x_1, \dots, x_n\} - \frac{\log(n)}{t} \le \frac{1}{-t}\mathrm{LSE}(-t x_1, \dots, -t x_n) \le \min\{x_1, \dots, x_n\}.$$
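A short sketch (our own illustration; the helper `lse_scaled` is hypothetical, not a library routine) of how scaling by $t$ tightens the bounds, and how a negative multiplier yields an approximation to the minimum:

```python
import math

def lse(xs):
    return math.log(sum(math.exp(x) for x in xs))

def lse_scaled(xs, t):
    # (1/t) * LSE(t*x): squeezed toward max(x) as t grows.
    return lse([t * x for x in xs]) / t

xs = [1.0, 2.0, 3.0]
for t in (1.0, 10.0, 100.0):
    approx_max = lse_scaled(xs, t)                 # -> max(xs) = 3.0
    approx_min = -lse_scaled([-x for x in xs], t)  # -> min(xs) = 1.0
    print(t, approx_max, approx_min)
```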
The LogSumExp function is convex, and is strictly increasing everywhere in its domain[3] (but not strictly convex everywhere[4]).
Writing $\mathbf{x} = (x_1, \dots, x_n)$, the partial derivatives are:

$$\frac{\partial}{\partial x_i} \mathrm{LSE}(\mathbf{x}) = \frac{\exp(x_i)}{\sum_{j=1}^n \exp(x_j)},$$
which means the gradient of LogSumExp is the softmax function.
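To make this concrete, here is a small finite-difference check (our own sketch; `lse` and `softmax` are local helpers, not library calls):

```python
import math

def lse(xs):
    return math.log(sum(math.exp(x) for x in xs))

def softmax(xs):
    z = sum(math.exp(x) for x in xs)
    return [math.exp(x) / z for x in xs]

xs = [0.5, -1.0, 2.0]
h = 1e-6
grad = []
for i in range(len(xs)):
    bumped = list(xs)
    bumped[i] += h
    grad.append((lse(bumped) - lse(xs)) / h)  # forward difference in x_i

print(grad)         # numerical gradient of LSE at xs
print(softmax(xs))  # agrees with the gradient to ~1e-6
```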
The convex conjugate of LogSumExp is the negative entropy.
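Concretely (stated here in our own notation, with the convention $0 \log 0 = 0$), the conjugate is the negative entropy restricted to the probability simplex:

$$\mathrm{LSE}^*(\mathbf{y}) = \begin{cases} \sum_{i=1}^n y_i \log y_i & \text{if } y_i \ge 0 \text{ for all } i \text{ and } \sum_{i=1}^n y_i = 1, \\ +\infty & \text{otherwise.} \end{cases}$$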
The LSE function is often encountered when the usual arithmetic computations are performed on a logarithmic scale, as in log probability.[5]
Just as multiplication in the linear scale becomes simple addition in the log scale, an addition in the linear scale becomes the LSE in the log scale:

$$\log(x + y) = \log\left(\exp(\log x) + \exp(\log y)\right) = \mathrm{LSE}(\log x, \log y).$$
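For example (a minimal sketch of the identity only; the numerically stable evaluation is discussed below):

```python
import math

def lse(xs):
    return math.log(sum(math.exp(x) for x in xs))

x, y = 0.001, 0.002
log_x, log_y = math.log(x), math.log(y)

# Adding two quantities whose logs are stored, staying in the log domain:
print(lse([log_x, log_y]))  # log(x + y)
print(math.log(x + y))      # same value: log(0.003)
```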
A common purpose of using log-domain computations is to increase accuracy and avoid underflow and overflow problems when very small or very large numbers are represented directly (i.e. in a linear domain) using limited-precision floating point numbers.[6]
Unfortunately, evaluating LSE directly can itself cause overflow or underflow. The following mathematically equivalent formula must therefore be used instead (especially when the accuracy of the above 'max' approximation is not sufficient):

$$\mathrm{LSE}(x_1, \dots, x_n) = x^* + \log\left(\exp(x_1 - x^*) + \cdots + \exp(x_n - x^*)\right),$$

where $x^* = \max\{x_1, \dots, x_n\}$.
Many math libraries such as IT++ provide a default routine for computing LSE and use this formula internally.
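A minimal Python sketch of the max-shifted formula (our own illustration, not the IT++ routine):

```python
import math

def logsumexp(xs):
    # Shift by the maximum so every exponent is <= 0: exp cannot overflow,
    # and the largest term contributes exactly 1 to the sum.
    x_star = max(xs)
    return x_star + math.log(sum(math.exp(x - x_star) for x in xs))

big = [1000.0, 1000.5, 999.0]
# The naive form fails here: math.exp(1000.0) raises OverflowError.
print(logsumexp(big))  # ~1001.1, computed without overflow
```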
LSE is convex but not strictly convex. We can define a strictly convex log-sum-exp type function[7] by adding an extra argument set to zero:

$$\mathrm{LSE}_0^+(x_1, \dots, x_n) = \mathrm{LSE}(0, x_1, \dots, x_n).$$
This function is a proper Bregman generator (strictly convex and differentiable). It is encountered in machine learning, for example, as the cumulant of the multinomial/binomial family.
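A brief sketch (our own helper names) showing that the zero-augmented version with a single argument reduces to the softplus function:

```python
import math

def logsumexp(xs):
    x_star = max(xs)
    return x_star + math.log(sum(math.exp(x - x_star) for x in xs))

def lse0(xs):
    # LSE with an extra argument fixed at zero: strictly convex.
    return logsumexp([0.0] + list(xs))

print(lse0([2.0]))                # log(1 + e^2) ~= 2.1269
print(math.log(1 + math.exp(2)))  # softplus(2): the same value
```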
In tropical analysis, this is the sum in the log semiring.
- Zhang, Aston; Lipton, Zack; Li, Mu; Smola, Alex. "Dive into Deep Learning, Chapter 3 Exercises". www.d2l.ai. Retrieved 27 June 2020.
- Nielsen, Frank; Sun, Ke (2016). "Guaranteed bounds on the Kullback-Leibler divergence of univariate mixtures using piecewise log-sum-exp inequalities". Entropy. 18 (12): 442. arXiv:1606.05850. Bibcode:2016Entrp..18..442N. doi:10.3390/e18120442. S2CID 17259055.
- El Ghaoui, Laurent (2017). Optimization Models and Applications.
- McElreath, Richard. Statistical Rethinking. OCLC 1107423386.
- "Practical issues: Numeric stability". CS231n Convolutional Neural Networks for Visual Recognition.
- Nielsen, Frank; Hadjeres, Gaetan (2018). "Monte Carlo Information Geometry: The dually flat case". arXiv:1803.07225. Bibcode:2018arXiv180307225N.