# Jacobian matrix and determinant

In vector calculus, the Jacobian matrix (/dʒəˈkoʊbiən/,[1][2][3] /dʒɪ-, jɪ-/) of a vector-valued function of several variables is the matrix of all its first-order partial derivatives. When this matrix is square, that is, when the function takes the same number of variables as input as the number of vector components of its output, its determinant is referred to as the Jacobian determinant. Both the matrix and (if applicable) the determinant are often referred to simply as the Jacobian in literature.[4]

Suppose ${\displaystyle \mathbf {f} :\mathbb {R} ^{n}\to \mathbb {R} ^{m}}$ is a function such that each of its first-order partial derivatives exists on ${\displaystyle \mathbb {R} ^{n}}$. This function takes a point ${\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}$ as input and produces the vector ${\displaystyle \mathbf {f} (\mathbf {x} )\in \mathbb {R} ^{m}}$ as output. Then the Jacobian matrix of f is defined to be an m×n matrix, denoted by J, whose (i,j)th entry is ${\textstyle \mathbf {J} _{ij}={\frac {\partial f_{i}}{\partial x_{j}}}}$, or explicitly

${\displaystyle \mathbf {J} ={\begin{bmatrix}{\dfrac {\partial \mathbf {f} }{\partial x_{1}}}&\cdots &{\dfrac {\partial \mathbf {f} }{\partial x_{n}}}\end{bmatrix}}={\begin{bmatrix}\nabla ^{\mathrm {T} }f_{1}\\\vdots \\\nabla ^{\mathrm {T} }f_{m}\end{bmatrix}}={\begin{bmatrix}{\dfrac {\partial f_{1}}{\partial x_{1}}}&\cdots &{\dfrac {\partial f_{1}}{\partial x_{n}}}\\\vdots &\ddots &\vdots \\{\dfrac {\partial f_{m}}{\partial x_{1}}}&\cdots &{\dfrac {\partial f_{m}}{\partial x_{n}}}\end{bmatrix}}}$

where ${\displaystyle \nabla ^{\mathrm {T} }f_{i}}$ is the transpose (row vector) of the gradient of the ${\displaystyle i}$-th component.
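As an illustration (not part of the article), the entries ∂f_i/∂x_j can be approximated numerically by central differences and checked against a hand-computed Jacobian. The function `numerical_jacobian` and the example map below are hypothetical choices for demonstration:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Approximate the m-by-n Jacobian of f at x by central differences.
    Entry (i, j) estimates the partial derivative of f_i with respect to x_j."""
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(f(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps
        J[:, j] = (np.asarray(f(x + step)) - np.asarray(f(x - step))) / (2 * eps)
    return J

# Example map f : R^2 -> R^2, f(x, y) = (x^2 * y, 5x + sin y)
f = lambda v: np.array([v[0]**2 * v[1], 5 * v[0] + np.sin(v[1])])

x = np.array([1.0, 2.0])
J = numerical_jacobian(f, x)

# Hand-computed Jacobian: [[2xy, x^2], [5, cos y]]
J_exact = np.array([[2 * x[0] * x[1], x[0]**2],
                    [5.0,             np.cos(x[1])]])
```

The rows of `J` are the (transposed) gradients of the component functions, matching the definition above.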

The Jacobian matrix, whose entries are functions of x, is denoted in various ways; common notations include[citation needed] Df, Jf, ${\displaystyle \nabla \mathbf {f} }$, and ${\displaystyle {\frac {\partial (f_{1},\ldots ,f_{m})}{\partial (x_{1},\ldots ,x_{n})}}}$. Some authors define the Jacobian as the transpose of the form given above.

The Jacobian matrix represents the differential of f at every point where f is differentiable. In detail, if h is a displacement vector represented by a column matrix, the matrix product J(x) ⋅ h is another displacement vector, which is the best linear approximation of the change of f in a neighborhood of x, provided f is differentiable at x.[lower-alpha 1] This means that the function that maps y to f(x) + J(x) ⋅ (yx) is the best linear approximation of f(y) for all points y close to x. This linear function is known as the derivative or the differential of f at x.

When m = n, the Jacobian matrix is square, so its determinant is a well-defined function of x, known as the Jacobian determinant of f. It carries important information about the local behavior of f. In particular, the function f has, locally in a neighborhood of a point x, an inverse function that is differentiable if and only if the Jacobian determinant is nonzero at x (see inverse function theorem; the Jacobian conjecture concerns a related question of global invertibility). The Jacobian determinant also appears when changing the variables in multiple integrals (see substitution rule for multiple variables).
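A familiar instance of the Jacobian determinant in a change of variables is the polar-to-Cartesian map, whose determinant gives the factor r in dx dy = r dr dθ. The short sketch below (illustrative, not from the article) verifies this numerically:

```python
import numpy as np

# Polar-to-Cartesian map T(r, theta) = (r cos(theta), r sin(theta))
def T_jacobian(r, theta):
    """Jacobian of T with respect to (r, theta)."""
    return np.array([[np.cos(theta), -r * np.sin(theta)],
                     [np.sin(theta),  r * np.cos(theta)]])

r, theta = 2.0, np.pi / 3
detJ = np.linalg.det(T_jacobian(r, theta))
# det = r*(cos^2 + sin^2) = r, the area-scaling factor in dx dy = r dr dtheta
```

Since detJ = r is nonzero for r > 0, the map is locally invertible away from the origin, consistent with the determinant criterion above.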

When m = 1, that is when f : RnR is a scalar-valued function, the Jacobian matrix reduces to the row vector ${\displaystyle \nabla ^{\mathrm {T} }f}$; this row vector of all first-order partial derivatives of f is the transpose of the gradient of f, i.e. ${\displaystyle \mathbf {J} _{f}=\nabla ^{\mathrm {T} }f}$. Specializing further, when m = n = 1, that is when f : RR is a scalar-valued function of a single variable, the Jacobian matrix has a single entry; this entry is the derivative of the function f.
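The scalar-valued special case can be made concrete with a small sketch (the field and evaluation point are arbitrary illustrative choices): for f : R^3 → R, the 1×n Jacobian is the gradient written as a row.

```python
import numpy as np

# Scalar field f : R^3 -> R, f(x, y, z) = x*y + z^2
def grad_f(v):
    # gradient (df/dx, df/dy, df/dz) = (y, x, 2z)
    return np.array([v[1], v[0], 2 * v[2]])

v = np.array([1.0, -2.0, 0.5])
J_row = grad_f(v).reshape(1, -1)   # the 1x3 Jacobian: gradient as a row vector
```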

These concepts are named after the mathematician Carl Gustav Jacob Jacobi (1804–1851).