pdist
Pairwise distance between pairs of observations
Syntax
D = pdist(X)
D = pdist(X,Distance)
D = pdist(X,Distance,DistParameter)
D = pdist(X,Distance,CacheSize=cache)
D = pdist(X,Distance,DistParameter,CacheSize=cache)
Description
D = pdist(X) returns the Euclidean distance between pairs of observations in X.
D = pdist(X,Distance) returns the distance by using the method specified by Distance.
D = pdist(X,Distance,DistParameter) returns the distance by using the method specified by Distance and DistParameter. You can specify DistParameter only when Distance is 'seuclidean', 'minkowski', or 'mahalanobis'.
D = pdist(X,Distance,CacheSize=cache) or D = pdist(X,Distance,DistParameter,CacheSize=cache) uses a cache of size cache megabytes to accelerate the computation of Euclidean distances. This argument applies only when Distance is 'fasteuclidean', 'fastsquaredeuclidean', or 'fastseuclidean'.
Examples
Compute Euclidean Distance and Convert Distance Vector to Matrix
Compute the Euclidean distance between pairs of observations, and convert the distance vector to a matrix using squareform.
Create a matrix with three observations and two variables.
rng('default') % For reproducibility
X = rand(3,2);
Compute the Euclidean distance.
D = pdist(X)
D = 1×3
0.2954 1.0670 0.9448
The pairwise distances are arranged in the order (2,1), (3,1), (3,2). You can easily locate the distance between observations i and j by using squareform.
Z = squareform(D)
Z = 3×3
0 0.2954 1.0670
0.2954 0 0.9448
1.0670 0.9448 0
squareform returns a symmetric matrix where Z(i,j) corresponds to the pairwise distance between observations i and j. For example, you can find the distance between observations 2 and 3.
Z(2,3)
ans = 0.9448
Pass Z to the squareform function to reproduce the output of the pdist function.
y = squareform(Z)
y = 1×3
0.2954 1.0670 0.9448
The outputs y from squareform and D from pdist are the same.
Compute Minkowski Distance
Create a matrix with three observations and two variables.
rng('default') % For reproducibility
X = rand(3,2);
Compute the Minkowski distance with the default exponent 2.
D1 = pdist(X,'minkowski')
D1 = 1×3
0.2954 1.0670 0.9448
Compute the Minkowski distance with an exponent of 1, which is equal to the city block distance.
D2 = pdist(X,'minkowski',1)
D2 = 1×3
0.3721 1.5036 1.3136
D3 = pdist(X,'cityblock')
D3 = 1×3
0.3721 1.5036 1.3136
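You can confirm programmatically that the two results are identical, using the variables D2 and D3 computed above:

```matlab
isequal(D2,D3) % Minkowski distance with exponent 1 matches city block distance
```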
Compute Pairwise Distance with Missing Elements Using a Custom Distance Function
Define a custom distance function that ignores coordinates with NaN values, and compute pairwise distance by using the custom distance function.
Create a matrix with three observations and two variables.
rng('default') % For reproducibility
X = rand(3,2);
Assume that the first element of the first observation is missing.
X(1,1) = NaN;
Compute the Euclidean distance.
D1 = pdist(X)
D1 = 1×3
NaN NaN 0.9448
If observation i or j contains NaN values, the function pdist returns NaN for the pairwise distance between i and j. Therefore, D1(1) and D1(2), the pairwise distances (2,1) and (3,1), are NaN values.
Define a custom distance function naneucdist that ignores coordinates with NaN values and returns the Euclidean distance.
function D2 = naneucdist(XI,XJ)
%NANEUCDIST Euclidean distance ignoring coordinates with NaNs
n = size(XI,2);
sqdx = (XI-XJ).^2;
nstar = sum(~isnan(sqdx),2); % Number of pairs that do not contain NaNs
nstar(nstar == 0) = NaN; % To return NaN if all pairs include NaNs
D2squared = sum(sqdx,2,'omitnan').*n./nstar; % Correction for missing coordinates
D2 = sqrt(D2squared);
end
Compute the distance with naneucdist by passing the function handle as an input argument of pdist.
D2 = pdist(X,@naneucdist)
D2 = 1×3
0.3974 1.1538 0.9448
Accelerate Euclidean Distance Computation Using fasteuclidean Distance
Create a large matrix of points, and then measure the time used by pdist with the default "euclidean" distance metric.
rng default % For reproducibility
N = 10000;
X = randn(N,1000);
D = pdist(X); % Warm up function for more reliable timing information
tic
D = pdist(X);
standard = toc
standard = 9.6896
Next, measure the time used by pdist with the "fasteuclidean" distance metric. Specify a cache size of 10.
D = pdist(X,"fasteuclidean",CacheSize=10); % Warm up function
tic
D2 = pdist(X,"fasteuclidean",CacheSize=10);
accelerated = toc
accelerated = 1.1904
Evaluate how many times faster the accelerated computation is compared to the standard.
standard/accelerated
ans = 8.1395
The accelerated version computes about eight times faster for this example.
Input Arguments
X
— Input data
numeric matrix
Input data, specified as a numeric matrix of size m-by-n. Rows correspond to individual observations, and columns correspond to individual variables.
Data Types: single | double
Distance
— Distance metric
character vector | string scalar | function handle
Distance metric, specified as a character vector, string scalar, or function handle, as described in the following table.
Value — Description
'euclidean' — Euclidean distance (default)
'squaredeuclidean' — Squared Euclidean distance. (This option is provided for efficiency only. It does not satisfy the triangle inequality.)
'seuclidean' — Standardized Euclidean distance. Each coordinate difference between observations is scaled by dividing by the corresponding element of the standard deviation, S = std(X,'omitnan'). Use DistParameter to specify a different value for S.
'fasteuclidean' — Euclidean distance computed by using an alternative algorithm that saves time when the number of predictors is at least 10. In some cases, this faster algorithm can reduce accuracy. Algorithms starting with 'fast' do not support sparse data. For details, see Algorithms.
'fastsquaredeuclidean' — Squared Euclidean distance computed by using an alternative algorithm that saves time when the number of predictors is at least 10. In some cases, this faster algorithm can reduce accuracy. Algorithms starting with 'fast' do not support sparse data. For details, see Algorithms.
'fastseuclidean' — Standardized Euclidean distance computed by using an alternative algorithm that saves time when the number of predictors is at least 10. In some cases, this faster algorithm can reduce accuracy. Algorithms starting with 'fast' do not support sparse data. For details, see Algorithms.
'mahalanobis' — Mahalanobis distance, computed using the sample covariance of X, C = cov(X,'omitrows'). Use DistParameter to specify a different value for C.
'cityblock' — City block distance
'minkowski' — Minkowski distance. The default exponent is 2. Use DistParameter to specify a different exponent, which must be a positive scalar.
'chebychev' — Chebychev distance (maximum coordinate difference)
'cosine' — One minus the cosine of the included angle between points (treated as vectors)
'correlation' — One minus the sample correlation between points (treated as sequences of values)
'hamming' — Hamming distance, which is the percentage of coordinates that differ
'jaccard' — One minus the Jaccard coefficient, which is the percentage of nonzero coordinates that differ
'spearman' — One minus the sample Spearman's rank correlation between observations (treated as sequences of values)
@distfun — Custom distance function handle. A distance function has the form
function D2 = distfun(ZI,ZJ)
% calculation of distance
...
If your data is not sparse, you can generally compute distances more quickly by using a built-in distance metric instead of a function handle.
For definitions, see Distance Metrics.
When you use 'seuclidean', 'minkowski', or 'mahalanobis', you can specify an additional input argument DistParameter to control these metrics. You can also use these metrics in the same way as the other metrics with the default value of DistParameter.
Example: 'minkowski'
Data Types: char | string | function_handle
DistParameter
— Distance metric parameter values
positive scalar  numeric vector  numeric matrix
Distance metric parameter values, specified as a positive scalar, numeric vector, or numeric matrix. This argument is valid only when you specify Distance as 'seuclidean', 'minkowski', or 'mahalanobis'.
If Distance is 'seuclidean', DistParameter is a vector of scaling factors for each dimension, specified as a positive vector. The default value is std(X,'omitnan').
If Distance is 'minkowski', DistParameter is the exponent of Minkowski distance, specified as a positive scalar. The default value is 2.
If Distance is 'mahalanobis', DistParameter is a covariance matrix, specified as a numeric matrix. The default value is cov(X,'omitrows'). DistParameter must be symmetric and positive definite.
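As a sketch of how DistParameter pairs with each of these metrics (the scaling vector and covariance matrix below are illustrative values, not defaults):

```matlab
rng('default')                  % For reproducibility
X = rand(3,2);

% Standardized Euclidean distance with a custom per-dimension scaling vector
S = [0.5 2];                    % illustrative positive scaling factors
Dseu = pdist(X,'seuclidean',S);

% Minkowski distance with a custom exponent
Dmin = pdist(X,'minkowski',3);

% Mahalanobis distance with a custom covariance matrix
C = [1 0.2; 0.2 1];             % must be symmetric and positive definite
Dmah = pdist(X,'mahalanobis',C);
```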
Example: 'minkowski',3
Data Types: single | double
cache
— Size of Gram matrix in megabytes
1e3 (default) | positive scalar | "maximal"
Size of the Gram matrix in megabytes, specified as a positive scalar or "maximal". The pdist function can use CacheSize=cache only when the Distance argument is 'fasteuclidean', 'fastsquaredeuclidean', or 'fastseuclidean'.
If cache is "maximal", pdist tries to allocate enough memory for an entire intermediate matrix whose size is M-by-M, where M is the number of rows of the input data X. The cache size does not have to be large enough for an entire intermediate matrix, but must be at least large enough to hold an M-by-1 vector. Otherwise, pdist uses the standard algorithm for computing Euclidean distances.
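For example, a minimal sketch of requesting the maximal cache (the data size here is illustrative):

```matlab
rng('default')                  % For reproducibility
X = randn(500,20);              % illustrative data
D = pdist(X,'fasteuclidean',CacheSize="maximal");
```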
If the distance argument is 'fasteuclidean', 'fastsquaredeuclidean', or 'fastseuclidean' and the cache value is too large or "maximal", pdist might try to allocate a Gram matrix that exceeds the available memory. In this case, MATLAB® issues an error.
Example: "maximal"
Data Types: double | char | string
Output Arguments
D
— Pairwise distances
numeric row vector
Pairwise distances, returned as a numeric row vector of length m(m–1)/2, corresponding to pairs of observations, where m is the number of observations in X.
The distances are arranged in the order (2,1), (3,1), ..., (m,1), (3,2), ..., (m,2), ..., (m,m–1), that is, the lower-left triangle of the m-by-m distance matrix in column order. The pairwise distance between observations i and j is in D((i-1)*(m-i/2)+j-i) for i < j.
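The linear indexing formula above can be checked directly (a minimal sketch):

```matlab
rng('default')                  % For reproducibility
m = 5;
X = rand(m,3);
D = pdist(X);
i = 2; j = 4;                   % any pair of observations with i < j
idx = (i-1)*(m-i/2) + j - i;    % linear index into the distance vector
Z = squareform(D);
isequal(D(idx), Z(i,j))         % D(idx) is the distance between observations i and j
```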
You can convert D into a symmetric matrix by using the squareform function. Z = squareform(D) returns an m-by-m matrix where Z(i,j) corresponds to the pairwise distance between observations i and j.
If observation i or j contains NaNs, then the corresponding value in D is NaN for the built-in distance functions.
D is commonly used as a dissimilarity matrix in clustering or multidimensional scaling. For details, see Hierarchical Clustering and the function reference pages for cmdscale, cophenet, linkage, mdscale, and optimalleaforder. These functions take D as an input argument.
More About
Distance Metrics
A distance metric is a function that defines a distance between two observations. pdist supports various distance metrics: Euclidean distance, standardized Euclidean distance, Mahalanobis distance, city block distance, Minkowski distance, Chebychev distance, cosine distance, correlation distance, Hamming distance, Jaccard distance, and Spearman distance.
Given an m-by-n data matrix X, which is treated as m (1-by-n) row vectors x_{1}, x_{2}, ..., x_{m}, the various distances between the vector x_{s} and x_{t} are defined as follows:
Euclidean distance
$${d}_{st}^{2}=({x}_{s}-{x}_{t})({x}_{s}-{x}_{t}{)}^{\prime}.$$
The Euclidean distance is a special case of the Minkowski distance, where p = 2.
Standardized Euclidean distance
$${d}_{st}^{2}=({x}_{s}-{x}_{t}){V}^{-1}({x}_{s}-{x}_{t}{)}^{\prime},$$
where V is the n-by-n diagonal matrix whose jth diagonal element is (S(j))^{2}, where S is a vector of scaling factors for each dimension.
Mahalanobis distance
$${d}_{st}^{2}=({x}_{s}-{x}_{t}){C}^{-1}({x}_{s}-{x}_{t}{)}^{\prime},$$
where C is the covariance matrix.
City block distance
$${d}_{st}={\displaystyle \sum _{j=1}^{n}\left|{x}_{sj}-{x}_{tj}\right|}.$$
The city block distance is a special case of the Minkowski distance, where p = 1.
Minkowski distance
$${d}_{st}=\sqrt[p]{{\displaystyle \sum _{j=1}^{n}{\left|{x}_{sj}-{x}_{tj}\right|}^{p}}}.$$
For the special case of p = 1, the Minkowski distance gives the city block distance. For the special case of p = 2, the Minkowski distance gives the Euclidean distance. For the special case of p = ∞, the Minkowski distance gives the Chebychev distance.
Chebychev distance
$${d}_{st}={\mathrm{max}}_{j}\left\{\left|{x}_{sj}-{x}_{tj}\right|\right\}.$$
The Chebychev distance is a special case of the Minkowski distance, where p = ∞.
Cosine distance
$${d}_{st}=1-\frac{{x}_{s}{{x}^{\prime}}_{t}}{\sqrt{\left({x}_{s}{{x}^{\prime}}_{s}\right)\left({x}_{t}{{x}^{\prime}}_{t}\right)}}.$$
Correlation distance
$${d}_{st}=1-\frac{\left({x}_{s}-{\overline{x}}_{s}\right){\left({x}_{t}-{\overline{x}}_{t}\right)}^{\prime}}{\sqrt{\left({x}_{s}-{\overline{x}}_{s}\right){\left({x}_{s}-{\overline{x}}_{s}\right)}^{\prime}}\sqrt{\left({x}_{t}-{\overline{x}}_{t}\right){\left({x}_{t}-{\overline{x}}_{t}\right)}^{\prime}}},$$
where
$${\overline{x}}_{s}=\frac{1}{n}{\displaystyle \sum _{j}{x}_{sj}}$$ and $${\overline{x}}_{t}=\frac{1}{n}{\displaystyle \sum _{j}{x}_{tj}}$$.
Hamming distance
$${d}_{st}=(\#({x}_{sj}\ne {x}_{tj})/n).$$
Jaccard distance
$${d}_{st}=\frac{\#\left[\left({x}_{sj}\ne {x}_{tj}\right)\cap \left(\left({x}_{sj}\ne 0\right)\cup \left({x}_{tj}\ne 0\right)\right)\right]}{\#\left[\left({x}_{sj}\ne 0\right)\cup \left({x}_{tj}\ne 0\right)\right]}.$$
Spearman distance
$${d}_{st}=1-\frac{\left({r}_{s}-{\overline{r}}_{s}\right){\left({r}_{t}-{\overline{r}}_{t}\right)}^{\prime}}{\sqrt{\left({r}_{s}-{\overline{r}}_{s}\right){\left({r}_{s}-{\overline{r}}_{s}\right)}^{\prime}}\sqrt{\left({r}_{t}-{\overline{r}}_{t}\right){\left({r}_{t}-{\overline{r}}_{t}\right)}^{\prime}}},$$
where
r_{sj} is the rank of x_{sj} taken over x_{1j}, x_{2j}, ..., x_{mj}, as computed by tiedrank.
r_{s} and r_{t} are the coordinate-wise rank vectors of x_{s} and x_{t}, that is, r_{s} = (r_{s1}, r_{s2}, ..., r_{sn}).
$${\overline{r}}_{s}=\frac{1}{n}{\displaystyle \sum _{j}{r}_{sj}}=\frac{\left(n+1\right)}{2}$$.
$${\overline{r}}_{t}=\frac{1}{n}{\displaystyle \sum _{j}{r}_{tj}}=\frac{\left(n+1\right)}{2}$$.
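The definitions above can be checked numerically against pdist. For example, a minimal sketch for the city block distance, evaluating the sum of absolute coordinate differences by hand for one pair of observations:

```matlab
rng('default')                  % For reproducibility
X = rand(3,2);
D = pdist(X,'cityblock');
% Manual evaluation of the city block formula for observations 1 and 2
d21 = sum(abs(X(2,:) - X(1,:)));
abs(d21 - D(1)) < 1e-12         % d21 matches D(1), the (2,1) pairwise distance
```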
Algorithms
Fast Euclidean Distance Algorithm
The values of the Distance argument that begin fast (such as 'fasteuclidean' and 'fastseuclidean') calculate Euclidean distances using an algorithm that uses extra memory to save computational time. This algorithm is named "Euclidean Distance Matrix Trick" in Albanie [1] and elsewhere. Internal testing shows that this algorithm saves time when the number of predictors is at least 10.
To find the matrix D of distances between all the points x_{i} and x_{j}, where each x_{i} has n variables, the algorithm computes distance using the final line in the following equations:
$$\begin{array}{c}{D}_{i,j}^{2}={\Vert {x}_{i}-{x}_{j}\Vert}^{2}\\ ={\left({x}_{i}-{x}_{j}\right)}^{T}\left({x}_{i}-{x}_{j}\right)\\ ={\Vert {x}_{i}\Vert}^{2}-2{x}_{i}^{T}{x}_{j}+{\Vert {x}_{j}\Vert}^{2}.\end{array}$$
The matrix $${x}_{i}^{T}{x}_{j}$$ in the last line of the equations is called the Gram matrix. Computing the set of squared distances is faster, but slightly less numerically stable, when you compute and use the Gram matrix instead of computing the squared distances by squaring and summing. For a discussion, see Albanie [1].
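A compact sketch of this trick (not the internal implementation) computes the full Gram matrix and reads the squared distances off the final line of the equations above:

```matlab
rng('default')                  % For reproducibility
X = rand(6,10);                 % 6 points, 10 variables
G = X*X';                       % Gram matrix of inner products x_i' * x_j
sq = diag(G);                   % squared norms ||x_i||^2
D2 = sq + sq' - 2*G;            % ||x_i||^2 - 2 x_i' x_j + ||x_j||^2
D2(D2 < 0) = 0;                 % guard against small negative round-off
Dfast = sqrt(D2);               % full m-by-m distance matrix
% Compare against the standard computation
max(abs(Dfast - squareform(pdist(X))),[],'all')
```

The round-off guard reflects the "slightly less numerically stable" caveat: cancellation in the final line can produce tiny negative values where the true squared distance is near zero.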
To store the Gram matrix, the software uses a cache with the default size of 1e3 megabytes. You can set the cache size using the cache argument. If the value of cache is too large or "maximal", pdist might try to allocate a Gram matrix that exceeds the available memory. In this case, MATLAB issues an error.
References
[1] Albanie, Samuel. Euclidean Distance Matrix Trick. June, 2019. Available at https://www.robots.ox.ac.uk/%7Ealbanie/notes/Euclidean_distance_trick.pdf.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
The distance input argument value (Distance) must be a compile-time constant. For example, to use the Minkowski distance, include coder.Constant('Minkowski') in the -args value of codegen.
The distance input argument value (Distance) cannot be a custom distance function.
pdist does not support code generation for fast Euclidean distance computations, meaning those distance metrics whose names begin with fast (for example, 'fasteuclidean').
The generated code of pdist uses parfor (MATLAB Coder) to create loops that run in parallel on supported shared-memory multicore platforms in the generated code. If your compiler does not support the Open Multiprocessing (OpenMP) application interface or you disable the OpenMP library, MATLAB Coder™ treats the parfor loops as for loops. To find supported compilers, see Supported Compilers. To disable the OpenMP library, set the EnableOpenMP property of the configuration object to false. For details, see coder.CodeConfig (MATLAB Coder).
For more information on code generation, see Introduction to Code Generation and General Code Generation Workflow.
GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.
Usage notes and limitations:
The supported distance input argument values (Distance) for optimized CUDA code are 'euclidean', 'squaredeuclidean', 'seuclidean', 'cityblock', 'minkowski', 'chebychev', 'cosine', 'correlation', 'hamming', and 'jaccard'.
Distance cannot be a custom distance function.
Distance must be a compile-time constant.
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
Usage notes and limitations:
You cannot specify the Distance input argument as "fasteuclidean", "fastsquaredeuclidean", "fastseuclidean", or a custom distance function.
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced before R2006a
R2023a: Fast Euclidean distance using a cache
The 'fasteuclidean', 'fastseuclidean', and 'fastsquaredeuclidean' distance metrics accelerate the computation of Euclidean distances by using a cache and a different algorithm (see Algorithms). Set the size of the cache using the cache argument.
See Also
cluster | clusterdata | cmdscale | cophenet | dendrogram | inconsistent | linkage | pdist2 | silhouette | squareform