
isanomaly

Find anomalies in data using robust random cut forest

Since R2023a

    Description


    tf = isanomaly(forest,Tbl) finds anomalies in the table Tbl using the RobustRandomCutForest model object forest and returns the logical array tf, whose elements are true when an anomaly is detected in the corresponding row of Tbl. You must use this syntax if you create forest by passing a table to the rrcforest function.

    tf = isanomaly(forest,X) finds anomalies in the matrix X. You must use this syntax if you create forest by passing a matrix to the rrcforest function.


    tf = isanomaly(___,Name=Value) specifies options using one or more name-value arguments in addition to any of the input argument combinations in the previous syntaxes. For example, set ScoreThreshold=0.5 to identify observations with scores above 0.5 as anomalies.

    [tf,scores] = isanomaly(___) also returns an anomaly score in the range [0,Inf) for each observation in Tbl or X. A small positive value indicates a normal observation, and a large positive value indicates an anomaly.

    Examples


    Create a RobustRandomCutForest model object for uncontaminated training observations by using the rrcforest function. Then detect novelties (anomalies in new data) by passing the object and the new data to the object function isanomaly.

    Load the 1994 census data stored in census1994.mat. The data set contains demographic data from the US Census Bureau to predict whether an individual makes over $50,000 per year.

    load census1994

    census1994 contains the training data set adultdata and the test data set adulttest.

    Assume that adultdata does not contain outliers. Train a robust random cut forest model for adultdata. Specify StandardizeData as true to standardize the input data.

    rng("default") % For reproducibility
    [Mdl,tf,s] = rrcforest(adultdata,StandardizeData=true);

    Mdl is a RobustRandomCutForest model object. rrcforest also returns the anomaly indicators tf and anomaly scores s for the training data adultdata. If you do not specify the ContaminationFraction name-value argument as a value greater than 0, then rrcforest treats all training observations as normal observations, meaning all the values in tf are logical 0 (false). The function sets the score threshold to the maximum score value. Display the threshold value.

    Mdl.ScoreThreshold
    ans = 86.5315
    

    Find anomalies in adulttest by using the trained robust random cut forest model. Because you specified StandardizeData=true when you trained the model, the isanomaly function standardizes the input data by using the predictor means and standard deviations of the training data stored in the Mu and Sigma properties, respectively.

    [tf_test,s_test] = isanomaly(Mdl,adulttest);

    The isanomaly function returns the anomaly indicators tf_test and scores s_test for adulttest. By default, isanomaly identifies observations with scores above the threshold (Mdl.ScoreThreshold) as anomalies.

    Create histograms for the anomaly scores s and s_test. Create a vertical line at the threshold of the anomaly scores.

    histogram(s,Normalization="probability")
    hold on
    histogram(s_test,Normalization="probability")
    xline(Mdl.ScoreThreshold,"r-",join(["Threshold" Mdl.ScoreThreshold]))
    legend("Training Data","Test Data",Location="northwest")
    hold off

    Display the observation index of the anomalies in the test data.

    find(tf_test)
    ans = 3541
    

    The anomaly score distribution of the test data is similar to that of the training data, so isanomaly detects a small number of anomalies in the test data with the default threshold value.

    Zoom in to see the anomaly and the observations near the threshold.

    xlim([50 92])
    ylim([0 0.001])

    You can specify a different threshold value by using the ScoreThreshold name-value argument. For an example, see Specify Anomaly Score Threshold.

    Specify the threshold value for anomaly scores by using the ScoreThreshold name-value argument of isanomaly.

    Load the 1994 census data stored in census1994.mat. The data set contains demographic data from the US Census Bureau to predict whether an individual makes over $50,000 per year.

    load census1994

    census1994 contains the training data set adultdata and the test data set adulttest.

    Train a robust random cut forest model for adultdata. Specify StandardizeData as true to standardize the input data.

    rng("default") % For reproducibility
    [Mdl,tf,scores] = rrcforest(adultdata,StandardizeData=true);

    Plot a histogram of the score values. Create a vertical line at the default score threshold.

    histogram(scores,Normalization="probability");
    xline(Mdl.ScoreThreshold,"r-",join(["Threshold" Mdl.ScoreThreshold]))

    Find the anomalies in the test data using the trained robust random cut forest model. Use a different threshold from the default threshold value obtained when training the model.

    First, determine the score threshold by using the isoutlier function.

    [~,~,U] = isoutlier(scores)
    U = 14.0904
    

    Specify the value of the ScoreThreshold name-value argument as U.

    [tf_test,scores_test] = isanomaly(Mdl,adulttest,ScoreThreshold=U);
    histogram(scores_test,Normalization="probability")
    xline(U,"r-",join(["Threshold" U]))

    Input Arguments


    Trained robust random cut forest model, specified as a RobustRandomCutForest model object.

    Predictor data, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

    If you train forest using a table, then you must provide predictor data by using Tbl, not X. All predictor variables in Tbl must have the same variable names and data types as those in the training data. However, the column order in Tbl does not need to correspond to the column order of the training data.

    Data Types: table

    Predictor data, specified as a numeric matrix. Each row of X corresponds to one observation, and each column corresponds to one predictor variable.

    If you train forest using a matrix, then you must provide predictor data by using X, not Tbl. The variables that make up the columns of X must have the same order as the training data.

    Data Types: single | double

    Name-Value Arguments

    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Example: ScoreThreshold=0.75,UseParallel=true sets the threshold for the anomaly score to 0.75 and runs computations in parallel.

    Threshold for the anomaly score, specified as a numeric scalar in the range [0,Inf). isanomaly identifies observations with scores above the threshold as anomalies.

    The default value is the ScoreThreshold property value of forest.

    Example: ScoreThreshold=50

    Data Types: single | double

    Flag to run in parallel, specified as a numeric or logical 1 (true) or 0 (false). If you specify UseParallel=true, the isanomaly function executes for-loop iterations by using parfor. The loop runs in parallel when you have Parallel Computing Toolbox™.

    Example: UseParallel=true

    Data Types: logical

    Output Arguments


    Anomaly indicators, returned as a logical column vector. An element of tf is true when the observation in the corresponding row of Tbl or X is an anomaly, and false otherwise. tf has the same length as Tbl or X.

    isanomaly identifies observations with scores above the threshold (the ScoreThreshold value) as anomalies.

    Anomaly scores, returned as a numeric column vector with values in the range [0,Inf). scores has the same length as Tbl or X, and each element of scores contains an anomaly score for the observation in the corresponding row of Tbl or X. A small positive value indicates a normal observation, and a large positive value indicates an anomaly.
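    Taken together, tf and scores are linked by a simple decision rule: an observation is flagged exactly when its score exceeds the threshold. A minimal sketch of that rule in Python (the helper name is hypothetical and for illustration only; the actual computation happens inside isanomaly):

```python
def flag_anomalies(scores, score_threshold):
    """Reproduce the documented decision rule: an observation is an
    anomaly exactly when its score exceeds the threshold.
    Hypothetical helper for illustration, not a MathWorks API."""
    return [s > score_threshold for s in scores]

# With the threshold from the first example (86.5315), only the
# observation scoring above it is flagged.
tf = flag_anomalies([3.2, 14.1, 90.0], 86.5315)
```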

    More About


    Robust Random Cut Forest

    The robust random cut forest algorithm [1] classifies a point as a normal point or an anomaly based on the change in model complexity introduced by the point. Similar to the Isolation Forest algorithm, the robust random cut forest algorithm builds an ensemble of trees. The two algorithms differ in how they choose a split variable in the trees and how they define anomaly scores.

    The rrcforest function creates a robust random cut forest model (ensemble of robust random cut trees) for training observations and detects outliers (anomalies in the training data). Each tree is trained for a subset of training observations as follows:

    1. rrcforest draws samples without replacement from the training observations for each tree.

    2. rrcforest grows a tree by choosing a split variable with probability proportional to the ranges of the variables, and choosing the split position uniformly at random within the chosen variable's range. For each tree, the function continues splitting until every sample is isolated in its own leaf node.

    Using the range information to choose a split variable makes the algorithm robust to irrelevant variables, because a variable with little or no spread is rarely chosen for a split.
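    The split rule in step 2 can be sketched in a few lines. The sketch below is language-agnostic Python, not the rrcforest implementation, and all names are illustrative:

```python
import random

def choose_cut(points):
    """Pick a split dimension with probability proportional to each
    dimension's range, then a cut position uniform within that range.
    Illustrative sketch of the robust random cut rule; assumes a
    nonempty list of equal-length numeric tuples."""
    dims = len(points[0])
    lo = [min(p[d] for p in points) for d in range(dims)]
    hi = [max(p[d] for p in points) for d in range(dims)]
    ranges = [hi[d] - lo[d] for d in range(dims)]
    # Sample the dimension: wider ranges are proportionally more likely,
    # so a constant (irrelevant) variable is essentially never chosen.
    r = random.uniform(0, sum(ranges))
    acc = 0.0
    for d in range(dims):
        acc += ranges[d]
        if r <= acc:
            break
    # Split position is uniform over the chosen dimension's range.
    cut = random.uniform(lo[d], hi[d])
    left = [p for p in points if p[d] <= cut]
    right = [p for p in points if p[d] > cut]
    return d, cut, left, right
```

    Because the first coordinate below is constant (zero range), the cut always falls on the second coordinate, which is what makes the rule robust to irrelevant variables.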

    Anomalies are easy to describe, but make describing the remainder of the data more difficult. Therefore, adding an anomaly to a model increases the model complexity of a forest model [1]. The rrcforest function identifies outliers using anomaly scores that are defined based on the change in model complexity.

    The isanomaly function uses a trained robust random cut forest model to detect anomalies in the data. For novelty detection (detecting anomalies in new data with uncontaminated training data), you can train a robust random cut forest model with uncontaminated training data (data with no outliers) and use it to detect anomalies in new data. For each observation of the new data, the function finds the corresponding leaf node in each tree, computes the change in model complexity introduced by the leaf nodes, and returns an anomaly indicator and score.

    Anomaly Scores

    The robust random cut forest algorithm uses collusive displacement as an anomaly score. The collusive displacement of a point x indicates the contribution of x to the model complexity of a forest model. A small positive anomaly score value indicates a normal observation, and a large positive value indicates an anomaly.

    As defined in [1], the model complexity |M(T)| of a tree T is the sum of path lengths (the distance from the root node to the leaf nodes) over all points in the training data Z.

    $$|M(T)| = \sum_{y \in Z} f(y, Z, T),$$

    where f(y,Z,T) is the depth of y in tree T. The displacement of x is defined to indicate the expected changes in the model complexity introduced by x.

    $$\mathrm{Disp}(x, Z) = \sum_{T',\, y \in Z - \{x\}} P(T') \bigl( f(y, Z, T) - f(y, Z - \{x\}, T') \bigr),$$

    where T' is a tree over Z – {x}. Disp(x,Z) is the expected number of points in the sibling node of the leaf node containing x. This definition is not robust to duplicates or near-duplicates, and can cause outlier masking. To avoid outlier masking, the robust random cut forest algorithm uses the collusive displacement CoDisp, where a set C includes x and the colluders of x.

    $$\mathrm{CoDisp}(x, Z) = \mathbb{E}_T \left[ \max_{x \in C \subseteq Z} \frac{1}{|C|} \sum_{y \in Z - C} \bigl( f(y, Z, T) - f(y, Z - C, T'') \bigr) \right],$$

    where T″ is a tree over Z – C, and |C| is the number of points in the subtree of T corresponding to C.

    The default value for the CollusiveDisplacement name-value argument of rrcforest is "maximal". For each tree, by default, the software finds the set C that maximizes the ratio Disp(x,C)/|C| by traversing from the leaf node of x to the root node, as described in [2]. If you specify CollusiveDisplacement="average", the software computes the average of the ratios for each tree, and uses the averaged values to compute the collusive displacement value.
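    The "maximal" rule can be illustrated on a toy tree. In the sketch below (Python for illustration; the class and function names are hypothetical, not the rrcforest internals), the candidate colluder set C at each step up from the leaf is the subtree containing x, Disp(x, C) is approximated by the size of its sibling subtree, and CoDisp takes the best ratio along the path:

```python
class Node:
    """Binary tree node; pass no children for a leaf, both for an
    internal node. Hypothetical structure for illustration."""
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right
        # Number of points in this subtree (1 for a leaf).
        self.size = 1 if left is None else left.size + right.size

def codisp_one_tree(path):
    """'Maximal' collusive displacement of a point x in one tree.

    path: nodes from the leaf containing x up to the root. At each
    step, the subtree below is the colluder set C, its sibling's size
    approximates Disp(x, C), and the best ratio Disp(x, C)/|C| wins.
    Illustrative sketch, not the MathWorks implementation."""
    best = 0.0
    for child, parent in zip(path, path[1:]):
        sibling = parent.right if parent.left is child else parent.left
        best = max(best, sibling.size / child.size)
    return best
```

    A point that splits off near the root displaces a large sibling subtree while |C| is still 1, so its ratio is large; a point inside a tight cluster only ever displaces small siblings relative to |C|, so its ratio stays near 1.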

    Algorithms

    isanomaly considers NaN, '' (empty character vector), "" (empty string), <missing>, and <undefined> values in Tbl and NaN values in X to be missing values.

    isanomaly uses observations with missing values to find splits on variables for which these observations have valid values. The function might place these observations in a branch node, not a leaf node. Then isanomaly computes the ratio (Disp(x,C)/|C|) by traversing from the branch node to the root node for each tree. The function places an observation with all missing values in the root node. Therefore, the ratio and the anomaly score become the number of training observations for each tree, which is the maximum possible anomaly score for the trained robust random cut forest model. You can specify the number of training observations for each tree by using the NumObservationsPerLearner name-value argument.

    References

    [1] Guha, Sudipto, N. Mishra, G. Roy, and O. Schrijvers. "Robust Random Cut Forest Based Anomaly Detection on Streams." Proceedings of the 33rd International Conference on Machine Learning 48 (June 2016): 2712–21.

    [2] Bartos, Matthew D., A. Mullapudi, and S. C. Troutman. "rrcf: Implementation of the Robust Random Cut Forest Algorithm for Anomaly Detection on Streams." Journal of Open Source Software 4, no. 35 (2019): 1336.


    Version History

    Introduced in R2023a