
Configure Tuning Options in Fuzzy Logic Designer

Since R2023a

To select an algorithm for tuning your fuzzy inference system (FIS) or FIS tree and configure the algorithm options in the Fuzzy Logic Designer app, open the Tuning Options dialog box. On the Tuning tab, click Tuning Options.

In the Tuning Options dialog box, you can:

  • Select the type of optimization to perform.

  • Select a tuning algorithm. You can choose between several Global Optimization Toolbox methods or adaptive neuro-fuzzy inference system (ANFIS) tuning.

  • Configure k-fold cross validation to prevent overfitting to your training data.

For more information on FIS tuning, see Tuning Fuzzy Inference Systems.

Optimization Type and Method

Select one of the following types of tuning.

Tuning

Optimize the existing input, output, and rule parameters without learning new rules.

Learning

Learn new rules up to a maximum number of rules. To specify the maximum number of rules, use the Max number of rules option.

This type of optimization is not supported for ANFIS tuning.
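
If you configure tuning programmatically instead, this choice corresponds to the OptimizationType option of tunefisOptions in Fuzzy Logic Toolbox. The following is a minimal sketch, not code generated by the app; the variable names are illustrative.

    % Tune existing membership function and rule parameters only.
    tuneOptions = tunefisOptions("OptimizationType","tuning");

    % Learn new rules, capped at 20 (the Max number of rules option).
    learnOptions = tunefisOptions("OptimizationType","learning", ...
        "NumMaxRules",20);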

In the Method drop-down list, select one of the following tuning methods.

Genetic algorithm

Population-based global optimization method that searches randomly by mutation and crossover among population members.

Particle swarm optimization

Population-based global optimization method in which population members step through a search region.

Pattern search

Direct-search local optimization method that searches a set of points near the current point to find a new optimum.

Simulated annealing

Local optimization method that simulates a heating and cooling process to find a new optimal point near the current point.

Adaptive neuro-fuzzy inference

Back-propagation algorithm that tunes membership function parameters. This method is supported only for type-1 Sugeno systems with a single output.

The first four tuning methods require Global Optimization Toolbox software.
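
At the command line, the equivalent choice is the Method option of tunefisOptions, used with tunefis. The following is a minimal sketch, assuming a FIS variable fis and training data trainIn and trainOut; all names are illustrative.

    % Valid Method values are "ga", "particleswarm", "patternsearch",
    % "simulannealbnd", and "anfis".
    options = tunefisOptions("Method","particleswarm");

    % Get the tunable parameter settings and tune against the data.
    [inSet,outSet,ruleSet] = getTunableSettings(fis);
    fisTuned = tunefis(fis,[inSet;outSet;ruleSet],trainIn,trainOut,options);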

To use default tuning options for any method, select the Use default method options parameter.

Global Optimization Toolbox Method Options

For the Global Optimization Toolbox tuning methods, you can specify two sets of options: algorithm-specific options and FIS tuning options.

Algorithm-Specific Options

To specify algorithm-specific options, expand the Method Options section and add optimization options. Any options that you do not specify use their default values.

To configure an option, in the leftmost drop-down list, select the option category. In the next drop-down list, select the optimization option. Then, specify the option value.

For example, the following figure shows how to configure two options for the genetic algorithm tuning method.

  • Maximum number of generations, where:

    • The option category is Run time limits.

    • The option is Max generations.

    • The option value is 20.

  • Population size, where:

    • The option category is Population settings.

    • The option is Population size.

    • The option value is 100.

Genetic algorithm tuning options.
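
In code, these algorithm-specific settings live in the MethodOptions property of tunefisOptions, which holds the options object for the selected Global Optimization Toolbox method. A sketch of the genetic algorithm settings shown in the figure:

    options = tunefisOptions("Method","ga");

    % Unspecified options keep their default values.
    options.MethodOptions.MaxGenerations = 20;   % Run time limits > Max generations
    options.MethodOptions.PopulationSize = 100;  % Population settings > Population size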

To add or remove options, click the corresponding add or remove icon.

For more information on the algorithm-specific tuning options, click the question mark icon to see the Global Optimization Toolbox documentation.

FIS Tuning Options

For all Global Optimization Toolbox optimization methods, you can specify the following FIS tuning options.

Max number of rules

Maximum number of rules, NR, in the FIS after optimization when using the Learning optimization type. The final number of rules can be less than NR because duplicate rules with the same antecedent values are removed from the rule base during tuning.

To automatically set NR based on the number of input variables and the number of membership functions for each input variable, select the auto parameter.

This option is ignored when the optimization type is Tuning.

Random number seed

Select a method for setting the random number generator seed before tuning. For more information, see rng.

  • Initialize Mersenne Twister generator with seed 0 for reproducible results — Initialize the Mersenne Twister generator with seed 0. Use this option for reproducible tuning results. This is the default setting at the start of each MATLAB® session.

  • Initialize generator based on the current time for different sequences — Initialize the generator based on the current time, resulting in a different sequence for each tuning process.

Distance metric

Type of distance metric used for computing the cost for the optimized parameter values with respect to the training data, specified as one of the following:

  • Root mean square error — Root-mean-squared error

  • Vector 1-norm — Vector 1-norm

  • Vector 2-norm — Vector 2-norm

Ignore invalid parameters

Select this parameter to ignore invalid parameter values generated during the tuning process.

Use parallel computing

Select this parameter to use parallel computing in the optimization process. Using parallel computing requires Parallel Computing Toolbox™ software.
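
For reference, a sketch of how these options map to tunefisOptions properties and the rng function at the command line; the property values shown are illustrative.

    rng("default")                    % Mersenne Twister with seed 0, for reproducibility
    options = tunefisOptions("Method","ga","OptimizationType","learning");
    options.NumMaxRules = 25;         % Max number of rules
    options.DistanceMetric = "rmse";  % alternatives are "norm1" and "norm2"
    options.UseParallel = true;       % requires Parallel Computing Toolbox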

ANFIS Tuning Options

To configure the ANFIS tuning algorithm, specify the following tuning options. For more information, see Neuro-Adaptive Learning and ANFIS.

Optimization method

Optimization method used in membership function parameter training. In the drop-down list, select one of the following:

  • Backpropagation with gradient descent — A steepest-descent backpropagation approach for all parameters.

  • Least squares integration with backpropagation — Hybrid method consisting of backpropagation for the parameters associated with the input membership functions, and least squares estimation for the parameters associated with the output membership functions.

Epoch number

Maximum number of training epochs, specified as a positive integer.

Error goal

Training error goal, specified as a positive scalar. The training process stops when the training error is less than or equal to the training error goal.

Initial step size

Initial training step size, specified as a positive scalar.

During training, the software updates the step size according to the following rules:

  • If the error undergoes four consecutive reductions, increase the step size by multiplying it by the step-size increase rate.

  • If the error undergoes two consecutive combinations of one increase and one reduction, decrease the step size by multiplying it by the step-size decrease rate.

Step size decrease rate

Step-size decrease rate, specified as a positive scalar less than 1.

Step size increase rate

Step-size increase rate, specified as a scalar greater than 1.

Input validation data

To specify input validation data, in the drop-down list:

  • To use data previously imported into the app, select a data set under Imported Data Sets.

  • To use data from the MATLAB workspace, select a data set under Workspace Data Sets.

Output validation data

To specify output validation data, in the drop-down list:

  • To use data previously imported into the app, select a data set under Imported Data Sets.

  • To use data from the MATLAB workspace, select a data set under Workspace Data Sets.
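
At the command line, the corresponding settings are properties of anfisOptions. The following is a minimal sketch, assuming training and validation arrays trnData and valData, each of the form [inputData outputData]; all names are illustrative.

    options = anfisOptions("EpochNumber",40, ...
        "ErrorGoal",0, ...
        "InitialStepSize",0.01, ...
        "StepSizeDecreaseRate",0.9, ...
        "StepSizeIncreaseRate",1.1, ...
        "OptimizationMethod",1);       % 1 = hybrid least squares with backpropagation, 0 = backpropagation only
    options.ValidationData = valData;
    [fis,trainError] = anfis(trnData,options);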

K-Fold Cross Validation

When tuning a system using a Global Optimization Toolbox method, you can use k-fold cross-validation to prevent overfitting to your data. To configure the validation, in the Tuning Options dialog box, on the Validation tab, specify the following options.

Validation OptionDescription
Number of cross validations

Number of cross validations to perform, NV, specified as a nonnegative integer less than or equal to the number of rows in the training data.

When NV is 0 or 1, the tuning algorithm uses the entire input data set for training and does not perform validation.

Otherwise, the tuning algorithm randomly partitions the input data into NV subsets of approximately equal size. The algorithm then performs NV training-validation iterations. For each iteration, one data subset is used as validation data with the remaining subsets used as training data.

Validation tolerance

Maximum allowable increase in validation cost when using k-fold cross validation, specified as a scalar value in the range [0,1]. A higher validation tolerance value produces a longer training-validation iteration, with an increased possibility of data overfitting.

The increase in validation cost, ΔC, is the difference between the average validation cost and the minimum validation cost, Cmin, for the current training-validation iteration. The average validation cost is a moving average with a window size specified using the Validation window size option.

The app stops the current training-validation iteration when the ratio between ΔC and Cmin exceeds the validation tolerance. For example, if Cmin is 0.20 and the average validation cost is 0.23, then ΔC/Cmin = 0.15, and the iteration stops for any validation tolerance below that value.

Validation window size

Window size for computing average validation cost, specified as a positive integer. The validation cost moving average is computed over the last NW validation cost values, where NW is the validation window size. A higher window size value produces a longer training-validation iteration, with an increased possibility of data overfitting. A lower window size can cause early termination of the tuning process when the training data is noisy.
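
In code, these validation settings correspond to the KFoldValue, ValidationTolerance, and ValidationWindowSize properties of tunefisOptions. A sketch with illustrative values:

    options = tunefisOptions("Method","ga");
    options.KFoldValue = 4;             % number of cross validations
    options.ValidationTolerance = 0.1;  % maximum allowable relative increase in validation cost
    options.ValidationWindowSize = 5;   % moving-average window for validation cost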

K-fold cross validation is not supported for ANFIS tuning.

For more information on k-fold cross validation, see Optimize FIS Parameters with K-Fold Cross-Validation.
