# Is it bad practice to have nonlinear constraints array change size depending on the input solution in a genetic algorithm?


In a GA with nonlinear constraints, the nonlinear constraint function is evaluated for each candidate solution and returns [c,ceq]. Each element of these arrays holds one constraint value that is checked for that solution.

The function handle is created as:

```matlab
ConstraintFunction = @(x) Func_Nonlinear_Constraints(x, extra_parameters)
```
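For context, a handle like this is typically passed as the `nonlcon` argument of `ga`. A minimal sketch, assuming a binary problem encoded as integer variables in {0,1} (the fitness function name and `nvars` value are hypothetical):

```matlab
% Illustrative ga call; myFitness and extra_parameters are assumed names.
nvars = 8;
intcon = 1:nvars;                          % all variables integer
lb = zeros(1, nvars);
ub = ones(1, nvars);                       % integer bounds 0..1 give binary x
ConstraintFunction = @(x) Func_Nonlinear_Constraints(x, extra_parameters);
[x_best, fval] = ga(@myFitness, nvars, [], [], [], [], ...
                    lb, ub, ConstraintFunction, intcon);
```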

In my case, the number of checks to make depends on the number of ones in a given solution (e.g., x = [0 1 1 0 0 1 0 1]).

One way to implement this is to give "c" a fixed size that does not depend on the solution. The scalar value produced by each check is then placed into the array, so the array ends up containing both the actual constraint values and a number of padding zeros.
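The fixed-size approach could be sketched like this (the worst-case length and the helper `some_check` are assumed, hypothetical names). Since `ga` treats `c(i) <= 0` as satisfied, the padding zeros are harmless but still evaluated:

```matlab
% Sketch of the fixed-size variant: c is preallocated to a worst-case
% length; unused slots remain 0, which counts as a satisfied constraint.
function [c, ceq] = Func_Nonlinear_Constraints(x, extra_parameters)
    MAX_CHECKS = 10;                 % assumed worst-case number of checks
    c = zeros(MAX_CHECKS, 1);
    active = find(x == 1);           % checks depend on the ones in x
    for k = 1:numel(active)
        c(k) = some_check(active(k), extra_parameters);  % hypothetical check
    end
    ceq = [];                        % no equality constraints in this sketch
end
```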

Another way to implement this would be to make the size of "c" equal to the number of checks for that particular solution.

If there are 4 checks, in the first case "c" would have a fixed size of, say, 10x1 (the 4 check values plus 6 zeros), while in the second approach "c" would be 4x1.
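The variable-size variant differs only in the allocation: "c" gets exactly one entry per active check (again, `some_check` is a hypothetical helper standing in for the real check):

```matlab
% Sketch of the variable-size variant: c has one entry per check, so its
% length changes from one candidate solution to the next.
function [c, ceq] = Func_Nonlinear_Constraints(x, extra_parameters)
    active = find(x == 1);
    c = zeros(numel(active), 1);     % size depends on the solution
    for k = 1:numel(active)
        c(k) = some_check(active(k), extra_parameters);  % hypothetical check
    end
    ceq = [];
end
```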

To speed up the code in my particular problem, working with a "c" of changing size could be useful. However, I am concerned it might cause an issue. At the same time, I reckon the GA does not care how many extra zeros (i.e., satisfied constraints) there are, but only looks at how many constraints are violated. If that is true, a "c" of changing size should not be a problem.

Is this correct? Or would a "c" (and "ceq") of changing size be a problem for the algorithm? Is it bad practice?


### Accepted Answer

Walter Roberson
on 29 Oct 2020

