Hacky unconstrained vs. constrained numerical optimization
Short version: Why does unconstrained optimization with a constraint "hacked" into the objective function work, while properly constrained optimization doesn't?
Long version:
I'm trying to find a p x n basis U for a subspace that explains a chosen (target) fraction of the variance in my m x p dataset X. The objective function is:
projection = U'*X';                          % n x m projection of the data onto U
fracVar = sum(var(projection'))/totalVar;    % fraction of the total variance captured
fVal = (targetVar - fracVar)^2;              % squared distance from the target fraction
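For reference, totalVar and targetVar are computed up front, roughly like this (the dimensions and values below are just illustrative placeholders):
m = 500;  p = 20;  n = 3;       % dataset size and subspace dimension (made-up values)
X = randn(m, p);                % m x p dataset, one observation per row (placeholder data)
totalVar = sum(var(X));         % total variance across all p variables
targetVar = 0.6;                % target fraction of variance to explain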
I tried using `fmincon()` with a nonlinear equality constraint that U must have orthogonal columns of unit norm:
temp = norm(U'*U - eye(size(U,2)));          % zero when U has orthonormal columns
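Concretely, the fmincon() setup looks roughly like this (helper names are illustrative; U is passed to the solver as a vector and reshaped inside the helpers):
objFun  = @(u) subspaceObjective(u, X, totalVar, targetVar, p, n);
nonlcon = @(u) orthConstraint(u, p, n);
u0   = randn(p*n, 1);                               % random starting point
opts = optimoptions('fmincon', 'Algorithm', 'interior-point');
uOpt = fmincon(objFun, u0, [], [], [], [], [], [], nonlcon, opts);
U    = reshape(uOpt, p, n);

function fVal = subspaceObjective(u, X, totalVar, targetVar, p, n)
    U = reshape(u, p, n);
    projection = U'*X';                              % n x m projection of the data
    fracVar = sum(var(projection'))/totalVar;        % fraction of variance captured
    fVal = (targetVar - fracVar)^2;
end

function [c, ceq] = orthConstraint(u, p, n)
    U = reshape(u, p, n);
    c = [];                                          % no inequality constraints
    ceq = norm(U'*U - eye(n));                       % zero iff U has orthonormal columns
end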
I cannot get this to work. It fails in different ways, including not getting anywhere near the desired fraction of variance and failing to meet the constraints.
However, if I use `fminsearch()` instead and change the objective to:
projection = orth(U)'*X';                    % project onto an orthonormal basis for span(U)
fracVar = sum(var(projection'))/totalVar;    % fraction of the total variance captured
fVal = abs(targetVar - fracVar);             % absolute distance from the target fraction
and then apply `orth()` again to the result, it seems to work perfectly. I thought this would be the "wrong" way, since I'm basically hacking the constraint into the objective.
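Concretely, the fminsearch() version looks roughly like this (again, names are illustrative):
objFun = @(u) orthObjective(u, X, totalVar, targetVar, p, n);
u0   = randn(p*n, 1);                                % random starting point
uOpt = fminsearch(objFun, u0);
U    = orth(reshape(uOpt, p, n));                    % re-orthonormalize the returned result

function fVal = orthObjective(u, X, totalVar, targetVar, p, n)
    U = orth(reshape(u, p, n));                      % orthonormal basis for span(U)
    projection = U'*X';
    fracVar = sum(var(projection'))/totalVar;        % fraction of variance captured
    fVal = abs(targetVar - fracVar);
end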
What is an explanation for this? Is there a "correct" approach to this problem?