Decision-Review-System (DRS) optimizer
Version 1.0.0 (3.6 KB) by
praveen kumar
This program implements the DRS optimizer to minimize the Sphere function.
1. Notation and problem statement
We solve the box-constrained minimization problem
$$\min_{x\in\Omega} f(x),\qquad \Omega = \{x\in\mathbb{R}^d:\ \ell \le x \le u\},$$
where $f:\mathbb{R}^d\to\mathbb{R}$ is the objective (for example, Sphere: $f(x)=\sum_{j=1}^d x_j^2$), $d$ is the dimension, and $\ell,u\in\mathbb{R}^d$ are the lower/upper bounds.
Algorithmic parameters:
- $N$ — population size (agents).
- $T$ — maximum number of iterations.
- $R$ — initial reviews per agent.
- $k$ — number of probes per review (local tries).
- $s_0$ — initial exploration step fraction (relative to the bounds).
- $\rho\in(0,1)$ — review step reduction factor.
- (Optional) $\tau$ and $p$ — replenishment period and fraction.
Population at iteration $t$:
- $x_i^{(t)}\in\Omega,\ i=1,\dots,N$ (agent solutions).
- $f_i^{(t)} = f(x_i^{(t)})$.
- $r_i^{(t)}\in\{0,1,\dots\}$ — reviews left for agent $i$.
Global best:
$$x^*_t = \arg\min_{i} f_i^{(t)},\qquad f^*_t = \min_i f_i^{(t)}.$$
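As a concrete instance, a minimal MATLAB setup sketch for the Sphere benchmark (the bounds, `N = 40`, and all variable names are illustrative assumptions, not taken from the submission):

```matlab
% Sphere objective on the box [-5.12, 5.12]^d, with a uniform initial population.
d  = 10;                              % dimension
f  = @(x) sum(x.^2);                  % Sphere: f(x) = sum_j x_j^2
lo = -5.12*ones(d,1);                 % lower bounds (ell)
hi =  5.12*ones(d,1);                 % upper bounds (u)
N  = 40;                              % population size
X  = lo + (hi - lo).*rand(d, N);      % agents, one column each
F  = arrayfun(@(i) f(X(:,i)), 1:N);   % objective values f_i
[fBest, iBest] = min(F);              % global best
xBest = X(:, iBest);
```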
2. Step-size schedule
A decreasing step fraction:
$$s(t) = s_0\left(1 - \frac{t}{T}\right),\qquad t=0,1,\dots,T.$$
The absolute step per coordinate uses the box range $\Delta = u-\ell$; scalar/vector multiplications are elementwise.
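As a sketch, the schedule and the per-coordinate range are one line each in MATLAB (continuing `lo` and `hi` from the setup above; `s0 = 0.4` and `T = 500` are assumed defaults):

```matlab
% Linearly decreasing step fraction: s(0) = s0, s(T) = 0.
s0 = 0.4;  T = 500;
s = @(t) s0*(1 - t/T);
Delta = hi - lo;            % per-coordinate box range
```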
3. Candidate proposal (on-field decision)
For agent $i$ at iteration $t$, generate a candidate by Gaussian perturbation:
$$\tilde{x}_i = x_i^{(t)} + s(t)\,\Delta\odot \xi,\qquad \xi\sim\mathcal{N}(0,I_d),$$
then project onto the bounds:
$$\tilde{x}_i \leftarrow \operatorname{proj}_{[\ell,u]}(\tilde{x}_i) \quad(\text{elementwise clamp}).$$
Evaluate $\tilde{f}_i = f(\tilde{x}_i)$.
Acceptance (greedy, Metropolis-free):
$$\text{if } \tilde{f}_i < f_i^{(t)} \text{ then } x_i^{(t+1)}=\tilde{x}_i,\ f_i^{(t+1)} = \tilde{f}_i.$$
If $\tilde{f}_i$ improves on the global best, update $x_t^*, f_t^*$.
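A sketch of one proposal-and-accept step for a single agent, continuing the variables from the sketches above (`i` and `t` are arbitrary here):

```matlab
% On-field decision for agent i at iteration t.
i = 1;  t = 10;
xTilde = X(:,i) + s(t)*Delta.*randn(d,1);      % Gaussian perturbation
xTilde = min(max(xTilde, lo), hi);             % clamp to [lo, hi]
fTilde = f(xTilde);
if fTilde < F(i)                               % greedy acceptance
    X(:,i) = xTilde;  F(i) = fTilde;
    if fTilde < fBest, fBest = fTilde; xBest = xTilde; end
end
```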
4. Review mechanism (intensification)
If $\tilde{f}_i \ge f_i^{(t)}$ (the proposal is worse) and $r_i^{(t)}>0$, the agent uses one review:
- decrement the review count: $r_i^{(t)}\leftarrow r_i^{(t)}-1$;
- perform $k$ focused probes with a reduced step:
$$s_{\text{rev}}(t) = \rho\, s(t),\qquad y_{i,\ell} = x_i^{(t)} + s_{\text{rev}}(t)\,\Delta\odot \zeta_\ell,\qquad \zeta_\ell\sim\mathcal{N}(0,I_d),$$
project each $y_{i,\ell}$ into $[\ell,u]$ and compute $f(y_{i,\ell})$ for $\ell=1,\dots,k$;
- let $(y_i^\text{best}, f_i^\text{best})=\arg\min_{\ell} f(y_{i,\ell})$;
- if $f_i^\text{best} < f_i^{(t)}$, accept $x_i^{(t+1)} = y_i^\text{best}$; otherwise keep $x_i^{(t+1)}=x_i^{(t)}$.
So reviews allow an agent to perform a short, intensified local search to “overturn” a worse on-field decision.
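A sketch of one review for agent `i`, assuming the proposal above was rejected and a review counter `r = R*ones(1,N)` (with `R = 2`) was initialized alongside the population:

```matlab
% One review: k probes around x_i with the reduced step rho*s(t).
rho = 0.5;  k = 4;
if r(i) > 0
    r(i) = r(i) - 1;                            % spend one review
    Y  = X(:,i) + rho*s(t)*Delta.*randn(d,k);   % k probe points (columns)
    Y  = min(max(Y, lo), hi);                   % clamp each probe to the box
    Fy = arrayfun(@(l) f(Y(:,l)), 1:k);         % evaluate the probes
    [fp, l] = min(Fy);
    if fp < F(i)                                % overturn if a probe is better
        X(:,i) = Y(:,l);  F(i) = fp;
    end
end
```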
5. No-review case (exploration fallback)
If $\tilde{f}_i\ge f_i^{(t)}$ and $r_i^{(t)}=0$, perform a small jitter:
$$\hat{x}_i = x_i^{(t)} + \gamma\, s(t)\,\Delta\odot \eta,\qquad \eta\sim\mathcal{U}([-1,1]^d),$$
with a tiny $\gamma$ (e.g. 0.05). Accept $\hat{x}_i$ if it improves on $f_i^{(t)}$.
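In the same sketch style, with `eta` drawn via `2*rand - 1`:

```matlab
% Exploration fallback when no reviews remain: tiny uniform jitter.
gamma = 0.05;
xHat = X(:,i) + gamma*s(t)*Delta.*(2*rand(d,1) - 1);   % eta ~ U([-1,1]^d)
xHat = min(max(xHat, lo), hi);
fHat = f(xHat);
if fHat < F(i), X(:,i) = xHat; F(i) = fHat; end
```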
6. Replenishment (optional)
Every $\tau$ iterations, restore reviews to a random subset of agents: choose $m=\lceil pN\rceil$ agents and set $r_j \leftarrow R$ for those. This encourages intermittent deep search. In practice, $\tau\approx T/5$ and $p\in(0,0.3)$.
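A sketch of this step inside the main loop (the guard on `t` and the defaults are assumptions):

```matlab
% Every tau iterations, refill the review budget of ceil(p*N) random agents.
tau = round(T/5);  p = 0.2;  R = 2;
if tau > 0 && mod(t+1, tau) == 0
    idx = randperm(N, ceil(p*N));   % random subset of agents
    r(idx) = R;                     % restore their reviews
end
```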
7. Projection operator
$$\operatorname{proj}_{[\ell,u]}(z) = \min(\max(z,\ell),u) \quad \text{(elementwise)}.$$
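For reuse across the sketches above, the projection can be wrapped in a one-line helper (the name `projBox` is illustrative):

```matlab
% Elementwise clamp of z onto the box [lo, hi]; works for vectors or matrices.
projBox = @(z, lo, hi) min(max(z, lo), hi);
```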
8. Termination
Stop after $t=T$, or earlier if $f^*_t$ falls below a desired tolerance $\epsilon$.
9. Algorithm summary (pseudocode)
For completeness:
- Initialize $x_i^{(0)}\sim \ell + \Delta\odot\text{Uniform}(0,1)^d$, evaluate $f_i^{(0)}$, and set $r_i^{(0)}=R$.
- For $t=0$ to $T-1$:
  - compute $s(t)$;
  - for each agent $i$:
    - propose $\tilde{x}_i$ and evaluate $\tilde{f}_i$;
    - if $\tilde{f}_i < f_i^{(t)}$, accept;
    - else if $r_i^{(t)}>0$, use a review: run $k$ probes with the reduced step and accept the best probe if it improves;
    - else attempt a small jitter and accept if it improves;
  - optionally replenish some reviews every $\tau$ iterations;
  - update the global best.
- Return $x^*_T$, $f^*_T$.
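Putting the pieces together, a compact, self-contained MATLAB sketch of this loop (an illustrative reimplementation of the description above, not the submitted code; the function name `drs_optimize` and its signature are assumptions):

```matlab
function [xBest, fBest] = drs_optimize(f, lo, hi, N, T, R, k, s0, rho, gamma, tau, p)
% DRS_OPTIMIZE  Illustrative sketch of the DRS optimizer described above.
d = numel(lo);  Delta = hi - lo;
X = lo + Delta.*rand(d, N);                        % initial agents
F = arrayfun(@(i) f(X(:,i)), 1:N);                 % their objective values
r = R*ones(1, N);                                  % review budgets
[fBest, iBest] = min(F);  xBest = X(:, iBest);
for t = 0:T-1
    s = s0*(1 - t/T);                              % step schedule
    for i = 1:N
        % On-field decision: Gaussian proposal, clamped to the box.
        xc = min(max(X(:,i) + s*Delta.*randn(d,1), lo), hi);
        fc = f(xc);
        if fc < F(i)                               % greedy acceptance
            X(:,i) = xc;  F(i) = fc;
        elseif r(i) > 0                            % review: k reduced-step probes
            r(i) = r(i) - 1;
            Y  = min(max(X(:,i) + rho*s*Delta.*randn(d,k), lo), hi);
            Fy = arrayfun(@(l) f(Y(:,l)), 1:k);
            [fp, l] = min(Fy);
            if fp < F(i), X(:,i) = Y(:,l); F(i) = fp; end
        else                                       % fallback: tiny uniform jitter
            xj = min(max(X(:,i) + gamma*s*Delta.*(2*rand(d,1)-1), lo), hi);
            fj = f(xj);
            if fj < F(i), X(:,i) = xj; F(i) = fj; end
        end
    end
    if tau > 0 && mod(t+1, tau) == 0               % optional replenishment
        r(randperm(N, ceil(p*N))) = R;
    end
    [fCur, iCur] = min(F);                         % update global best
    if fCur < fBest, fBest = fCur; xBest = X(:, iCur); end
end
end
```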
10. Complexity
Let $C_f$ be the cost per evaluation of $f$. Per iteration:
- baseline proposals: $N$ evaluations;
- reviews: each used review adds up to $k$ extra evaluations. If the average number of reviews used per iteration is $\bar{r}$, the cost per iteration is $\approx (N + \bar{r}k)\,C_f$.

Total cost: $\mathcal{O}\big(T\,(N + \bar{r}k)\,C_f\big)$.
11. Behavior & design rationale
- Greedy acceptance enforces monotone improvement for each agent.
- Reviews introduce occasional intensified local search (exploitation) but are limited by $R$, preserving exploration.
- The step schedule $s(t)$ reduces step sizes over time for convergence-like behavior.
- Replenishment injects episodic intensification to escape stagnation.
- Projection keeps iterates feasible.
12. Parameter recommendations (practical)
- $N$: 20–100 (40 is a good default for moderate dimensions).
- $T$: depends on budget; 200–1000 is typical.
- $R$: 1–4 reviews per agent (2 is standard).
- $k$: 3–8 probes per review (4 is used in the code).
- $s_0$: 0.2–0.6 (0.4 is moderate).
- $\rho$: 0.3–0.7 (0.5 reduces the step notably during reviews).
- $\gamma$ (jitter scale): 0.01–0.1.
- $\tau$: $T/5$; $p$: 0.1–0.25 if using replenishment.

The DRS optimization algorithm balances exploration (standard proposals) with exploitation (review-based local searches), inspired by the cricket DRS process. It ensures monotone improvement under greedy acceptance, bounded search-space handling, and probabilistic intensification through limited review opportunities.
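These recommendations can be collected in one place; a sketch of a defaults struct matching the values above (the struct and its field names are illustrative assumptions):

```matlab
% Default DRS parameters, matching the recommended ranges above.
opts = struct( ...
    'N', 40, ...       % population size
    'T', 500, ...      % iterations
    'R', 2, ...        % reviews per agent
    'k', 4, ...        % probes per review
    's0', 0.4, ...     % initial step fraction
    'rho', 0.5, ...    % review step reduction
    'gamma', 0.05, ... % jitter scale
    'tau', 100, ...    % replenishment period (~T/5)
    'p', 0.2);         % replenishment fraction
```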
13. Specialization to Sphere
If $f(x)=\sum_{j=1}^d x_j^2$, the method is derivative-free and will converge because:
- the step size decreases, $s(t)\to 0$;
- occasional local probes allow moving toward the origin;
- greedy acceptance ensures a nonincreasing per-agent objective.
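A usage sketch on the Sphere function, assuming the `drs_optimize` sketch above is saved as `drs_optimize.m`:

```matlab
% Minimize the 10-D Sphere function over [-5.12, 5.12]^10.
d  = 10;
f  = @(x) sum(x.^2);
lo = -5.12*ones(d,1);  hi = 5.12*ones(d,1);
[xBest, fBest] = drs_optimize(f, lo, hi, 40, 500, 2, 4, 0.4, 0.5, 0.05, 100, 0.2);
fprintf('best f = %.3e\n', fBest);   % should approach 0 as T grows
```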
14. Variants and extensions
- Replace the Gaussian proposal with Lévy flights or differential moves.
- Accept with a Metropolis probability to allow occasional uphill moves.
- Maintain an external archive of best samples for crossover.
- Adaptive reviews: increase $R$ for agents achieving large improvements.
Cite as
praveen kumar (2026). Decision-Review-System (DRS) optimizer (https://it.mathworks.com/matlabcentral/fileexchange/182348-decision-review-system-drs-optimizer), MATLAB Central File Exchange. Retrieved .
MATLAB Release Compatibility
Created with R2025b
Compatible with any release
Platform Compatibility
Windows macOS Linux
Tags
DRS
| Version | Published | Release Notes |
|---|---|---|
| 1.0.0 | | |
