
Commit 79e857f

Version 2024.6. Introduced MiniBiteOpt parallel optimizer.
Improved solution generators 1 and 3. Code cleanup.
1 parent b8e4cd4 commit 79e857f

11 files changed (+696, -788 lines)

README.md (+41, -32)
@@ -18,7 +18,7 @@ optimization attempts to reach optimum.
 Instead of iterating through different "starting guesses" to find the optimum
 like in deterministic methods, this method requires optimization attempts
 with different random seeds. The stochastic nature of the method allows it to
-automatically "fall" into different competing minima at each attempt. If there
+automatically "fall" into different competing minima on each attempt. If there
 are no competing minima present in a function (or the true/global minimum is
 rogue and cannot be detected), this method returns the same optimum in the
 absolute majority of attempts.
@@ -34,7 +34,8 @@ Python binding is available as a part of [fcmaes library](https://github.com/die
 
 If you are regularly using BiteOpt in a commercial environment, you may
 consider donating/sponsoring the project. Please contact the author via
-
+[email protected] or [email protected]. A solver for AMPL NL models is
+available commercially.
 
 ## Comparison ##
 
@@ -275,7 +276,7 @@ suggested constraint tolerance is 10<sup>-4</sup>, but a more common
 10<sup>-6</sup> can also be used; lower values are not advised. Models
 with up to 200 constraints, both equalities and inequalities, were tested
 with this method. In practice, on a large set of problems, this method finds a
-feasible solution in up to 96% of cases (with 20-30 attempts per problem).
+feasible solution in up to 97% of cases (with 20-30 attempts per problem).
 
 ```c
 real_value = cost;
@@ -314,15 +315,16 @@ BiteOpt is able to solve binary combinatorial problems, if the cost function
 is formulated as a sum of differences between bit values and continuous
 variables in the range [0; 1] - these differences can be used as usual
 constraints while binary value equality tolerance can be set to as low as
-10<sup>-12/sup>.
+10<sup>-12</sup>.

 ## Multi-Objective Optimization ##
 
 BiteOpt does not offer MOO "out of the box". However, BiteOpt can successfully
-solve MOO problems via direct hyper-volume optimization. This approach
-requires a hyper-volume tracker which keeps track of a certain number of
-improving solutions, and updates its state (and hyper-volume estimate) on each
-objective function evaluation (optcost). The approach is demonstrated in
+solve MOO problems via direct optimization of hypervolume of a set of points.
+This approach requires a hypervolume tracker which keeps track of a certain
+number of improving solutions, and updates its state (and hypervolume
+estimate) on each objective function evaluation (optcost). The approach is
+demonstrated in
 [fcmaes tutorial - quantumcomm.py](https://github.com/dietmarwo/fast-cma-es/blob/master/examples/esa2/quantumcomm.py).
 
 ## Convergence Proof ##
@@ -620,6 +622,7 @@ biological DNA crossing-over, but on a single-bit scale.
 6. The "short-cut" parameter vector generation.
 
 $$ z=x_\text{best}[\text{rand}(1\ldots N)] $$
+
 $$ x_\text{new}[i]=z, \quad i=1,\ldots,N $$
 
 7. A solution generator that randomly combines solutions from the main and
@@ -655,6 +658,36 @@ sampling around centroid.
 value space, in a randomized fashion: each parameter value receives a DE
 operation value of a randomly-chosen parameter.
 
+## SpherOpt ##
+
+This is a "converging hyper-spheroid" optimization method (or hyper-sphere,
+depending on the optimization space's bounds). While it is not as effective
+as, for example, CMA-ES, it also withstands parameter space scaling,
+offsetting, and rotation well. Since version 2021.1 it is used as a companion
+(parallel optimizer) to BiteOpt, with excellent results.
+
+This method is in parts similar to SMA-ES, but instead of keeping track of
+per-parameter sigmas and a covariance matrix, and using Gaussian sampling,
+SpherOpt simply selects random points on a hyper-spheroid (with a bit of
+added jitter at lower dimensions), which eventually converges to a point.
+This makes the method very computationally efficient while still providing
+immunity to coordinate axis rotations.
+
+This method uses the same self-optimization technique as BiteOpt; it is,
+however, not a vital element of the method.
+
+## MiniBiteOpt ##
+
+This solver is a minimized version of BiteOpt which incorporates the most
+effective solution generators, reminiscent of early BiteOpt versions. This
+solver is used as an additional parallel optimizer in BiteOpt.
+
+## NMSeqOpt ##
+
+The CNMSeqOpt class implements a sequential Nelder-Mead simplex method with
+"stall count" tracking. This optimizer is used as an additional parallel
+optimizer in BiteOpt.
+
 ## SMA-ES ##
 
 This is a working optimization method called "SigMa Adaptation Evolution
@@ -698,30 +731,6 @@ controlled via the `EvalFac` parameter, which adjusts method's overhead
 with only a minor effect on convergence property. Method's typical
 observed complexity is O(N<sup>1.6</sup>).
 
-## SpherOpt ##
-
-This is a "converging hyper-spheroid" optimization method (or hyper-sphere,
-depending on the optimization space's bounds). While it is not as effective
-as, for example, CMA-ES, it also withstands parameter space scaling,
-offsetting, and rotation well. Since version 2021.1 it is used as a companion
-(parallel optimizer) to BiteOpt, with excellent results.
-
-This method is in parts similar to SMA-ES, but instead of keeping track of
-per-parameter sigmas and a covariance matrix, and using Gaussian sampling,
-SpherOpt simply selects random points on a hyper-spheroid (with a bit of
-added jitter at lower dimensions), which eventually converges to a point.
-This makes the method very computationally efficient while still providing
-immunity to coordinate axis rotations.
-
-This method uses the same self-optimization technique as BiteOpt; it is,
-however, not a vital element of the method.
-
-## NMSeqOpt ##
-
-The CNMSeqOpt class implements a sequential Nelder-Mead simplex method with
-"stall count" tracking. This optimizer is used as an additional parallel
-optimizer in BiteOpt.
-
 ## DEOpt ##
 
 The CDEOpt class implements a Differential Evolution-alike DFO solver, but in
