@@ -18,7 +18,7 @@ optimization attempts to reach optimum.
 Instead of iterating through different "starting guesses" to find optimum
 like in deterministic methods, this method requires optimization attempts
 with different random seeds. The stochastic nature of the method allows it to
-automatically "fall" into different competing minima at each attempt. If there
+automatically "fall" into different competing minima on each attempt. If there
 are no competing minima in a function present (or the true/global minimum is
 rogue and cannot be detected), this method in the absolute majority of attempts
 returns the same optimum.
@@ -34,7 +34,8 @@ Python binding is available as a part of [fcmaes library](https://github.com/die
 
 If you are regularly using BiteOpt in a commercial environment, you may
 consider donating/sponsoring the project. Please contact the author via
-[email protected] .
+[email protected] or
+[email protected] . A solver for AMPL NL models is
+available commercially.
 
 ## Comparison ##
@@ -275,7 +276,7 @@ suggested constraint tolerance is 10<sup>-4</sup>, but a more common
 10<sup>-6</sup> can also be used; lower values are not advised for use. Models
 with up to 200 constraints, both equalities and non-equalities, were tested
 with this method. In practice, on a large set of problems, this method finds a
-feasible solution in up to 96 % of cases (with 20-30 attempts per problem).
+feasible solution in up to 97 % of cases (with 20-30 attempts per problem).
 
 ``` c
 real_value = cost;
@@ -314,15 +315,16 @@ BiteOpt is able to solve binary combinatorial problems, if the cost function
 is formulated as a sum of differences between bit values and continuous
 variables in the range [ 0; 1 ] - these differences can be used as usual
 constraints while binary value equality tolerance can be set to as low as
-10<sup>-12/sup>.
+10<sup>-12</sup>.
 
 ## Multi-Objective Optimization ##
 
 BiteOpt does not offer MOO "out of the box". However, BiteOpt can successfully
-solve MOO problems via direct hyper-volume optimization. This approach
-requires a hyper-volume tracker which keeps track of a certain number of
-improving solutions, and updates its state (and hyper-volume estimate) on each
-objective function evaluation (optcost). The approach is demonstrated in
+solve MOO problems via direct optimization of hypervolume of a set of points.
+This approach requires a hypervolume tracker which keeps track of a certain
+number of improving solutions, and updates its state (and hypervolume
+estimate) on each objective function evaluation (optcost). The approach is
+demonstrated in
 [fcmaes tutorial - quantumcomm.py](https://github.com/dietmarwo/fast-cma-es/blob/master/examples/esa2/quantumcomm.py).
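As an illustration of the quantity such a tracker maintains, here is a sketch of the 2-objective (minimization) hypervolume of a non-dominated point set against a reference point. `hypervolume2d` is a hypothetical helper and not part of BiteOpt or fcmaes.

``` c
#include <stdlib.h>

/* Comparison function: sort (f1,f2) pairs by ascending f1. */
static int cmp_f1( const void* a, const void* b )
{
    const double f1a = (( const double* ) a )[ 0 ];
    const double f1b = (( const double* ) b )[ 0 ];
    return(( f1a > f1b ) - ( f1a < f1b ));
}

/* Hypervolume (to be maximized) of n non-dominated 2-objective points,
   stored as f1,f2 pairs, relative to a reference point (rx, ry) that is
   worse than every point in both objectives. */
double hypervolume2d( double* pts, const int n, const double rx, const double ry )
{
    double hv = 0.0;
    double prev_f2 = ry;
    int i;

    qsort( pts, (size_t) n, 2 * sizeof( double ), cmp_f1 );

    for( i = 0; i < n; i++ )
    {
        /* Each point contributes the rectangle between it and the part
           of the reference corner not covered by previous points. */
        hv += ( rx - pts[ i * 2 ]) * ( prev_f2 - pts[ i * 2 + 1 ]);
        prev_f2 = pts[ i * 2 + 1 ];
    }

    return( hv );
}
```

Maximizing this value (i.e. minimizing its negative as the BiteOpt cost) drives the tracked set toward the Pareto front.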
 
## Convergence Proof ##
@@ -620,6 +622,7 @@ biological DNA crossing-over, but on a single-bit scale.
 6. The "short-cut" parameter vector generation.
 
 $$ z=x_\text{best}[\text{rand}(1\ldots N)] $$
+
 $$ x_\text{new}[i]=z, \quad i=1,\ldots,N $$
 
 7. A solution generator that randomly combines solutions from the main and
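The two formulas in item 6 amount to copying one randomly chosen parameter of the best solution into every position of the new vector; a sketch (the function name is illustrative):

``` c
#include <stdlib.h>

/* Illustrative sketch of the "short-cut" generator: z is a randomly
   chosen parameter of the best solution, assigned to all parameters
   of the new solution. */
void shortcut_gen( const double* x_best, double* x_new, const int n )
{
    const double z = x_best[ rand() % n ];
    int i;

    for( i = 0; i < n; i++ )
    {
        x_new[ i ] = z;
    }
}
```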
@@ -655,6 +658,36 @@ sampling around centroid.
 value space, in a randomized fashion: each parameter value receives a DE
 operation value of a randomly-chosen parameter.
 
+## SpherOpt ##
+
+This is a "converging hyper-spheroid" optimization method (or hyper-sphere,
+depending on optimization space's bounds). While it is not as effective as,
+for example, CMA-ES, it withstands parameter space scaling, offsetting, and
+rotation well. Since version 2021.1 it is used as a companion (parallel
+optimizer) to BiteOpt, with excellent results.
+
+This method is in parts similar to SMA-ES, but instead of keeping track of
+per-parameter sigmas, covariance matrix, and using Gaussian sampling, SpherOpt
+simply selects random points on a hyper-spheroid (with a bit of added jitter
+at lower dimensions), which eventually converges to a point. This makes the
+method very computationally-efficient, and at the same time provides immunity
+to coordinate axis rotations.
+
+This method uses the same self-optimization technique as BiteOpt, which is,
+however, not a vital element of the method.
+
+## MiniBiteOpt ##
+
+This solver is a minimized version of BiteOpt. This version incorporates the
+most effective solution generators, reminiscent of early BiteOpt versions.
+This solver is used as an additional parallel optimizer in BiteOpt.
+
+## NMSeqOpt ##
+
+The CNMSeqOpt class implements the sequential Nelder-Mead simplex method
+with "stall count" tracking. This optimizer is used as an additional
+parallel optimizer in BiteOpt.
+
 
 ## SMA-ES ##
 
 This is a working optimization method called "SigMa Adaptation Evolution
@@ -698,30 +731,6 @@ controlled via the `EvalFac` parameter, which adjusts method's overhead
 with only a minor effect on convergence property. Method's typical
 observed complexity is O(N<sup>1.6</sup>).
 
-## SpherOpt ##
-
-This is a "converging hyper-spheroid" optimization method (or hyper-sphere,
-depending on optimization space's bounds). While it is not as effective as,
-for example, CMA-ES, it also stands parameter space scaling, offsetting, and
-rotation well. Since version 2021.1 it is used as a companion (parallel
-optimizer) to BiteOpt, with excellent results.
-
-This method is in parts similar to SMA-ES, but instead of keeping track of
-per-parameter sigmas, covariance matrix, and using Gaussian sampling, SpherOpt
-simply selects random points on a hyper-spheroid (with a bit of added jitter
-at lower dimensions), which eventually converges to a point. This makes the
-method very computationally-efficient, but at the same time provides immunity
-to coordinate axis rotations.
-
-This method uses the same self-optimization technique as BiteOpt which is,
-however, not a vital element of the method.
-
-## NMSeqOpt ##
-
-The CNMSeqOpt class implements sequential Nelder-Mead simplex method with
-the "stall count" tracking. This optimizer is used as an additional parallel
-optimizer in BiteOpt.
-
 
 ## DEOpt ##
 
 The CDEOpt class implements a Differential Evolution-alike DFO solver, but in