Commit

readme
shtadinada committed Nov 13, 2024
1 parent 7c8c688 commit 1d94478
Showing 6 changed files with 198 additions and 51 deletions.
53 changes: 29 additions & 24 deletions README.md
@@ -3,11 +3,12 @@
[![CI](https://github.com/ZIB-IOL/AbsSmoothFW.jl/actions/workflows/CI.yml/badge.svg)](https://github.com/ZIB-IOL/AbsSmoothFW.jl/actions/workflows/CI.yml)
[![DOI](https://zenodo.org/badge/793075266.svg)](https://zenodo.org/doi/10.5281/zenodo.11198550)

This package is a toolbox for the Abs-Smooth Frank-Wolfe algorithm.
This package is a toolbox for a non-smooth version of the Frank-Wolfe algorithm.

## Overview
## Overview
Abs-Smooth Frank-Wolfe algorithms are designed to solve optimization problems of the form $\min_{x\in C} f(x)$, for a convex compact set $C$ and an [abs-smooth](https://optimization-online.org/wp-content/uploads/2012/09/3597.pdf) function $f$.

Abs-Smooth Frank-Wolfe algorithms are designed to solve optimization problems of the form $\min_{x\in C} f(x)$, for a convex compact set $C$ and an [abs-smooth](https://optimization-online.org/wp-content/uploads/2012/09/3597.pdf) function $f$.
We solve such problems by using [ADOLC.jl](https://github.com/TimSiebert1/ADOLC.jl/tree/master) as the AD toolbox and [FrankWolfe.jl](https://github.com/ZIB-IOL/FrankWolfe.jl) for the conditional gradient methods.
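For intuition, an abs-smooth function is one built from smooth elementals together with the absolute value; this also covers `max` and `min`, since $\max(a,b) = \tfrac{1}{2}(a + b + |a - b|)$. A purely illustrative sketch (not part of the package API):

```julia
# Illustrative only: both functions below are abs-smooth, because they combine
# smooth pieces with abs(), and max(a, b) == (a + b + abs(a - b)) / 2.
f1(x) = abs(x[1] - 1) + x[2]^2
f2(x) = max(x[1]^4 + x[2]^2, (2 - x[1])^2 + (2 - x[2])^2)
```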


## Installation
@@ -22,38 +23,32 @@ Pkg.add("AbsSmoothFrankWolfe")
or the main branch:

```julia
Pkg.add(url="https://github.com/ZIB-IOL/AbsSmoothFrankWolfe.jl")
Pkg.add(url="https://github.com/ZIB-IOL/AbsSmoothFrankWolfe.jl", rev="main")
```

## Getting started

Let's say we want to minimize the [LASSO](https://www.jstor.org/stable/2346178?seq=1) problem: $\frac{1}{2}\|Ax - y\|_2^2 + \rho \|x\|_1$, subject to simple box constraints.
This is what the code looks like:
Let us consider minimizing the abs-smooth function $\max(x_1^4 + x_2^2,\; (2-x_1)^2 + (2-x_2)^2,\; 2e^{x_2 - x_1})$ subject to the simple box constraints $-5 \leq x_i \leq 5$. Here is what the code looks like:

```julia
julia> using AbsSmoothFrankWolfe,FrankWolfe,LinearAlgebra,JuMP,HiGHS
julia> using AbsSmoothFrankWolfe

julia> import MathOptInterface

julia> const MOI = MathOptInterface
julia> using FrankWolfe

julia> n = 5 # choose length(x)
julia> using LinearAlgebra

julia> p = 3 # choose length(y)
julia> using JuMP

julia> rho = 0.5
julia> using HiGHS

julia> A = rand(p,n) # randomly choose matrix A

julia> y = rand(p) # randomly choose y
julia> import MathOptInterface

#define the LASSO function
julia> function f(x)

return 0.5*(norm(A*x - y))^2 + rho*norm(x)
julia> const MOI = MathOptInterface

julia> function f(x)
return max(x[1]^4+x[2]^2, (2-x[1])^2+(2-x[2])^2, 2*exp(x[2]-x[1]))
end

# evaluation point x_base
julia> x_base = ones(n)*1.0

@@ -115,7 +110,17 @@ julia> x, v, primal, dual_gap, traj_data = AbsSmoothFrankWolfe.as_frank_wolfe(
verbose=true
)

```

Beyond those presented in the documentation, more test problems can be found in the `examples` folder.
Vanilla Abs-Smooth Frank-Wolfe Algorithm.
MEMORY_MODE: FrankWolfe.InplaceEmphasis() STEPSIZE: FixedStep EPSILON: 1.0e-7 MAXITERATION: 1.0e7 TYPE: Float64
MOMENTUM: nothing GRADIENTTYPE: Vector{Float64}
LMO: AbsSmoothLMO
[ Info: In memory_mode memory iterates are written back into x0!

-------------------------------------------------------------------------------------------------
Type Iteration Primal ||delta x|| Dual gap Time It/sec
-------------------------------------------------------------------------------------------------
I 1 8.500000e+01 2.376593e+00 1.281206e+02 0.000000e+00 Inf
Last 7 2.000080e+00 2.000000e-05 3.600149e-04 2.885519e+00 2.425907e+00
-------------------------------------------------------------------------------------------------
x_final = [1.00002, 1.0]
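As a quick sanity check on this output: at the reported `x_final = [1.00002, 1.0]` the first term of the max is $1.00002^4 + 1.0^2 \approx 2.00008$, which matches the primal value in the `Last` row, and at $(1, 1)$ all three terms of the max are equal to $2$.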

1 change: 1 addition & 0 deletions docs/make.jl
@@ -27,6 +27,7 @@ makedocs(;
format = Documenter.HTML(),
pages = [
"Home" => "index.md",
"Examples" => "examples.md",
"References" => "references.md"
],
)
94 changes: 94 additions & 0 deletions docs/src/examples.md
@@ -0,0 +1,94 @@
# LASSO problem

Let's say we want to minimize the [LASSO](https://www.jstor.org/stable/2346178?seq=1) problem: $\frac{1}{2}\|Ax - y\|_2^2 + \rho \|x\|_1$, subject to simple box constraints.
This is what the code looks like:

```julia
julia> using AbsSmoothFrankWolfe, FrankWolfe, LinearAlgebra, JuMP, HiGHS

julia> import MathOptInterface

julia> const MOI = MathOptInterface

julia> n = 5 # choose length(x)

julia> p = 3 # choose length(y)

julia> rho = 0.5

julia> A = rand(p,n) # randomly choose matrix A

julia> y = rand(p) # randomly choose y

# define the LASSO function (1-norm to match the objective above)
julia> function f(x)
           return 0.5*norm(A*x - y)^2 + rho*norm(x, 1)
       end

# evaluation point x_base
julia> x_base = ones(n)*1.0

# box constraints
julia> lb_x = [-5 for _ in x_base]

julia> ub_x = [5 for _ in x_base]

# call the abs-linear form of f
julia> abs_normal_form = AbsSmoothFrankWolfe.abs_linear(x_base,f)

# gradient formula in terms of abs-linearization
julia> alf_a = abs_normal_form.Y

julia> alf_b = abs_normal_form.J

julia> z = abs_normal_form.z

julia> s = abs_normal_form.num_switches

julia> sigma_z = AbsSmoothFrankWolfe.signature_vec(s,z)

julia> function grad!(storage, x)
           c = vcat(alf_a', alf_b' .* sigma_z)
           @. storage = c
       end

# define the model using JuMP with HiGHS as inner solver
julia> o = Model(HiGHS.Optimizer)

julia> MOI.set(o, MOI.Silent(), true)

julia> @variable(o, lb_x[i] <= x[i=1:n] <= ub_x[i])

# initialise dual gap
julia> dualgap_asfw = Inf

# abs-smooth lmo
julia> lmo_as = AbsSmoothFrankWolfe.AbsSmoothLMO(o, x_base, f, n, s, lb_x, ub_x, dualgap_asfw)

# define termination criteria using Frank-Wolfe 'callback' function
julia> function make_termination_callback(state)
           return function callback(state, args...)
               return state.lmo.dualgap_asfw[1] > 1e-2
           end
       end

julia> callback = make_termination_callback(FrankWolfe.CallbackState)

# call abs-smooth-frank-wolfe
julia> x, v, primal, dual_gap, traj_data = AbsSmoothFrankWolfe.as_frank_wolfe(
f,
grad!,
lmo_as,
x_base;
gradient = ones(n+s),
line_search = FrankWolfe.FixedStep(1.0),
callback=callback,
verbose=true
)

```

Beyond those presented in the documentation, more test problems can be found in the `examples` folder.

53 changes: 29 additions & 24 deletions docs/src/index.md
@@ -7,11 +7,12 @@ EditURL = "https://github.com/ZIB-IOL/AbsSmoothFrankWolfe.jl/tree/main/README.md"
[![CI](https://github.com/ZIB-IOL/AbsSmoothFW.jl/actions/workflows/CI.yml/badge.svg)](https://github.com/ZIB-IOL/AbsSmoothFW.jl/actions/workflows/CI.yml)
[![DOI](https://zenodo.org/badge/793075266.svg)](https://zenodo.org/doi/10.5281/zenodo.11198550)

This package is a toolbox for the Abs-Smooth Frank-Wolfe algorithm.
This package is a toolbox for a non-smooth version of the Frank-Wolfe algorithm.

## Overview
## Overview
Abs-Smooth Frank-Wolfe algorithms are designed to solve optimization problems of the form $\min_{x\in C} f(x)$, for a convex compact set $C$ and an [abs-smooth](https://optimization-online.org/wp-content/uploads/2012/09/3597.pdf) function $f$.

Abs-Smooth Frank-Wolfe algorithms are designed to solve optimization problems of the form $\min_{x\in C} f(x)$, for a convex compact set $C$ and an [abs-smooth](https://optimization-online.org/wp-content/uploads/2012/09/3597.pdf) function $f$.
We solve such problems by using [ADOLC.jl](https://github.com/TimSiebert1/ADOLC.jl/tree/master) as the AD toolbox and [FrankWolfe.jl](https://github.com/ZIB-IOL/FrankWolfe.jl) for the conditional gradient methods.


## Installation
@@ -26,38 +27,32 @@ Pkg.add("AbsSmoothFrankWolfe")
or the main branch:

```julia
Pkg.add(url="https://github.com/ZIB-IOL/AbsSmoothFrankWolfe.jl")
Pkg.add(url="https://github.com/ZIB-IOL/AbsSmoothFrankWolfe.jl", rev="main")
```

## Getting started

Let's say we want to minimize the [LASSO](https://www.jstor.org/stable/2346178?seq=1) problem: $\frac{1}{2}\|Ax - y\|_2^2 + \rho \|x\|_1$, subject to simple box constraints.
This is what the code looks like:
Let us consider minimizing the abs-smooth function $\max(x_1^4 + x_2^2,\; (2-x_1)^2 + (2-x_2)^2,\; 2e^{x_2 - x_1})$ subject to the simple box constraints $-5 \leq x_i \leq 5$. Here is what the code looks like:

```julia
julia> using AbsSmoothFrankWolfe,FrankWolfe,LinearAlgebra,JuMP,HiGHS
julia> using AbsSmoothFrankWolfe

julia> import MathOptInterface

julia> const MOI = MathOptInterface
julia> using FrankWolfe

julia> n = 5 # choose length(x)
julia> using LinearAlgebra

julia> p = 3 # choose length(y)
julia> using JuMP

julia> rho = 0.5
julia> using HiGHS

julia> A = rand(p,n) # randomly choose matrix A

julia> y = rand(p) # randomly choose y
julia> import MathOptInterface

#define the LASSO function
julia> function f(x)

return 0.5*(norm(A*x - y))^2 + rho*norm(x)
julia> const MOI = MathOptInterface

julia> function f(x)
return max(x[1]^4+x[2]^2, (2-x[1])^2+(2-x[2])^2, 2*exp(x[2]-x[1]))
end

# evaluation point x_base
julia> x_base = ones(n)*1.0

@@ -119,7 +114,17 @@ julia> x, v, primal, dual_gap, traj_data = AbsSmoothFrankWolfe.as_frank_wolfe(
verbose=true
)

```

Beyond those presented in the documentation, more test problems can be found in the `examples` folder.
Vanilla Abs-Smooth Frank-Wolfe Algorithm.
MEMORY_MODE: FrankWolfe.InplaceEmphasis() STEPSIZE: FixedStep EPSILON: 1.0e-7 MAXITERATION: 1.0e7 TYPE: Float64
MOMENTUM: nothing GRADIENTTYPE: Vector{Float64}
LMO: AbsSmoothLMO
[ Info: In memory_mode memory iterates are written back into x0!

-------------------------------------------------------------------------------------------------
Type Iteration Primal ||delta x|| Dual gap Time It/sec
-------------------------------------------------------------------------------------------------
I 1 8.500000e+01 2.376593e+00 1.281206e+02 0.000000e+00 Inf
Last 7 2.000080e+00 2.000000e-05 3.600149e-04 2.885519e+00 2.425907e+00
-------------------------------------------------------------------------------------------------
x_final = [1.00002, 1.0]

2 changes: 1 addition & 1 deletion examples/small_example.jl
@@ -75,5 +75,5 @@ x, v, primal, dual_gap, traj_data = as_frank_wolfe(
max_iteration=1e7
)

@show x_base
println("x_final = ", x_base)

46 changes: 44 additions & 2 deletions src/as_frank_wolfe.jl
@@ -18,7 +18,7 @@ function as_frank_wolfe(
momentum=nothing,
epsilon=1.0e-7,
max_iteration=1.0e7,
print_iter=1.0,
print_iter=1000,
trajectory=false,
verbose=false,
memory_mode::FrankWolfe.MemoryEmphasis=FrankWolfe.InplaceEmphasis(),
@@ -193,7 +193,49 @@ function as_frank_wolfe(

x = FrankWolfe.muladd_memory_mode(memory_mode, x, gamma, d)

end
end

# recompute everything once for final verification / do not record to trajectory though for now!
# this is important as some variants do not recompute f(x) and the dual_gap regularly but only when reporting
# hence the final computation.

step_type = FrankWolfe.ST_LAST
grad!(gradient, x)
v = v
primal = f(x)
dual_gap = lmo.dualgap_asfw[1]
tot_time = (time_ns() - time_start) / 1.0e9
gamma = FrankWolfe.perform_line_search(
line_search,
t,
f,
grad!,
gradient,
x,
d,
1.0,
linesearch_workspace,
memory_mode,
)
if callback !== nothing
state = FrankWolfe.CallbackState(
t,
primal,
primal - dual_gap,
dual_gap,
tot_time,
x,
v,
d,
gamma,
f,
grad!,
lmo,
gradient,
step_type,
)
callback(state)
end

return x, v, primal, dual_gap, traj_data, x0
end
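With this addition, `as_frank_wolfe` also emits one final `FrankWolfe.CallbackState` (with `step_type = FrankWolfe.ST_LAST`) after the main loop, so a user callback can react to the recomputed final values. Below is a minimal sketch of such a callback; the field names (`step_type`, `primal`, `dual_gap`) are assumed from the positional constructor call above, and the `dualgap_asfw` termination test is taken from the README example:

```julia
using FrankWolfe

# Sketch only: keep iterating while the abs-smooth dual gap is above `tol`,
# and print a summary when the final state (step_type == FrankWolfe.ST_LAST)
# is delivered after the loop.
function make_reporting_callback(tol)
    return function callback(state, args...)
        if state.step_type == FrankWolfe.ST_LAST
            println("final primal = ", state.primal, ", dual gap = ", state.dual_gap)
        end
        return state.lmo.dualgap_asfw[1] > tol
    end
end

# Usage, mirroring the README example:
# callback = make_reporting_callback(1e-2)
# x, v, primal, dual_gap, traj_data = AbsSmoothFrankWolfe.as_frank_wolfe(
#     f, grad!, lmo_as, x_base; gradient=ones(n+s),
#     line_search=FrankWolfe.FixedStep(1.0), callback=callback, verbose=true)
```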
