Best practice for DiffOpt.jl implementation with Flux (logsumexp) #228

Open
JinraeKim opened this issue Jul 18, 2022 · 20 comments

@JinraeKim
Contributor

JinraeKim commented Jul 18, 2022

Hi, developers! Thanks for this promising and potentially useful package.

I'm studying differentiable convex optimisation and trying to apply it to the PLSE, a neural network that I proposed.
I have been using cvxpylayers, but the Python tooling is too slow for my use case, so I'm wondering whether I can implement this with DiffOpt.jl.

Background

I have a neural network (called PLSE) f(x, u; \theta) with two inputs, x (condition) and u (decision), and the network parameter \theta. f(x, \cdot; \theta) is guaranteed to be convex in u, and the corresponding convex optimisation problem is an exponential cone program (the original form is log-sum-exp). This is implemented in ParametrisedConvexApproximators.jl.
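
For concreteness, the log-sum-exp form (as read off the forward pass shown further below, with T a temperature parameter and a_i, b_i the affine-map coefficients produced by an auxiliary network) is

f(x, u; \theta) = T * log( sum_{i=1}^{i_max} exp( (a_i(x; \theta)^T u + b_i(x; \theta)) / T ) ),

which is convex in u for any fixed x and \theta.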

What I'm trying to do

It is pretty simple: I want the derivative du*/d\theta, where u*(x; \theta) is the optimal decision that minimises f(x, \cdot; \theta), possibly over a prescribed set (the decision space), and \theta is the network parameter.
You can find this idea implemented with cvxpylayers here.
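
In symbols, the solution map and the quantity I need are

u*(x; \theta) = argmin_{u in U} f(x, u; \theta),    d u*(x; \theta) / d\theta,

where U denotes the prescribed decision space.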

Issues with DiffOpt.jl

A caveat before addressing this: I'm not familiar with this package yet, so please let me know if there are any workarounds that I missed.
What I tried is to follow the Custom ReLU example. For this, I need to define the objective function.
An example code would be

using ParametrisedConvexApproximators
using JuMP
import DiffOpt
import SCS
import ChainRulesCore
import Flux


function main()
    model = Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer))
    n, m = 3, 2
    i_max = 20
    T = 1e-0
    h_array = [64]
    act = Flux.relu
    plse = PLSE(n, m, i_max, T, h_array, act)
    x = rand(n)
    @show plse(x, rand(m))
    @variable(model, u[1:m])
    # @objective(model, Min, plse(x, u)[1])
    # optimize!(model)
    # return value.(u)
end

Note that the output of plse is a 1-element vector.

The following is how plse(x, u) is computed (it can be found here).

function (nn::PLSE)(x::AbstractArray, u::AbstractArray)
    @unpack T = nn
    is_vector = length(size(x)) == 1
    @assert is_vector == (length(size(u)) == 1)
    x = is_vector ? reshape(x, :, 1) : x
    u = is_vector ? reshape(u, :, 1) : u
    @assert size(x)[2] == size(u)[2]
    tmp = affine_map(nn, x, u)
    _res = T * Flux.logsumexp((1/T)*tmp, dims=1)
    res = is_vector ? reshape(_res, 1) : _res
    return res
end

Inside Flux.logsumexp, I encountered this error:

1|julia> Flux.logsumexp((1/T)*tmp, dims=1)
ERROR: MethodError: no method matching isless(::AffExpr, ::AffExpr)
Closest candidates are:
  isless(::Any, ::Missing) at /Applications/Julia-1.7.app/Contents/Resources/julia/share/julia/base/missing.jl:88
  isless(::Missing, ::Any) at /Applications/Julia-1.7.app/Contents/Resources/julia/share/julia/base/missing.jl:87
Stacktrace:
  [1] max(x::AffExpr, y::AffExpr)
    @ Base ./operators.jl:492
  [2] mapreduce_impl(f::typeof(identity), op::typeof(max), A::Matrix{AffExpr}, first::Int64, last::Int64)
    @ Base ./reduce.jl:635
  [3] _mapreducedim!(f::typeof(identity), op::typeof(max), R::Matrix{AffExpr}, A::Matrix{AffExpr})
    @ Base ./reducedim.jl:260
  [4] mapreducedim!
    @ ./reducedim.jl:289 [inlined]
  [5] _mapreduce_dim
    @ ./reducedim.jl:336 [inlined]
  [6] #mapreduce#731
    @ ./reducedim.jl:322 [inlined]
  [7] #_maximum#769
    @ ./reducedim.jl:916 [inlined]
  [8] _maximum
    @ ./reducedim.jl:916 [inlined]
  [9] #_maximum#768
    @ ./reducedim.jl:915 [inlined]
 [10] _maximum
    @ ./reducedim.jl:915 [inlined]
 [11] #maximum#746
    @ ./reducedim.jl:889 [inlined]
 [12] logsumexp(x::Matrix{AffExpr}; dims::Int64)
    @ NNlib ~/.julia/packages/NNlib/tvMmZ/src/softmax.jl:142
 [13] top-level scope
    @ none:1
 [14] eval
    @ ./boot.jl:373 [inlined]
 [15] eval_code(frame::JuliaInterpreter.Frame, expr::Expr)
    @ JuliaInterpreter ~/.julia/packages/JuliaInterpreter/4B89D/src/utils.jl:649
 [16] eval_code(frame::JuliaInterpreter.Frame, command::String)
    @ JuliaInterpreter ~/.julia/packages/JuliaInterpreter/4B89D/src/utils.jl:627
 [17] _eval_code(frame::JuliaInterpreter.Frame, code::String)
    @ Debugger ~/.julia/packages/Debugger/I4w2y/src/repl.jl:211
 [18] (::Debugger.var"#27#29"{Debugger.DebuggerState})(s::REPL.LineEdit.MIState, buf::IOBuffer, ok::Bool)
    @ Debugger ~/.julia/packages/Debugger/I4w2y/src/repl.jl:194
 [19] #invokelatest#2
    @ ./essentials.jl:716 [inlined]
 [20] invokelatest
    @ ./essentials.jl:714 [inlined]
 [21] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit /Applications/Julia-1.7.app/Contents/Resources/julia/share/julia/stdlib/v1.7/REPL/src/LineEdit.jl:2493
 [22] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface)
    @ REPL.LineEdit /Applications/Julia-1.7.app/Contents/Resources/julia/share/julia/stdlib/v1.7/REPL/src/LineEdit.jl:2487
 [23] RunDebugger(frame::JuliaInterpreter.Frame, repl::Nothing, terminal::Nothing; initial_continue::Bool)
    @ Debugger ~/.julia/packages/Debugger/I4w2y/src/repl.jl:167
 [24] macro expansion
    @ ~/.julia/packages/Debugger/I4w2y/src/Debugger.jl:137 [inlined]
 [25] main()
    @ Main ~/.julia/dev/ParametrisedConvexApproximators/test/tmp.jl:20
 [26] top-level scope
    @ REPL[2]:1
 [27] top-level scope
    @ ~/.julia/packages/CUDA/sCev8/src/initialization.jl:52

1|julia> maximum(tmp; dims=1)
ERROR: MethodError: no method matching isless(::AffExpr, ::AffExpr)
Closest candidates are:
  isless(::Any, ::Missing) at /Applications/Julia-1.7.app/Contents/Resources/julia/share/julia/base/missing.jl:88
  isless(::Missing, ::Any) at /Applications/Julia-1.7.app/Contents/Resources/julia/share/julia/base/missing.jl:87
Stacktrace:
  [1] max(x::AffExpr, y::AffExpr)
    @ Base ./operators.jl:492
  [2] mapreduce_impl(f::typeof(identity), op::typeof(max), A::Matrix{AffExpr}, first::Int64, last::Int64)
    @ Base ./reduce.jl:635
  [3] _mapreducedim!(f::typeof(identity), op::typeof(max), R::Matrix{AffExpr}, A::Matrix{AffExpr})
    @ Base ./reducedim.jl:260
  [4] mapreducedim!
    @ ./reducedim.jl:289 [inlined]
  [5] _mapreduce_dim
    @ ./reducedim.jl:336 [inlined]
  [6] #mapreduce#731
    @ ./reducedim.jl:322 [inlined]
  [7] #_maximum#769
    @ ./reducedim.jl:916 [inlined]
  [8] _maximum
    @ ./reducedim.jl:916 [inlined]
  [9] #_maximum#768
    @ ./reducedim.jl:915 [inlined]
 [10] _maximum
    @ ./reducedim.jl:915 [inlined]
 [11] #maximum#746
    @ ./reducedim.jl:889 [inlined]
 [12] top-level scope
    @ none:1
 [13] eval
    @ ./boot.jl:373 [inlined]
 [14] eval_code(frame::JuliaInterpreter.Frame, expr::Expr)
    @ JuliaInterpreter ~/.julia/packages/JuliaInterpreter/4B89D/src/utils.jl:649
 [15] eval_code(frame::JuliaInterpreter.Frame, command::String)
    @ JuliaInterpreter ~/.julia/packages/JuliaInterpreter/4B89D/src/utils.jl:627
 [16] _eval_code(frame::JuliaInterpreter.Frame, code::String)
    @ Debugger ~/.julia/packages/Debugger/I4w2y/src/repl.jl:211
 [17] (::Debugger.var"#27#29"{Debugger.DebuggerState})(s::REPL.LineEdit.MIState, buf::IOBuffer, ok::Bool)
    @ Debugger ~/.julia/packages/Debugger/I4w2y/src/repl.jl:194
 [18] #invokelatest#2
    @ ./essentials.jl:716 [inlined]
 [19] invokelatest
    @ ./essentials.jl:714 [inlined]
 [20] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit /Applications/Julia-1.7.app/Contents/Resources/julia/share/julia/stdlib/v1.7/REPL/src/LineEdit.jl:2493
 [21] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface)
    @ REPL.LineEdit /Applications/Julia-1.7.app/Contents/Resources/julia/share/julia/stdlib/v1.7/REPL/src/LineEdit.jl:2487
 [22] RunDebugger(frame::JuliaInterpreter.Frame, repl::Nothing, terminal::Nothing; initial_continue::Bool)
    @ Debugger ~/.julia/packages/Debugger/I4w2y/src/repl.jl:167
 [23] macro expansion
    @ ~/.julia/packages/Debugger/I4w2y/src/Debugger.jl:137 [inlined]
 [24] main()
    @ Main ~/.julia/dev/ParametrisedConvexApproximators/test/tmp.jl:20
 [25] top-level scope
    @ REPL[2]:1
 [26] top-level scope
    @ ~/.julia/packages/CUDA/sCev8/src/initialization.jl:52

This may be due to my lack of background on how to use JuMP and DiffOpt.
How can I realise my idea with DiffOpt.jl?

@odow
Member

odow commented Jul 18, 2022

What is tmp? You can't call logsumexp outside JuMP macros: https://discourse.julialang.org/t/how-to-implment-logsumexp-function-in-jump/84376/2

@JinraeKim
Contributor Author

tmp is a reshaped output of an auxiliary neural network (e.g., a feedforward NN),
obtained from the helper function affine_map (you can find it here).

function affine_map(nn::ParametrisedConvexApproximator, x::AbstractArray, u::AbstractArray)
    @unpack NN, i_max, m = nn
    # @unpack NN1, NN2, i_max, m = nn
    d = size(x)[2]
    X = reshape(NN(x), i_max, m+1, d)
    tmp = hcat([(X[:, 1:end-1, i]*u[:, i] .+ X[:, end:end, i]) for i in 1:d]...)
    return tmp
end

@JinraeKim
Contributor Author

JinraeKim commented Jul 18, 2022

According to this answer and #50, it seems this is not solvable for now.

@JinraeKim changed the title from "Best practice for DiffOpt.jl implementation with Flux" to "Best practice for DiffOpt.jl implementation with Flux (logsumexp)" on Jul 18, 2022
@matbesancon
Collaborator

You can reformulate a logsumexp as an exponential cone program:
https://docs.mosek.com/modeling-cookbook/expo.html#log-sum-exp
which will work with DiffOpt. Would that help?
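
For reference, the reformulation from the Mosek link above, restated: t >= log(sum_i exp(z_i)) holds if and only if there exist u_i with sum_i u_i <= 1 and exp(z_i - t) <= u_i for all i, so minimising t subject to those exponential-cone constraints recovers t = log(sum_i exp(z_i)).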

@matbesancon
Collaborator

Ah sorry, I hadn't seen the second part. #50 should not be a blocker; I'm not sure how well it will work, but there are no technical limitations anymore.

@JinraeKim
Contributor Author

Ah sorry, I hadn't seen the second part. #50 should not be a blocker; I'm not sure how well it will work, but there are no technical limitations anymore.

@matbesancon
Hi, do you mean that DiffOpt.jl supports the exponential cone as well?

@matbesancon
Collaborator

I mean that at least you can try them, I think.

@odow
Member

odow commented Jul 18, 2022

@JinraeKim
Contributor Author

JinraeKim commented Jul 18, 2022

@odow
I just realised from this example that all I need is to build a logsumexp layer (the other parts can be constructed outside the scope of DiffOpt).

I also checked that I can construct the forward logsumexp function as an exponential cone program:

using JuMP
import DiffOpt
import SCS
import ChainRulesCore


function main()
    N, d = 5, 10
    y = rand(N, d)
    # x_star = matrix_relu(y)
    # x_star = @run matrix_relu(y)
    x_star = logsumexp(y)
end


function matrix_relu(
    y::Matrix;
    model = Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer))
)
    layer_size, batch_size = size(y)
    empty!(model)
    set_silent(model)
    @variable(model, x[1:layer_size, 1:batch_size] >= 0)
    @objective(model, Min, x[:]'x[:] -2y[:]'x[:])
    @bp
    optimize!(model)
    return value.(x)
end


function logsumexp(
        y::Matrix;
        model = Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer)),
    )
    N, d = size(y)
    x_star = zeros(N, d)
    for j in 1:d
        empty!(model)
        set_silent(model)
        @variable(model, x[1:N])
        @variable(model, u[1:N])
        @variable(model, t)
        @constraint(model, sum(u) <= 1)
        @constraint(model, [i=1:N], [u[i], 1, y[i, j]*x[i] - t] in MOI.ExponentialCone())    
        @objective(model, Min, t)
        optimize!(model)
        x_star[:, j] = value.(x)
    end
    x_star
end


function ChainRulesCore.rrule(::typeof(logsumexp), y::Matrix{T}) where T
    model = Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer))
    pv = logsumexp(y, model=model)
    function pullback_logsumexp(dl_dx)
        x = model[:x]
        dl_dy = zeros(T, size(dl_dx))
        dl_dq = zeros(T, size(dl_dx))
        MOI.set.(model, DiffOpt.ReverseVariablePrimal(), x[:], dl_dx[:])
        DiffOpt.reverse_differentiate!(model)
        error("todo")
    end
    return pv, pullback_logsumexp
end

(Note that I'm still working on completing the ChainRulesCore.rrule.)

But I don't actually see how to complete this.

I'll take a deeper look at it later.

EDIT: I tried, but I ran into an error when running the following code.

  • code
using JuMP
import DiffOpt
import SCS
import ChainRulesCore


function main()
    N, d = 5, 10
    y = rand(N, d)
    # x_star = matrix_relu(y)
    # x_star = @run matrix_relu(y)
    x_star = logsumexp(y)
end


function matrix_relu(
    y::Matrix;
    model = Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer))
)
    layer_size, batch_size = size(y)
    empty!(model)
    set_silent(model)
    @variable(model, x[1:layer_size, 1:batch_size] >= 0)
    @objective(model, Min, x[:]'x[:] -2y[:]'x[:])
    @bp
    optimize!(model)
    return value.(x)
end


function logsumexp(
        y::Matrix;
        models = repeat([Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer))], size(y)[2]),
    )
    N, d = size(y)
    x_star = zeros(N, d)
    for (j, model) in enumerate(models)
        empty!(model)
        set_silent(model)
        @variable(model, x[1:N])
        @variable(model, u[1:N])
        @variable(model, t)
        @constraint(model, sum(u) <= 1)
        @constraint(model, [i=1:N], [u[i], 1, y[i, j]*x[i] - t] in MOI.ExponentialCone())    
        @objective(model, Min, t)
        optimize!(model)
        x_star[:, j] = value.(x)
    end
    x_star
end


function ChainRulesCore.rrule(::typeof(logsumexp), y::Matrix{T}) where T
    models = repeat([Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer))], size(y)[2])
    pv = logsumexp(y, models=models)
    function pullback_logsumexp(dl_dx)
        for (j, model) in enumerate(models)
            x = model[:x]
            MOI.set.(model, DiffOpt.ReverseVariablePrimal(), x[:], dl_dx[:, j])
            DiffOpt.reverse_differentiate!(model)
        end
        error("todo")
    end
    return pv, pullback_logsumexp
end
  • error
julia> DiffOpt.reverse_differentiate!(model)
ERROR: Trying to compute the reverse differentiation on a model with termination status DUAL_INFEASIBLE
Stacktrace:
 [1] error(s::String)
   @ Base ./error.jl:33
 [2] reverse_differentiate!(model::DiffOpt.Optimizer{MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.CachingOptimizer{SCS.Optimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}})
   @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/moi_wrapper.jl:361
 [3] reverse_differentiate!
   @ ~/.julia/packages/DiffOpt/LLsVt/src/jump_moi_overloads.jl:276 [inlined]
 [4] reverse_differentiate!(model::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{DiffOpt.Optimizer{MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.CachingOptimizer{SCS.Optimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
   @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/jump_moi_overloads.jl:272
 [5] reverse_differentiate!(model::Model)
   @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/jump_moi_overloads.jl:268
 [6] top-level scope
   @ REPL[68]:1

@odow
Member

odow commented Jul 18, 2022

Oops. I always get this wrong. The Mosek and MOI conventions for the exponential cone are flipped. Try:

[y[i, j]*x[i] - t, 1, u[i]] in MOI.ExponentialCone()
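
For reference, MOI defines ExponentialCone() as the set {(a, b, c) : b * exp(a / b) <= c, b > 0}, so exp(z - t) <= u is written as [z - t, 1, u] in MOI.ExponentialCone(). A minimal sanity check of that ordering, as a sketch (assuming only JuMP and SCS, with plain data z in place of the y*x term):

using JuMP
import SCS

z = [0.3, -1.2, 0.8]
model = Model(SCS.Optimizer)
set_silent(model)
N = length(z)
@variable(model, u[1:N])
@variable(model, t)
@constraint(model, sum(u) <= 1)
# [z[i] - t, 1, u[i]] in the exponential cone enforces exp(z[i] - t) <= u[i]
@constraint(model, [i = 1:N], [z[i] - t, 1, u[i]] in MOI.ExponentialCone())
@objective(model, Min, t)
optimize!(model)
value(t)  # should be ≈ log(sum(exp, z)) up to solver tolerance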

@JinraeKim
Contributor Author

JinraeKim commented Jul 19, 2022

@odow
Thanks for your help, but it still does not work.

using JuMP
import DiffOpt
import SCS
import ChainRulesCore


function main()
    N, d = 5, 10
    y = rand(N, d)
    # x_star = matrix_relu(y)
    # x_star = @run matrix_relu(y)
    models = repeat([Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer))], size(y)[2])
    x_star = logsumexp(y; models=models)
    # test
    j = 1
    model = models[j]
    x = model[:x]
    dl_dx = rand(size(x)...)
    MOI.set.(model, DiffOpt.ReverseVariablePrimal(), x[:], dl_dx[:, j])
    DiffOpt.reverse_differentiate!(model)
end


# function matrix_relu(
#     y::Matrix;
#     model = Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer))
# )
#     layer_size, batch_size = size(y)
#     empty!(model)
#     set_silent(model)
#     @variable(model, x[1:layer_size, 1:batch_size] >= 0)
#     @objective(model, Min, x[:]'x[:] -2y[:]'x[:])
#     @bp
#     optimize!(model)
#     return value.(x)
# end


function logsumexp(
        y::Matrix;
        models = repeat([Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer))], size(y)[2]),
    )
    N, d = size(y)
    x_star = zeros(N, d)
    for (j, model) in enumerate(models)
        empty!(model)
        set_silent(model)
        @variable(model, x[1:N])
        @variable(model, u[1:N])
        @variable(model, t)
        @constraint(model, sum(u) <= 1)
        @constraint(model, [i=1:N], [y[i, j]*x[i] - t, 1, u[i]] in MOI.ExponentialCone())    
        @objective(model, Min, t)
        optimize!(model)
        x_star[:, j] = value.(x)
    end
    x_star
end


function ChainRulesCore.rrule(::typeof(logsumexp), y::Matrix{T}) where T
    models = repeat([Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer))], size(y)[2])
    pv = logsumexp(y, models=models)
    function pullback_logsumexp(dl_dx)
        for (j, model) in enumerate(models)
            x = model[:x]
            MOI.set.(model, DiffOpt.ReverseVariablePrimal(), x[:], dl_dx[:, j])
            DiffOpt.reverse_differentiate!(model)
        end
        error("todo")
    end
    return pv, pullback_logsumexp
end
julia> main()
ERROR: Trying to compute the reverse differentiation on a model with termination status DUAL_INFEASIBLE
Stacktrace:
 [1] error(s::String)
   @ Base ./error.jl:33
 [2] reverse_differentiate!(model::DiffOpt.Optimizer{MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.CachingOptimizer{SCS.Optimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}})
   @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/moi_wrapper.jl:361
 [3] reverse_differentiate!
   @ ~/.julia/packages/DiffOpt/LLsVt/src/jump_moi_overloads.jl:276 [inlined]
 [4] reverse_differentiate!(model::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{DiffOpt.Optimizer{MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.CachingOptimizer{SCS.Optimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
   @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/jump_moi_overloads.jl:272
 [5] reverse_differentiate!(model::Model)
   @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/jump_moi_overloads.jl:268
 [6] main()
   @ Main ~/.julia/dev/tmp.jl:20
 [7] top-level scope
   @ REPL[2]:1

@odow
Member

odow commented Jul 19, 2022

It's saying the problem is unbounded. Are you sure you don't mean

        @variable(model, u[1:N])
        @variable(model, t)
        @constraint(model, sum(u) <= 1)
        @constraint(model, [i=1:N], [u[i], 1, y[i, j] - t] in MOI.ExponentialCone())   

What does y * x represent? Are you missing other constraints on x?

@JinraeKim
Contributor Author

JinraeKim commented Jul 19, 2022

It's saying the problem is unbounded. Are you sure you don't mean

        @variable(model, u[1:N])
        @variable(model, t)
        @constraint(model, sum(u) <= 1)
        @constraint(model, [i=1:N], [u[i], 1, y[i, j] - t] in MOI.ExponentialCone())   

What does y * x represent? Are you missing other constraints on x?

Yeah, that makes sense. I imposed a box constraint:

function logsumexp(
        y::Matrix;
        models = repeat([Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer))], size(y)[2]),
        x_min = -10.0,
        x_max = 10.0,
    )
    N, d = size(y)
    x_star = zeros(N, d)
    for (j, model) in enumerate(models)
        empty!(model)
        set_silent(model)
        @variable(model, x[1:N])
        @variable(model, u[1:N])
        @variable(model, t)
        @constraint(model, sum(u) <= 1.0)
        @constraint(model, [i=1:N], [y[i, j]*x[i] - t, 1.0, u[i]] in MOI.ExponentialCone())    
        @constraint(model, x_min .<= x .<= x_max) 
        @objective(model, Min, t)
        optimize!(model)
        x_star[:, j] = value.(x)
    end
    x_star
end

and it still gives an error:

julia> main()
ERROR: MethodError: no method matching MathOptInterface.ExponentialCone(::Int64)
Closest candidates are:
  MathOptInterface.ExponentialCone() at ~/.julia/packages/MathOptInterface/AiEiQ/src/sets.jl:400
Stacktrace:
  [1] set_with_dimension(#unused#::Type{MathOptInterface.ExponentialCone}, dim::Int64)
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/AiEiQ/src/Utilities/matrix_of_constraints.jl:561
  [2] set_from_constants(#unused#::Vector{Float64}, #unused#::Type{MathOptInterface.ExponentialCone}, rows::UnitRange{Int64})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/AiEiQ/src/Utilities/matrix_of_constraints.jl:598
  [3] get(model::MathOptInterface.Utilities.MatrixOfConstraints{Float64, MathOptInterface.Utilities.MutableSparseMatrixCSC{Float64, Int64, MathOptInterface.Utilities.OneBasedIndexing}, Vector{Float64}, DiffOpt.ProductOfSets{Float64}}, #unused#::MathOptInterface.ConstraintSet, ci::MathOptInterface.ConstraintIndex{MathOptInterface.VectorAffineFunction{Float64}, MathOptInterface.ExponentialCone})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/AiEiQ/src/Utilities/matrix_of_constraints.jl:623
  [4] get(model::MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ObjectiveContainer{Float64}, MathOptInterface.Utilities.FreeVariables, MathOptInterface.Utilities.MatrixOfConstraints{Float64, MathOptInterface.Utilities.MutableSparseMatrixCSC{Float64, Int64, MathOptInterface.Utilities.OneBasedIndexing}, Vector{Float64}, DiffOpt.ProductOfSets{Float64}}}, attr::MathOptInterface.ConstraintSet, ci::MathOptInterface.ConstraintIndex{MathOptInterface.VectorAffineFunction{Float64}, MathOptInterface.ExponentialCone})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/AiEiQ/src/Utilities/model.jl:459
  [5] (::DiffOpt.var"#4#5"{Vector{Float64}, MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ObjectiveContainer{Float64}, MathOptInterface.Utilities.FreeVariables, MathOptInterface.Utilities.MatrixOfConstraints{Float64, MathOptInterface.Utilities.MutableSparseMatrixCSC{Float64, Int64, MathOptInterface.Utilities.OneBasedIndexing}, Vector{Float64}, DiffOpt.ProductOfSets{Float64}}}})(ci::MathOptInterface.ConstraintIndex{MathOptInterface.VectorAffineFunction{Float64}, MathOptInterface.ExponentialCone}, r::UnitRange{Int64})
    @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/diff_opt.jl:310
  [6] _map_rows!(f::DiffOpt.var"#4#5"{Vector{Float64}, MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ObjectiveContainer{Float64}, MathOptInterface.Utilities.FreeVariables, MathOptInterface.Utilities.MatrixOfConstraints{Float64, MathOptInterface.Utilities.MutableSparseMatrixCSC{Float64, Int64, MathOptInterface.Utilities.OneBasedIndexing}, Vector{Float64}, DiffOpt.ProductOfSets{Float64}}}}, x::Vector{Matrix{Float64}}, model::MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ObjectiveContainer{Float64}, MathOptInterface.Utilities.FreeVariables, MathOptInterface.Utilities.MatrixOfConstraints{Float64, MathOptInterface.Utilities.MutableSparseMatrixCSC{Float64, Int64, MathOptInterface.Utilities.OneBasedIndexing}, Vector{Float64}, DiffOpt.ProductOfSets{Float64}}}, cones::DiffOpt.ProductOfSets{Float64}, #unused#::Type{MathOptInterface.VectorAffineFunction{Float64}}, #unused#::Type{MathOptInterface.ExponentialCone}, map_mode::DiffOpt.Nested{Matrix{Float64}}, k::Int64)
    @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/diff_opt.jl:337
  [7] map_rows(f::Function, model::MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ObjectiveContainer{Float64}, MathOptInterface.Utilities.FreeVariables, MathOptInterface.Utilities.MatrixOfConstraints{Float64, MathOptInterface.Utilities.MutableSparseMatrixCSC{Float64, Int64, MathOptInterface.Utilities.OneBasedIndexing}, Vector{Float64}, DiffOpt.ProductOfSets{Float64}}}, cones::DiffOpt.ProductOfSets{Float64}, map_mode::DiffOpt.Nested{Matrix{Float64}})
    @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/diff_opt.jl:366
  [8] Dπ
    @ ~/.julia/packages/DiffOpt/LLsVt/src/diff_opt.jl:308 [inlined]
  [9] _gradient_cache(model::DiffOpt.ConicProgram.Model)
    @ DiffOpt.ConicProgram ~/.julia/packages/DiffOpt/LLsVt/src/ConicProgram/ConicProgram.jl:158
 [10] reverse_differentiate!(model::DiffOpt.ConicProgram.Model)
    @ DiffOpt.ConicProgram ~/.julia/packages/DiffOpt/LLsVt/src/ConicProgram/ConicProgram.jl:250
 [11] reverse_differentiate!(model::MathOptInterface.Bridges.LazyBridgeOptimizer{DiffOpt.ConicProgram.Model})
    @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/jump_moi_overloads.jl:276
 [12] reverse_differentiate!(model::DiffOpt.Optimizer{MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.CachingOptimizer{SCS.Optimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}})
    @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/moi_wrapper.jl:367
 [13] reverse_differentiate!
    @ ~/.julia/packages/DiffOpt/LLsVt/src/jump_moi_overloads.jl:276 [inlined]
 [14] reverse_differentiate!(model::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{DiffOpt.Optimizer{MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.CachingOptimizer{SCS.Optimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
    @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/jump_moi_overloads.jl:272
 [15] reverse_differentiate!(model::Model)
    @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/jump_moi_overloads.jl:268
 [16] main()
    @ Main ~/.julia/dev/tmp.jl:20
 [17] top-level scope
    @ REPL[2]:1

@odow
Member

odow commented Jul 19, 2022

Ah. I fixed this one recently: jump-dev/MathOptInterface.jl#1941

As a work-around, just add this method to your code:

MOI.Utilities.set_with_dimension(::Type{MOI.ExponentialCone}, dim) = MOI.ExponentialCone()

@JinraeKim
Contributor Author

JinraeKim commented Jul 19, 2022

Dang, it seems to work, at least up to DiffOpt.reverse_differentiate!(model)!

How lucky I am! The person helping me is the very one who fixed that bug lol

@JinraeKim
Contributor Author

Sorry, but could you help me once more?
I followed https://jump.dev/DiffOpt.jl/stable/examples/chainrules_unit/#Reverse-mode-differentiation-of-the-solution-map but hit a dimension error:

using JuMP
import DiffOpt
import SCS
import ChainRulesCore


# workaround to avoid an error: https://github.com/jump-dev/DiffOpt.jl/issues/228#issuecomment-1188512885
MOI.Utilities.set_with_dimension(::Type{MOI.ExponentialCone}, dim) = MOI.ExponentialCone()


function main()
    N, d = 5, 10
    y = rand(N, d)
    # x_star = matrix_relu(y)
    # x_star = @run matrix_relu(y)
    models = repeat([Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer))], size(y)[2])
    x_star = logsumexp(y; models=models)
    # test
    j = 1
    model = models[j]
    x = model[:x]
    dl_dx = rand(size(x)...)
    MOI.set.(model, DiffOpt.ReverseVariablePrimal(), x[:], dl_dx[:])
    DiffOpt.reverse_differentiate!(model)
    c = model[:c]
    MOI.get.(model, DiffOpt.ReverseConstraintFunction(), c)
end


# function matrix_relu(
#     y::Matrix;
#     model = Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer))
# )
#     layer_size, batch_size = size(y)
#     empty!(model)
#     set_silent(model)
#     @variable(model, x[1:layer_size, 1:batch_size] >= 0)
#     @objective(model, Min, x[:]'x[:] -2y[:]'x[:])
#     @bp
#     optimize!(model)
#     return value.(x)
# end


function logsumexp(
        y::Matrix;
        models = repeat([Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer))], size(y)[2]),
        x_min = -1.0,
        x_max = 1.0,
    )
    N, d = size(y)
    x_star = zeros(N, d)
    for (j, model) in enumerate(models)
        empty!(model)
        set_silent(model)
        @variable(model, x[1:N])
        @variable(model, u[1:N])
        @variable(model, t)
        @constraint(model, sum(u) <= 1.0)
        @constraint(model, c[i=1:N], [y[i, j]*x[i] - t, 1.0, u[i]] in MOI.ExponentialCone())    
        @constraint(model, x_min .<= x .<= x_max) 
        @objective(model, Min, t)
        optimize!(model)
        x_star[:, j] = value.(x)
    end
    x_star
end


function ChainRulesCore.rrule(::typeof(logsumexp), y::Matrix{T}) where T
    models = repeat([Model(() -> DiffOpt.diff_optimizer(SCS.Optimizer))], size(y)[2])
    pv = logsumexp(y, models=models)
    function pullback_logsumexp(dl_dx)
        for (j, model) in enumerate(models)
            x = model[:x]
            MOI.set.(model, DiffOpt.ReverseVariablePrimal(), x[:], dl_dx[:, j])
            DiffOpt.reverse_differentiate!(model)
        end
        error("todo")
    end
    return pv, pullback_logsumexp
end
julia> main()
ERROR: DimensionMismatch("arrays could not be broadcast to a common size; got a dimension with lengths 3 and 11")
Stacktrace:
  [1] _bcs1
    @ ./broadcast.jl:516 [inlined]
  [2] _bcs
    @ ./broadcast.jl:510 [inlined]
  [3] broadcast_shape
    @ ./broadcast.jl:504 [inlined]
  [4] combine_axes
    @ ./broadcast.jl:499 [inlined]
  [5] instantiate
    @ ./broadcast.jl:281 [inlined]
  [6] lazy_combination(op::typeof(-), α::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, a::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, β::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, b::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true})
    @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/diff_opt.jl:255
  [7] lazy_combination
    @ ~/.julia/packages/DiffOpt/LLsVt/src/diff_opt.jl:263 [inlined]
  [8] lazy_combination(op::typeof(-), a::Vector{Float64}, b::Vector{Float64}, i::UnitRange{Int64}, I::UnitRange{Int64})
    @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/diff_opt.jl:269
  [9] _get_dA
    @ ~/.julia/packages/DiffOpt/LLsVt/src/ConicProgram/ConicProgram.jl:338 [inlined]
 [10] get(model::DiffOpt.ConicProgram.Model, #unused#::DiffOpt.ReverseConstraintFunction, ci::MathOptInterface.ConstraintIndex{MathOptInterface.VectorAffineFunction{Float64}, MathOptInterface.ExponentialCone})
    @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/diff_opt.jl:277
 [11] get(b::MathOptInterface.Bridges.LazyBridgeOptimizer{DiffOpt.ConicProgram.Model}, attr::DiffOpt.ReverseConstraintFunction, ci::MathOptInterface.ConstraintIndex{MathOptInterface.VectorAffineFunction{Float64}, MathOptInterface.ExponentialCone})
    @ MathOptInterface.Bridges ~/.julia/packages/MathOptInterface/AiEiQ/src/Bridges/bridge_optimizer.jl:1391
 [12] get(model::DiffOpt.Optimizer{MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.CachingOptimizer{SCS.Optimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}, attr::DiffOpt.ReverseConstraintFunction, ci::MathOptInterface.ConstraintIndex{MathOptInterface.VectorAffineFunction{Float64}, MathOptInterface.ExponentialCone})
    @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/moi_wrapper.jl:500
 [13] get(b::MathOptInterface.Bridges.LazyBridgeOptimizer{DiffOpt.Optimizer{MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.CachingOptimizer{SCS.Optimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}}, attr::DiffOpt.ReverseConstraintFunction, ci::MathOptInterface.ConstraintIndex{MathOptInterface.VectorAffineFunction{Float64}, MathOptInterface.ExponentialCone})
    @ MathOptInterface.Bridges ~/.julia/packages/MathOptInterface/AiEiQ/src/Bridges/bridge_optimizer.jl:1391
 [14] get(model::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{DiffOpt.Optimizer{MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.CachingOptimizer{SCS.Optimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}}}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, attr::DiffOpt.ReverseConstraintFunction, index::MathOptInterface.ConstraintIndex{MathOptInterface.VectorAffineFunction{Float64}, MathOptInterface.ExponentialCone})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/AiEiQ/src/Utilities/cachingoptimizer.jl:911
 [15] get(model::Model, attr::DiffOpt.ReverseConstraintFunction, con_ref::ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.VectorAffineFunction{Float64}, MathOptInterface.ExponentialCone}, VectorShape})
    @ DiffOpt ~/.julia/packages/DiffOpt/LLsVt/src/jump_moi_overloads.jl:20
 [16] _broadcast_getindex_evalf
    @ ./broadcast.jl:670 [inlined]
 [17] _broadcast_getindex
    @ ./broadcast.jl:643 [inlined]
 [18] getindex
    @ ./broadcast.jl:597 [inlined]
 [19] copy
    @ ./broadcast.jl:899 [inlined]
 [20] materialize(bc::Base.Broadcast.Broadcasted{Base.Broadcast.DefaultArrayStyle{1}, Nothing, typeof(MathOptInterface.get), Tuple{Base.RefValue{Model}, Base.RefValue{DiffOpt.ReverseConstraintFunction}, Vector{ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.VectorAffineFunction{Float64}, MathOptInterface.ExponentialCone}, VectorShape}}}})
    @ Base.Broadcast ./broadcast.jl:860
 [21] main()
    @ Main ~/.julia/dev/tmp.jl:26
 [22] top-level scope
    @ REPL[2]:1

@odow
Member

odow commented Jul 19, 2022

This is where I'm not sure, sorry. I've never used DiffOpt or dug into how it works. My guess is that it's still missing some features to handle ExponentialCone properly, or at least that ExponentialCone hasn't been tested. @matbesancon or @joaquimg are the people who would know.

@JinraeKim
Contributor Author

Yup, thank you so much @odow for your help!

@matbesancon
Collaborator

Thanks a lot @odow for the troubleshooting. Hmm, I'm wondering whether this is linked to the direct_model issue.
@blegat will know better about the lazy_combination part 😅

@odow
Member

odow commented Apr 12, 2023

@matbesancon is the exponential cone meant to work? #50 is open, and the docs mention only PSD and SOC.
