Optim.jl univariate bounded optimization confusing output when using Inf as bound

By : user3099612
Date : January 11 2021, 03:34 PM
As you note in your question, you are using a bounded optimization algorithm but passing an unbounded interval to it.
Citing the documentation (https://julianlsolvers.github.io/Optim.jl/latest/#user/minimization/), which is explicit about this: optimize is for minimizing a univariate function on a bounded interval. Internally, new candidate points can be proposed by the golden-section update below, so with x_upper = Inf the candidate itself becomes Inf, the objective is evaluated there, and the reported minimum of -Inf follows.
code :
new_minimizer = x_lower + golden_ratio*(x_upper-x_lower)
julia> objective1(Inf)

julia> objective2(Inf)
julia> optimize(objective1, 0.0001, 1.0e308)
Results of Optimization Algorithm
 * Algorithm: Brent's Method
 * Search Interval: [0.000100, 100000000000000001097906362944045541740492309677311846336810682903157585404911491537163328978494688899061249669721172515611590283743140088328307009198146046031271664502933027185697489699588559043338384466165001178426897626212945177628091195786707458122783970171784415105291802893207873272974885715430223118336.000000]
 * Minimizer: 1.000005e+308
 * Minimum: -Inf
 * Iterations: 1000
 * Convergence: max(|x - x_upper|, |x - x_lower|) <= 2*(1.5e-08*|x|+2.2e-16): false
 * Objective Function Calls: 1001
julia> objective1(1.0e307)

julia> objective1(1.0e308)
julia> results1.converged

julia> results2.converged
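The effect of an infinite bound on the golden-section update can be reproduced in a few lines of base Julia (a minimal sketch of the update quoted above; golden_ratio here is assumed to be the standard golden-section constant):

```julia
# Sketch: the golden-section step with an infinite upper bound.
golden_ratio = 0.5 * (3.0 - sqrt(5.0))   # ≈ 0.381966, assumed value

x_lower, x_upper = 0.0001, Inf
new_minimizer = x_lower + golden_ratio * (x_upper - x_lower)
# x_upper - x_lower is Inf, so the proposed point is Inf as well
println(new_minimizer)   # Inf
```

Once the proposed point is Inf, every subsequent objective evaluation happens at Inf, which is why the printed minimum is -Inf and the convergence check never succeeds.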

univariate nonlinear optimization with quadratic constraint in R

By : Nurullah Acun
Date : March 29 2020, 07:55 AM
If you solve g(x) = 0 for x by the usual quadratic formula, that just gives you another set of bounds on x. If your x^2 coefficient is negative, then g(x) > 0 between the two solutions; otherwise g(x) > 0 outside the solutions, i.e. within (-Inf, x1) and (x2, Inf).
In this case, g(x) > 0 for -1.927 < x < 2.59. So both of your constraints cannot be simultaneously achieved (g(x) is LESS THAN 0 for 600
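The feasible interval can be computed directly from the quadratic formula (a sketch with made-up coefficients; the actual a, b, c come from the original question):

```julia
# Hypothetical quadratic constraint g(x) = a*x^2 + b*x + c with a < 0,
# so g(x) > 0 strictly between the two real roots.
a, b, c = -1.0, 1.0, 6.0           # made-up coefficients for illustration
g(x) = a * x^2 + b * x + c

disc = b^2 - 4 * a * c             # discriminant; assumed positive here
x1 = (-b - sqrt(disc)) / (2 * a)
x2 = (-b + sqrt(disc)) / (2 * a)
lo, hi = min(x1, x2), max(x1, x2)  # g(x) > 0 exactly on (lo, hi)

println((lo, hi))                  # (-2.0, 3.0) for these coefficients
```

With a negative leading coefficient, any point between the roots satisfies g(x) > 0, and any point outside them violates it, which is what makes the two original constraints incompatible.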
R optimization with optim

By : user3654281
Date : March 29 2020, 07:55 AM
There is nothing wrong with the code in terms of a programming error. If you feel it shouldn't return the upper limit, then you probably made a mistake in the definition of the function in your code.
First of all, when you only have one parameter to estimate, your method should be Brent, like this:
code :
> optim(lambda,likelihood, method='Brent', lower=1.5, upper=200)$par
[1] 200
plot(likelihood, from=-100, to=1000, xlab='lambda')
NLopt with univariate optimization

By : ovbg
Date : March 29 2020, 07:55 AM
There are a couple of things that you're missing here.

You need to specify the gradient (i.e. the first derivative) of your function within the function itself. See the tutorial and examples on the GitHub page for NLopt. Not all optimization algorithms require this, but the one you are using, LD_MMA, does; see the documentation for a listing of the various algorithms and which of them require a gradient.

You should specify the tolerance for the conditions that must hold before you "declare victory" (i.e. decide that the function is sufficiently optimized). This is the xtol_rel!(opt, 1e-4) in the example below. See also ftol_rel! for another way to specify a different tolerance condition. According to the documentation, xtol_rel will "stop when an optimization step (or an estimate of the optimum) changes every parameter by less than tol multiplied by the absolute value of the parameter", and ftol_rel will "stop when an optimization step (or an estimate of the optimum) changes the objective function value by less than tol multiplied by the absolute value of the function value." See the "Stopping criteria" section of the documentation for more information on the various options here.

The function that you are optimizing should have a unidimensional (scalar) output. In your example, your output is a vector (albeit of length 1), since x.^2 denotes a vector operation with a vector output. If your objective function doesn't ultimately output a single number, then it won't be clear what your optimization objective is. What does it mean to minimize a vector? You could minimize the norm of a vector, for instance, but a whole vector cannot be minimized.
code :
using NLopt

count = 0 # keep track of # function evaluations

function myfunc(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = 2*x[1]   # gradient of x^2, required by LD_MMA
    end

    global count
    count::Int += 1

    x[1]^2   # return a scalar, not a vector
end

opt = Opt(:LD_MMA, 1)
xtol_rel!(opt, 1e-4)   # stopping tolerance discussed above

min_objective!(opt, myfunc)
(minf,minx,ret) = optimize(opt, [1.234])

println("got $minf at $minx (returned $ret)")
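If you would rather not write a gradient at all, a derivative-free algorithm sidesteps the first point entirely (a sketch, assuming the same NLopt.jl API as above; LN_NELDERMEAD is one gradient-free choice):

```julia
using NLopt

# Derivative-free variant: LN_NELDERMEAD never asks for the gradient,
# so the grad argument can simply be ignored in the objective.
opt = Opt(:LN_NELDERMEAD, 1)
xtol_rel!(opt, 1e-4)
min_objective!(opt, (x, grad) -> x[1]^2)   # scalar output, no gradient code

(minf, minx, ret) = optimize(opt, [1.234])
println("got $minf at $minx (returned $ret)")
```

Derivative-free methods typically need more function evaluations than gradient-based ones, so this is a convenience trade-off rather than a free improvement.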
Error in optim(): searching for global minimum for a univariate function

By : Terrence
Date : March 29 2020, 07:55 AM
Minimization or maximization?
Although ?optim says it can do maximization, that is only mentioned in brackets, so minimization is the default:
code :
fn: A function to be minimized (or maximized) ...
EMV <- function(data, par) {

    Mi  <- par
    Phi <- 2
    N   <- NROW(data)

    Resultado <- log(Mi/(Mi + Phi))*sum(data) + N*Phi*log(Phi/(Mi + Phi))
    return(-1 * Resultado)  # negate: optim minimizes, so this maximizes the log-likelihood
}
optim(par = theta, fn = EMV, data = data, method = "Brent", lower = 0, upper = 1E5)
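The negation trick is easy to sanity-check without any solver (a base-Julia sketch with a toy objective made up for illustration):

```julia
# Maximizing f is the same as minimizing -f.
f(x) = -(x - 2.0)^2            # toy concave objective, peak at x = 2

xs = range(0.0, 4.0; length = 401)
x_max  = xs[argmax(f.(xs))]    # direct maximization over the grid
x_nmin = xs[argmin(-f.(xs))]   # minimize the negated objective instead

println((x_max, x_nmin))       # both recover the same point
```

This is exactly why the likelihood above returns -1 * Resultado: handing the negated log-likelihood to a minimizer yields the maximum-likelihood estimate.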
Optimization Error: Box constraint optimization (Julia Optim.jl)

By : larryrenna
Date : March 29 2020, 07:55 AM
You have to fix two things in your code and all will work:
ux must contain floats, so you should change its definition to ux = [5.0, 10.0]. init_guess must be within the optimization bounds, so you can e.g. set it to init_guess = (lx+ux)/2.
code :
julia> result = optimize(min_function, lx, ux, init_guess, Fminbox(LBFGS()))
 * Status: success

 * Candidate solution
    Minimizer: [5.00e+00, 5.00e+00]
    Minimum:   1.417223e+03

 * Found with
    Algorithm:     Fminbox with L-BFGS
    Initial Point: [2.50e+00, 7.50e+00]

 * Convergence measures
    |x - x'|               = 8.88e-16 ≰ 0.0e+00
    |x - x'|/|x'|          = 1.26e-16 ≰ 0.0e+00
    |f(x) - f(x')|         = 0.00e+00 ≤ 0.0e+00
    |f(x) - f(x')|/|f(x')| = 0.00e+00 ≤ 0.0e+00
    |g(x)|                 = 8.87e+01 ≰ 1.0e-08

 * Work counters
    Seconds run:   0  (vs limit Inf)
    Iterations:    6
    f(x) calls:    2571
    ∇f(x) calls:   2571
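Both fixes can be checked in isolation without running the solver (a sketch; lx = [0.0, 5.0] is an assumption here, inferred from the printed Initial Point):

```julia
lx = [0.0, 5.0]              # lower bounds (assumed from the output above)
ux = [5.0, 10.0]             # upper bounds as floats, not ints
init_guess = (lx + ux) / 2   # midpoint is always strictly inside the box

@assert eltype(ux) == Float64           # float bounds, as required
@assert all(lx .<= init_guess .<= ux)   # starting point respects the box
println(init_guess)                     # [2.5, 7.5], matching the Initial Point above
```

Taking the midpoint of the bounds is a convenient default whenever Fminbox rejects a starting point that sits outside (or exactly on) the box.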