Subject: Re: [boost] Proposal for a Differential Evolution C++ library
From: Topher Cooper (topher_at_[hidden])
Date: 2012-01-10 14:00:03
On 1/10/2012 1:37 AM, Adrian Michel wrote:
> The way I see it, the only common point between different optimization
> algorithms (if they are generic enough) is the objective function,
> which should have a very simple prototype:
>
> double f( std::vector< double > );
>
> so in theory if different algorithms are implemented to accept the
> same types (in C++ sense) of objective functions, then you could run
> them interchangeably between GA, DE etc without any modification to
> the function or algorithm.
That ignores multi-objective optimization. In that case the return type
would be, at a minimum, std::vector<double>, and might require information
on which objectives are to be maximized.
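A minimal sketch of what such a multi-objective interface might look
like (the name multi_objective and the per-objective direction flags are
mine, purely illustrative, not a proposed interface):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical multi-objective function: each call returns one value per
// objective, and a parallel flag vector records which objectives are to
// be maximized (true) versus minimized (false).
struct multi_objective {
    std::vector<bool> maximize;  // per-objective direction

    std::vector<double> operator()(const std::vector<double>& x) const {
        // Toy example: two conflicting objectives over the same input.
        double sum = 0.0, sumsq = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) {
            sum   += x[i];
            sumsq += x[i] * x[i];
        }
        std::vector<double> out;
        out.push_back(sum);    // objective 0: e.g., to be maximized
        out.push_back(sumsq);  // objective 1: e.g., to be minimized
        return out;
    }
};
```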
Also, this assumes for the input that we are talking about straight real
numerical optimization. Even if we stay with numerical optimization we
could be dealing with complex numbers, vector quantities, matrix
quantities, perhaps even more sophisticated types like tensors.
And still within what most would consider the bounds of classic
numerical optimization would be discrete constraints (of course, Goedel
numbering allows us to encode /anything/ formal as one big integer which
can then be encoded as an arbitrarily large vector of doubles, but then
why bother with program types at all? ... just Goedel number it).
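One way to leave the input type open, as a sketch: template the
algorithm on the candidate representation, so the same driver works for
real vectors, complex numbers, matrices, and so on. The names below
(evaluate, real_norm, complex_mag) are mine and purely illustrative:

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// The objective's input type is a template parameter; the algorithm
// never needs to know what the candidate representation actually is.
template <class Candidate, class Objective>
double evaluate(const Objective& f, const Candidate& c) {
    return f(c);
}

// Objective over a real vector: sum of squares.
struct real_norm {
    double operator()(const std::vector<double>& v) const {
        double s = 0.0;
        for (std::size_t i = 0; i < v.size(); ++i) s += v[i] * v[i];
        return s;
    }
};

// Objective over a complex number: its magnitude.
struct complex_mag {
    double operator()(const std::complex<double>& z) const {
        return std::abs(z);
    }
};
```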
It's a small step from classic numerical optimization to numerical
approximation/fitting, where the task is to select a formula in addition
to selecting optimum parameters for it (currently on my todo list is to
look at the "Eureqa" program, e.g.,
http://hplusmagazine.com/2011/03/25/eureqa-signs-of-the-singularity/,
which is of this type).
But the suggestion was for a general learning library. That's perhaps
too broad, but combinatorial optimization or metaheuristic optimization
seems reasonable. And these lead to a much broader range of inputs.
Furthermore, many problems are handled much more efficiently with an
objective delta function rather than with the objective function
directly. The input here is the previous state and the changes made to
it. For example, I'm currently developing a "School Timetabling"
package. Re-evaluating the effects on the objective function of all the
assignments (including their side effects on other assignments, e.g.,
when a timeslot is assigned to a section one term, it is highly
desirable but not required that the corresponding timeslot be assigned
to the same section in the next term) that remain the same between each
increment would add a very large (albeit, linear in some descriptions of
problem size) cost to the system.
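To make the delta idea concrete, here is an illustrative sketch (my own
names, and an assumed toy objective of sum-of-squares): the object keeps
a running score and prices a single move in O(1), where re-scoring the
whole candidate would be O(n).

```cpp
#include <cstddef>
#include <vector>

// Illustrative delta-evaluation objective. A full evaluation walks the
// whole state; delta() prices one change without re-walking anything.
class delta_objective {
    std::vector<double> state_;
    double score_;
public:
    explicit delta_objective(const std::vector<double>& s)
        : state_(s), score_(0.0) {
        for (std::size_t i = 0; i < s.size(); ++i)
            score_ += s[i] * s[i];  // one full O(n) evaluation, up front
    }

    double score() const { return score_; }

    // Cost change if position i were set to v, without applying the move.
    double delta(std::size_t i, double v) const {
        return v * v - state_[i] * state_[i];
    }

    // Commit the move and update the running score incrementally.
    void apply(std::size_t i, double v) {
        score_ += delta(i, v);
        state_[i] = v;
    }
};
```

An annealer can then ask for delta() when deciding whether to accept a
move, and only call apply() on acceptance.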
But just looking at the objective function is too narrow a view. We
really only have two commonalities to every algorithm: an objective
function and a possible solution (or possible partial solution)
representation. The latter is clearly very open-ended while, as I
discussed above, even the interface to the former has a lot of
variation. But, addressing only what is universal misses the point, I
think.
There /is/ a lot of pairwise overlap in various algorithms. Almost all
involve some concept of mutators (to choose one term for them). If I
want, for example, to try both a genetic algorithm and simulated
annealing I might wish to use the same "local operators" in both. That
would apply even more strongly if I wanted to compare two closely
related algorithms, like, for example, simulated annealing and quantum
annealing (no, the latter does not require a quantum computer), or
neural networks and state machines. Also, the right representation
would simplify constructing hybrid algorithms (e.g., in my school
timetabling system, I first apply a greedy rule-based system to build a
"fairly good" solution that meets all the hard constraints then apply
simulated annealing with validity preserving mutators; or one could
apply simulated annealing to each mutation in a genetic algorithm, or
one might accept candidates in a genetic algorithm using the Boltzmann
criterion from simulated annealing, or, again from my current work, my
validity preserving operator in my SA phase is to randomly undo some
number of existing assignments and then use the rule based system with
some random noise added to bring it back to a complete valid solution).
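A sketch of what sharing "local operators" might look like in code (the
names candidate, nudge_mutator, and mutate_all are mine, purely
illustrative): both an SA step and a GA mutation step would be written
against the same mutator interface, parametrized on the candidate
representation.

```cpp
#include <cstddef>
#include <vector>

typedef std::vector<double> candidate;

// A local operator: perturb one position of a candidate by a fixed step.
struct nudge_mutator {
    double step;
    explicit nudge_mutator(double s) : step(s) {}
    void operator()(candidate& c, std::size_t i) const { c[i] += step; }
};

// A trivial stand-in for an algorithm driver. A real SA or GA loop
// would be templated on Mutator in exactly the same way, so the same
// operator plugs into either.
template <class Mutator>
void mutate_all(candidate& c, const Mutator& m) {
    for (std::size_t i = 0; i < c.size(); ++i) m(c, i);
}
```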
I think delaying inclusion until all this has been nailed down would be
a mistake. But thinking about traits, concepts, standards (e.g., does
optimization mean minimization or maximization?), decisions about how
both algorithm-specific and somewhat shared algorithm parametrization
(e.g., cooling schedule, population size) are to be specified, mutator
interfaces (parametrized, presumably, with candidate solution
representation), and certainly things that I have not yet considered,
before locking it down, would probably be useful -- avoiding having to
put too thick an adaptor interface around it later.
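As one example of the kind of traits decision I mean, the "minimize or
maximize?" question could be settled by having the objective advertise
its direction, so algorithms compare candidates through a single helper
instead of hard-coding '<' or '>'. Again a sketch of the idea only, not
a proposed interface:

```cpp
// Direction tags an objective could advertise via a nested typedef
// or a traits specialization.
struct minimize_tag {};
struct maximize_tag {};

template <class Direction>
struct better;  // primary template intentionally undefined

template <>
struct better<minimize_tag> {
    static bool apply(double a, double b) { return a < b; }
};

template <>
struct better<maximize_tag> {
    static bool apply(double a, double b) { return a > b; }
};
```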
I'd volunteer to work on this, but this note is one of the biggest
breaks, apart from life necessities, that I've taken from working on
this project for several weeks (I've run over, of course, and from here
on out, I'm working for free). Perhaps later I'll be able to make a
commitment -- but I am not even making a commitment to making a
commitment later.
Topher
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk