AI algorithm for multi dimension solution optimization / prediction

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-21 06:44:34

Question


I have 6 int parameters ranging from 0 to 100

There are 100^6 total combinations of the numbers, and each combination gives a result ranging from approximately -10000 to 100000, or even more.

Input data example:
MySimulation (57, 78, 20, 10, 90, 50) = 300  <- Best Result
MySimulation (50, 80, 10, 90, 35, 8) = 200
MySimulation (4, 55, 40, 99, 40, 50) = -50 <- Worst Result

The higher the result, the better the combination of numbers. I already have the calculation that produces a result; I only need AI to find a better combination of numbers that gives a higher result.

Output data example:
55, 70, 25, 15, 95, 52   <- Let's say this combination of
                            numbers was chosen by the AI and
                            would give a result of 400 with
                            my simulation

Note: The order of the numbers is also important.

How can I use AI to reduce the search space of 100^6 combinations, in order to get the best result without iterating through all of them?

I plan to use Accord.NET in C# (or is there something better?). An example of code would be helpful because I am new to AI.


Answer 1:


Welcome to the domain of Multi-Objective Optimization. This is a field I wrote a dissertation on. There are numerous algorithms for solving problems of this kind, but perhaps the two most-famous are NSGA-II and SPEA2.

Of course, you only have a single objective: whatever your scoring function yields. I'd submit that multi-objective algorithms still apply, because you're interested not just in a single solution, but in populations of them.

Might I point you to http://jmetalnet.sourceforge.net/?

The idea is that you generate populations of random vectors containing inputs that range across your domain of 100^6 possible solutions. Those populations are mutated and mated with each other to generate new solutions, and the combined populations are then down-selected so that the more preferred solutions survive the evolution.

In the multi-objective world, you might have challenges comparing the fitness of different solutions. But in your single-objective world, comparing fitnesses is easy: you just need to decide whether you want higher numbers or lower. It seems like you want higher.

Outline

  1. Create random population of solutions.
  2. Randomly mutate/crossover across your solutions.
  3. Calculate fitness of everyone, and sort.
  4. Down-sample back to the initial population size of best solutions.
  5. Repeat steps 2-4 until convergence (e.g. until average fitness exceeds a threshold).
  6. Output final generation.

Results:

Here's a rough analysis; note that you could do much better by averaging the result of (say) 20 runs at each parameter level. Right off the bat, you can tell that the mutation rate should stay low, and, obviously, a higher population size can help (up to a point of convergence).

Results are formatted as average score before, after; the maximum possible score is 600.

PopSize=100,NumGens=50,MutRate=0.2,CrossRate=0.8
295.23,542.12

PopSize=100,NumGens=500,MutRate=0.2,CrossRate=0.8
298.53,565

PopSize=1000,NumGens=50,MutRate=0.2,CrossRate=0.8
301.814,579.334

PopSize=10000,NumGens=500,MutRate=0.2,CrossRate=0.8
299.8901,588

PopSize=1000,NumGens=50,MutRate=0.4,CrossRate=0.8
306.22,385.55

Code

I wrote this code in like 20 minutes, so it's not meant to be elegant or awesome. I hope it just gets the point across.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

namespace moo_in_csharp
{
    internal class Individual
    {
        public int[] Decisions;
        public double Fitness;
        private int _numdecisions = 6;

        /// <summary>
        /// Default constructor.
        /// </summary>
        public Individual()
        {
            Decisions = new int[_numdecisions];
        }

        /// <summary>
        /// Replaces first half of decisions with those of the mate.
        /// </summary>
        /// <param name="mate"></param>
        public void Crossover(Individual mate)
        {
            int crossoverPoint = _numdecisions / 2;
            for (int i = 0; i < crossoverPoint; i++)
            {
                Decisions[i] = mate.Decisions[i];
            }
        }

        /// <summary>
        /// Simple fitness function that computes sum of a+b+c+d+e+f.
        /// </summary>
        public double Evaluate()
        {
            Fitness = Decisions.Sum();
            return Fitness;
        }

        /// <summary>
        /// Assigns random values to its decisions.
        /// </summary>
        public void Generate()
        {
            for (int i = 0; i < _numdecisions; i++)
            {
                Decisions[i] = Program.rand.Next(0, 101);
            }
        }

        /// <summary>
        /// Mutates by re-randomizing every decision. (A gentler variant
        /// would change only one randomly chosen decision.)
        /// </summary>
        public void Mutate()
        {
            for (int i = 0; i < _numdecisions; i++)
            {
                Decisions[i] = Program.rand.Next(0, 101);
            }
        }
    }

    internal class Program
    {
        public static Random rand = new Random();

        private static void Main(string[] args)
        {
            //parameters
            int populationSize = 100;
            int numGenerations = 50;
            double mutationRate = 0.2;
            double crossoverRate = 0.8;

            //build initial population
            List<Individual> solutions = new List<Individual>();
            for (int i = 0; i < populationSize; i++)
            {
                var solution = new Individual();
                solution.Generate();
                solution.Evaluate();
                solutions.Add(solution);
            }

            //calculate average score of initial population
            var averageScoreBefore = solutions.Select(x => x.Evaluate()).Average();

            //iterate across number of generations
            for (int i = 0; i < numGenerations; i++)
            {
                //build offspring by mating sequential pairs of solutions
                var offspring = new List<Individual>();
                for (int e = 0; e < solutions.Count - 1; e += 2)
                {
                    if (rand.NextDouble() < crossoverRate)
                    {
                        var newIndividual = new Individual();
                        solutions[e].Decisions.CopyTo(newIndividual.Decisions, 0);
                        newIndividual.Crossover(solutions[e + 1]);
                        offspring.Add(newIndividual);
                    }
                }

                //add our offspring to our solutions
                solutions.AddRange(offspring);

                //mutate solutions at a low rate
                foreach (var solution in solutions)
                {
                    if (rand.NextDouble() < mutationRate)
                    {
                        solution.Mutate();
                    }
                }

                //sort our solutions and down-sample to initial population size
                solutions = solutions.OrderByDescending(x => x.Evaluate()).ToList();
                solutions = solutions.Take(populationSize).ToList();
            }

            //calculate average score after
            var averageScoreAfter = solutions.Select(x => x.Evaluate()).Average();
            Debug.WriteLine(averageScoreBefore + "," + averageScoreAfter);
        }
    }
}

Other Notes

Your runtime will mostly depend on your fitness-scoring function. For simple math functions, runtime won't be a problem. Obviously, if a real process is involved instead, you'd want to keep the number of evaluations to a minimum. This is what I studied for my Ph.D., for which I developed a new algorithm called GALE:

  • http://ieeexplore.ieee.org/document/7105950/



Answer 2:


You could try a metaheuristic / stochastic local search algorithm to solve this type of problem, as mentioned in many comments and in the solution by BurnsCA.

Simulated annealing and genetic algorithms are examples, but there are many more. Whether this approach is feasible depends on how quickly you can calculate changes in your objective function, i.e. evaluate the quality of a solution and of changes to it.

If your simulation has the property that minor input changes dramatically change the result, this may or may not be a better approach than brute-forcing through some number of random assignments and taking the best one. You'll have to experiment.

Implementing these algorithms themselves is usually not very complicated, and you could even use a library like NMath; e.g. take a look at

http://www.centerspace.net/doc/NMath/user/simulated-annealing-85273.htm

Your "objective function", the value that you're attempting to maximize (or minimize) is the output of the simulation.

Although the implementation of the algorithms themselves is not hard, efficient implementation of various aspects of them are.

What you need to do is define a neighbourhood function, i.e. a way of getting from one solution (or state, if you prefer) to another. In your case, this could involve the code example suggested by BurnsCA, which performs a 1-opt move, because it randomly selects a new value for one parameter. You could also try 2-opt moves, or more, if 1-opt doesn't show improvement quickly enough.

The next thing you need is a way to evaluate the effect of taking a move. In other words, what is the difference in objective function between your current value and the value you will have if you take the move. If at all possible, you'll want a way to evaluate a move, without having to perform the entire simulation again every time.
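For intuition, here is a minimal sketch of such an incremental evaluation, using the simple sum objective from the GA example above as a stand-in for the real simulation. This shortcut only works when the objective decomposes per parameter; with an opaque simulation you may have no choice but to re-run it:

```csharp
using System;
using System.Linq;

internal static class DeltaEvalDemo
{
    private static void Main()
    {
        var rand = new Random();
        int[] solution = { 57, 78, 20, 10, 90, 50 };

        // Full evaluation: only needed once, for the starting solution.
        int score = solution.Sum();

        // 1-opt move: pick one parameter and a new value for it.
        int index = rand.Next(6);
        int newValue = rand.Next(101);

        // Incremental evaluation: for a sum objective, the score changes by
        // exactly (newValue - oldValue), so no full re-evaluation is needed.
        int newScore = score + (newValue - solution[index]);

        solution[index] = newValue;
        score = newScore;

        Console.WriteLine(score == solution.Sum()); // prints "True"
    }
}
```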

The most naive approach (often called descent) is to move to a random neighbour solution and, if it has a better (in your case, higher) objective value, make it the new solution, then repeat until no further improvement can be found.

The problem with that approach is that you'll get stuck in a local maximum pretty quickly. Simulated annealing provides one way to avoid that: instead of accepting only improvements, it accepts non-improving moves with a probability that depends on the current 'temperature', a variable you decrease every iteration according to an annealing schedule you define.
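To make the acceptance rule concrete, here is a minimal simulated-annealing sketch in the same style as the other answers. The Score function is a stand-in for the questioner's simulation, and the starting temperature and geometric cooling factor are arbitrary illustrative choices that would need tuning:

```csharp
using System;
using System.Linq;

internal static class AnnealingDemo
{
    // Stand-in for the real simulation; replace with your own scoring.
    private static int Score(int[] p) => p.Sum();

    private static void Main()
    {
        var rand = new Random();

        // Random starting solution of six parameters in [0, 100].
        var current = new int[6];
        for (int i = 0; i < 6; i++) current[i] = rand.Next(101);
        int currentScore = Score(current);

        var best = (int[])current.Clone();
        int bestScore = currentScore;

        double temperature = 100.0;

        for (int iter = 0; iter < 10000; iter++)
        {
            // 1-opt neighbour move: change one randomly chosen parameter.
            var candidate = (int[])current.Clone();
            candidate[rand.Next(6)] = rand.Next(101);
            int candidateScore = Score(candidate);

            int delta = candidateScore - currentScore;

            // Always accept improvements; accept non-improving moves with a
            // probability exp(delta / T) that shrinks as the temperature drops.
            if (delta > 0 || rand.NextDouble() < Math.Exp(delta / temperature))
            {
                current = candidate;
                currentScore = candidateScore;
                if (currentScore > bestScore)
                {
                    best = (int[])current.Clone();
                    bestScore = currentScore;
                }
            }

            temperature *= 0.999; // simple geometric cooling schedule
        }

        Console.WriteLine(string.Join(", ", best) + " -> " + bestScore);
    }
}
```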

Since the crux of implementing these methods is not in the overall algorithms themselves (although it does take a bit of time) but in the implementation of your neighbourhood and neighbourhood-evaluation functions, I personally don't think there's a lot of time saved by using some framework for that.

If this is a one-time thing and the above is not feasible, you could also consider parallelizing the computation across thousands of machines to obtain an optimal solution, e.g. with Azure Batch or a similar service. Since you can test 50 million combinations in 30 minutes (on one machine, without parallelizing?), you could in principle provision 20,000 virtual machines and test all combinations in 30 minutes.




Answer 3:


You don't need a machine learning framework to implement a local optimization algorithm.

// Example calculation (a stand-in for the real simulation)
public int Calculation(int a, int b, int c, int d, int e, int f)
{
    int result = 0;
    unchecked
    {
        result = (int)((double)a * Math.Tan((double)b * Math.PI / ((double)c + 0.001)));
        result += d + e + f;
    }

    return result;
}


var rand = new Random();

// The currently best known solution set
var best = new int[6] { 1, 1, 1, 1, 1, 1 };

// Score for best known solution set
int bestScore = int.MinValue;

// loop variables
int score;
var work = new int[6];

// Loop as appropriate.
for (int i=0; i<500; i++)
{
    // Copy over the best solution to modify it
    best.CopyTo(work, 0);

    // Change one of the parameters of the best solution
    work[rand.Next(6)] = rand.Next(101);

    // Calculate new score with modified solution
    score = Calculation(work[0], work[1], work[2], work[3], work[4], work[5]);

    // Only keep this solution if it's better than anything else
    if (score > bestScore)
    {
        work.CopyTo(best, 0);
        bestScore = score;
    }
}

The above converges on a solution pretty quickly, mostly because the Calculation function is so friendly. After 500 iterations:

int[6] { 98, 1, 2, 98, 99, 100 }

Where an optimal solution would be { 100, 1, 2, 100, 100, 100 }

Note that this local-optimization approach only works for mostly linear problems.

Not shown, but you'd also want to try different starting values, or rerun the entire thing a number of times.
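That restart idea can be sketched by wrapping the hill-climbing loop above in an outer loop that starts from a fresh random point each time and keeps the overall best. The restart and iteration counts here are arbitrary, and Calculation is the same example function as above:

```csharp
using System;

internal static class RestartDemo
{
    // Same example calculation as above (stand-in for the real simulation).
    private static int Calculation(int a, int b, int c, int d, int e, int f)
    {
        unchecked
        {
            int result = (int)((double)a * Math.Tan((double)b * Math.PI / ((double)c + 0.001)));
            return result + d + e + f;
        }
    }

    private static void Main()
    {
        var rand = new Random();
        var overallBest = new int[6];
        int overallBestScore = int.MinValue;

        // Outer loop: independent restarts from random starting points.
        for (int restart = 0; restart < 20; restart++)
        {
            // Fresh random starting solution for this restart.
            var best = new int[6];
            for (int i = 0; i < 6; i++) best[i] = rand.Next(101);
            int bestScore = Calculation(best[0], best[1], best[2], best[3], best[4], best[5]);

            // Inner loop: the same 1-opt hill climb as in the answer above.
            var work = new int[6];
            for (int iter = 0; iter < 500; iter++)
            {
                best.CopyTo(work, 0);
                work[rand.Next(6)] = rand.Next(101);
                int score = Calculation(work[0], work[1], work[2], work[3], work[4], work[5]);
                if (score > bestScore)
                {
                    work.CopyTo(best, 0);
                    bestScore = score;
                }
            }

            // Keep the best result across all restarts.
            if (bestScore > overallBestScore)
            {
                best.CopyTo(overallBest, 0);
                overallBestScore = bestScore;
            }
        }

        Console.WriteLine(string.Join(", ", overallBest) + " -> " + overallBestScore);
    }
}
```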

There's a nifty image on the Wikipedia page for hill-climbing algorithms that captures the essence of what this approach is trying to do.



Source: https://stackoverflow.com/questions/46584964/ai-algorithm-for-multi-dimension-solution-optimization-prediction
