 # Using MathNet to convert an Excel Solver problem to C#

#1

Hello, I’m completely new to Math.Net and hoping for some guidance on how to solve a particular problem I’m having.

Essentially, what I want to do is recreate the Solver functionality from Excel in C#. Excel's Solver uses a Generalized Reduced Gradient (GRG2) algorithm to minimize an error value by modifying several dependent variables.

More specifically, the error value is calculated as:

error = √( Σ(calculated_value − measured_value)² / number_of_measurements )

I want to determine values for the dependent variables to minimize the error value.
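For concreteness, here's a rough sketch of the error calculation in C# (where `CalculateValue` is just a placeholder for my complex model, and `measured` holds the measurement data):

```csharp
// Sketch of the RMSE-style error described above.
// CalculateValue(parameters, i) is a stand-in for the complex model
// that produces the calculated value for measurement i.
double Error(double[] parameters, double[] measured)
{
    double sumSquares = 0.0;
    for (int i = 0; i < measured.Length; i++)
    {
        double diff = CalculateValue(parameters, i) - measured[i];
        sumSquares += diff * diff;
    }
    return Math.Sqrt(sumSquares / measured.Length);
}
```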

The function to calculate `calculated_value` for each measurement is very complex, so I don’t know if it’s possible to generate things like gradients and partial derivatives which seem to be needed for many nonlinear least squares algorithms.

I’m essentially looking for a function that can find values for the dependent variables by minimizing the calculated overall error as described above.

Hopefully I’ve given enough information. It’s probably obvious that I don’t have a lot of technical knowledge in this domain right now.

(Peter Vanderwaart) #2

Since, apparently, no one has a good idea, let me post a couple of experiments that demonstrate how the code can be put together. Neither of these worked especially well, though.

The first example uses the BfgsBMinimizer. It uses a finite-difference approximation of the gradient (here supplied by `ForwardDifferenceGradientObjectiveFunction`), so you don’t have to work out the calculus yourself.

```csharp
private void LogFit_Click(object sender, EventArgs e)
{
    Random RanGen = new Random();
    Vector<double> x = new DenseVector(100);
    Vector<double> y = new DenseVector(100);

    // create data set: noisy linear data, to be fit with a three-parameter quadratic
    for (int i = 0; i < 100; i++) x[i] = Convert.ToDouble(i); // values span 0 to 99
    for (int i = 0; i < 100; i++)
    {
        double y_val = x[i];
        y[i] = y_val + 0.1 * RanGen.NextDouble() * y_val;  // add error term scaled to y-value
    }

    // create optimizer
    // f0: vector of estimated y values at point p (quadratic model)
    var f0 = new Func<Vector<double>, Vector<double>>((p) => x.Map(z1 => p[0] + p[1] * z1 + p[2] * z1 * z1));

    // f1: sum squared error at point p
    var f1 = new Func<Vector<double>, double>((p) => ConsoleMsg(y, f0(p), MathNet.Numerics.Distance.SSD(f0(p), y)));
    var obj = ObjectiveFunction.Value(f1);
    var fdgof = new ForwardDifferenceGradientObjectiveFunction(obj, new DenseVector(new[] { -1000.0, -1000.0, -1000.0 }), new DenseVector(new[] { 1000.0, 1000.0, 1000.0 }));
    var solver = new BfgsBMinimizer(0.1, 0.1, 0.1, 10000);

    try
    {
        var result = solver.FindMinimum(fdgof, new DenseVector(new[] { -1000.0, -1000.0, -1000.0 }), new DenseVector(new[] { +1000.0, +1000.0, +1000.0 }), new DenseVector(new[] { 0.0, 1.0, 0.0 }));

        Console.WriteLine(result.MinimizingPoint.ToString());
        Console.WriteLine("# iterations = " + result.Iterations.ToString());
        Console.WriteLine("Reason = " + result.ReasonForExit.ToString());
    }
    catch (Exception eSolve)
    {
        Console.WriteLine(eSolve.Message);
    }
}

// debug helper referenced above: logs the current error and passes it through
private double ConsoleMsg(Vector<double> y, Vector<double> estimates, double err)
{
    Console.WriteLine("Err = " + err.ToString());
    return err;
}
```

(Peter Vanderwaart) #3

The second example uses NelderMeadSimplex. I believe this method does not use gradients at all. Note that the code that evaluates the function has been put in a separate method.

```csharp
private void btnMinFit_Click(object sender, EventArgs e)
{
    Random RanGen = new Random();
    x = new double[100];   // x and y are class-level fields, also used by LogEval below
    y = new double[100];

    // fit exponential expression with three parameters
    double a = 5.0;
    double b = 0.5;
    double c = 0.05;
    // create data set
    for (int i = 0; i < 100; i++) x[i] = 10 + Convert.ToDouble(i) * 90.0 / 99.0; // values span 10 to 100
    for (int i = 0; i < 100; i++)
    {
        double y_val = a + b * Math.Exp(c * x[i]);
        y[i] = y_val + 0.1 * RanGen.NextDouble() * y_val;  // add error term scaled to y-value
    }

    var f1 = new Func<Vector<double>, double>(v => LogEval(v));
    var obj = ObjectiveFunction.Value(f1);
    var solver = new NelderMeadSimplex(1e-5, maximumIterations: 10000);
    var initialGuess = new DenseVector(new[] { 3.0, 6.0, 0.6 });

    var result = solver.FindMinimum(obj, initialGuess);

    Console.WriteLine(result.MinimizingPoint.ToString());
}

private double LogEval(Vector<double> v)
{
    double err = 0;
    for (int i = 0; i < 100; i++)
    {
        double y_val = v[0] + v[1] * Math.Exp(v[2] * x[i]);  // model: a + b*exp(c*x)
        err += Math.Pow(y_val - y[i], 2);
    }

    Console.WriteLine(v.ToString() + "  Err = " + err.ToString());

    return err;
}
```

These routines were written as experiments, and include console output to aid debugging and general comprehension. All I can really say in their favor is that they do execute.
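For the original question, the Nelder–Mead route is probably the more practical fit, since it only needs the scalar error value and no derivatives. A minimal sketch of wiring an arbitrary black-box error function into it (here `ComputeError` is a placeholder for your own error calculation; the tolerance, iteration limit, and starting point are arbitrary):

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double;
using MathNet.Numerics.Optimization;

// ComputeError is hypothetical: it should return the overall error
// for a candidate set of dependent-variable values.
var objective = ObjectiveFunction.Value((Vector<double> p) => ComputeError(p.ToArray()));
var solver = new NelderMeadSimplex(convergenceTolerance: 1e-6, maximumIterations: 10000);
var start = new DenseVector(new[] { 1.0, 1.0, 1.0 }); // initial guess for the variables

var result = solver.FindMinimum(objective, start);
Console.WriteLine(result.MinimizingPoint);
```

Note that Nelder–Mead can stall in local minima, so it can be worth re-running from several different starting points and keeping the best result.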

#4

Thanks for the code examples Peter. It has been a great help to see an example.

I’m still in the process of creating adapters to map my values to `Vector<double>`, so I can’t yet give feedback on whether the examples work for my problem. I should have something in the next 24 hours.

Thanks again for your help!