I need help with Optimizer to Find Minimum Error

Hello guys,

I have plant data and I am trying to fit it with a second-order differential equation, as described by APMonitor, by finding the minimum of the sum of relative error (modified from the sum of absolute error to get a 0–100% error value).
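For reference, that objective can be sketched in plain Python (this is an illustration, not Math.NET code; the function name and sample values are made up). It also shows where the Infinity/NaN values typically come from: each term divides by a measured value, so measurements at or near zero blow the sum up.

```python
def sum_relative_error(measured, predicted):
    # Sum of relative error between plant measurements and model output.
    total = 0.0
    for y, yhat in zip(measured, predicted):
        total += abs(y - yhat) / abs(y)  # blows up when y is at or near 0
    return total

err = sum_relative_error([10.0, 20.0], [9.0, 22.0])  # 0.1 + 0.1 = 0.2
```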

I got some insight by following https://discuss.mathdotnet.com/t/how-to-use-the-optimization-class/545, but I am having trouble with divergent results: I get either NaN or Infinity for the sum of relative error. I tried to work around it by overriding any sum-of-relative-error value that came out as NaN or Infinity with the last good value.
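As an aside, an alternative to reusing the last good value is to map NaN/Infinity to a large finite penalty, so the minimizer is pushed away from the bad region instead of seeing a flat (repeated) objective. A minimal Python sketch of that idea (the penalty constant 1e10 is an arbitrary choice):

```python
import math

def safe_objective(raw_error):
    # Replace non-finite objective values with a large finite penalty.
    # Unlike "repeat the last good value", this still gives the line
    # search a slope to back away from.
    if math.isnan(raw_error) or math.isinf(raw_error):
        return 1e10
    return raw_error
```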

But then I get an exception in my application:

System.OverflowException: ‘Value was either too large or too small for a Decimal.’

I still don’t understand how to get a convergent result properly. Please guide me in solving this.

If you need the code, I will edit my post. :slight_smile:


  1. I use ForwardDifferenceGradientObjectiveFunction to get the gradient
  2. I use ConjugateGradientMinimizer to evaluate the gradient
  3. After that, I call FindMinimum
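For anyone unfamiliar with step 1, a forward-difference gradient costs one extra objective evaluation per parameter. A pure-Python sketch of the idea (not the Math.NET implementation; the function name and step size here are made up):

```python
def forward_difference_gradient(f, x, h=1e-6):
    # Approximate each partial derivative with one extra evaluation:
    # df/dx_i ≈ (f(x + h*e_i) - f(x)) / h
    fx = f(x)
    grad = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        grad.append((f(xp) - fx) / h)
    return grad

# On f(x, y) = x^2 + 3y the true gradient at (1, 2) is (2, 3).
g = forward_difference_gradient(lambda v: v[0] ** 2 + 3 * v[1], [1.0, 2.0])
```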

Thanks in advance
Andre Mailoa
Process Control Engineer

As noted towards the end of the link in your second paragraph, the choice of initial guesses is critical. It may be that your function will only converge with very good guesses. Also, the conjugate gradient method assumes the surface is smooth, i.e. can be approximated by a polynomial.

I suggest trying to prove the code is correct by using it to find the parameters of a simple function. Once you know the code is OK, you can work on adjusting the input parameters of the hard function.
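For example, the whole pipeline can be checked on a one-parameter function with a known minimum before moving to the ODE fit. A Python sketch of that sanity check, using a hand-rolled forward-difference gradient and plain gradient descent (the step size 0.1 and iteration count are arbitrary choices):

```python
# Sanity check: minimize (p - 5)^2, whose minimum is known to be at p = 5.
# If the code cannot recover this, the problem is the code, not the ODE fit.
def f(p):
    return (p - 5.0) ** 2

p, step = 0.0, 0.1
for _ in range(200):
    grad = (f(p + 1e-6) - f(p)) / 1e-6   # forward-difference gradient
    p -= step * grad
# p should now be very close to 5.0
```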

Thank you very much for the answer; I will try your suggestion. It will be a good starting point for me to understand the algorithm.

While waiting for answers,

I tried changing the algorithm to BfgsMinimizer, and it works better for me once I add the workaround for when the error reaches NaN, Infinity, or -Infinity.

After that, I tried to fit data for which I already know the parameters.

But I don’t understand why ForwardDifferenceGradientObjectiveFunction, which has parameter boundaries (min/max of each parameter), still tries parameter values outside those boundaries. I printed every error evaluation to the console, and I can see the algorithm trying values outside the boundary, as shown in this picture: the value suddenly jumped from 47 to -133113308 (for the first parameter I set min=0.1 and max=100.0), even though the correct parameter is around 50. Do you have any explanation for this? And what can I do to prevent it? Thanks in advance :slight_smile:
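(For context: if the minimizer itself is unbounded, bounds that are only checked inside the objective function cannot stop it from stepping outside the box. One general workaround, independent of Math.NET, is to reparameterize through a sigmoid so that every internal value the optimizer tries maps into [min, max]. A Python sketch of the idea:)

```python
import math

def to_bounded(u, lo, hi):
    # Map an unbounded internal variable u into [lo, hi] via a sigmoid,
    # so the optimizer can roam freely while the model only ever sees
    # legal parameter values.
    return lo + (hi - lo) / (1.0 + math.exp(-u))

# Even large internal steps stay inside the box [0.1, 100.0]:
p = to_bounded(-50.0, 0.1, 100.0)   # pinned near the lower bound, ~0.1
q = to_bounded(0.0, 0.1, 100.0)     # midpoint of the box, 50.05
```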


I can’t speak to the programming because I don’t know what’s under Math.NET’s covers, and I can only speak to the mathematics in a general way. My only real exposure to conjugate gradient methods was long ago, and I never did have a clear idea of how they worked.

However, step-wise optimization methods have a problem when the gradients get close to zero. If the gradient in a particular direction is very close to zero, then even a small delta can cause a big leap. (Think of Newton’s method, where the gradient is in the denominator.)
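That denominator effect is easy to see numerically; a tiny Python illustration (the values are chosen arbitrarily):

```python
# Newton's method step: x_next = x - f(x) / f'(x). When f'(x) is close
# to zero, the step length explodes even though f(x) itself is small.
def newton_step(fx, dfx):
    return fx / dfx

small = newton_step(0.01, 1.0)    # modest step, 0.01
huge = newton_step(0.01, 1e-9)    # enormous leap, on the order of 1e7
```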


Since you have to use an approximation for the gradient, it might be better to use a method that doesn’t rely on gradients at all. The NelderMeadSimplex method is very simple to implement and probably worth a try. An example of how to set it up can be found here: https://github.com/mathnet/mathnet-numerics/blob/master/src/Numerics.Tests/OptimizationTests/NelderMeadSimplexTests.cs
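For readers who want to see the algorithm itself, here is a compact pure-Python sketch of Nelder-Mead with the standard reflection/expansion/contraction/shrink moves (an illustration of the method, not the Math.NET implementation). Because it only compares objective values, a gradient that blows up is never an issue:

```python
def nelder_mead(f, x0, step=0.5, iters=200):
    # Build the initial simplex: x0 plus one offset vertex per dimension.
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)                      # best first, worst last
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2 * centroid[i] - worst[i] for i in range(n)]
        if f(refl) < f(best):
            # Reflection was great: try stepping twice as far (expansion).
            exp = [3 * centroid[i] - 2 * worst[i] for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl                   # accept the reflection
        else:
            # Contract toward the worst vertex, or shrink the whole simplex.
            contr = [0.5 * (centroid[i] + worst[i]) for i in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:
                simplex = [best] + [[0.5 * (p[i] + best[i]) for i in range(n)]
                                    for p in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]

# Recovers the minimum of a simple quadratic at (3, -1):
xmin = nelder_mead(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
```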


Thank you for your response and suggestion; I will try the algorithm and post an update.