Non-linear regression least squares curve fitting with constraints/bounds



Let me start off by saying that I’m not an expert in mathematics, but I have used curve-fitting algorithms in the past. I need to translate Python code (which uses SciPy) to C#. I need to fit a (co)sine function with 4 parameters over 30 data points, with several constraints in place. In Python one would simply write:

parameters, pcovariance = curve_fit(cos_function, x_values, y_values, initial_parameters, bounds=bounds, max_nfev=100)

The function is: A/2.0*(1.0+np.cos(2.0*np.pi*x*B+2.0*C*np.pi)+D)
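For reference, here is a minimal, self-contained sketch of the Python side being translated. The model is the cosine function from the question (with the multiplication asterisks restored); the data, initial guess, and bounds are illustrative placeholders, since the actual 30 measured points are not given.

```python
import numpy as np
from scipy.optimize import curve_fit

# Model from the question; A, B, C, D are the 4 parameters to fit.
def cos_function(x, A, B, C, D):
    return A / 2.0 * (1.0 + np.cos(2.0 * np.pi * x * B + 2.0 * C * np.pi) + D)

# Synthetic data standing in for the 30 measured points (illustrative only).
x_values = np.linspace(0.0, 3.0, 30)
true_params = (2.0, 1.0, 0.25, 0.1)
y_values = cos_function(x_values, *true_params)

# Initial guess and per-parameter (lower, upper) bounds -- placeholder values.
initial_parameters = [1.5, 0.8, 0.2, 0.0]
bounds = ([0.0, 0.1, -1.0, -1.0], [10.0, 5.0, 1.0, 1.0])

parameters, pcovariance = curve_fit(
    cos_function, x_values, y_values,
    p0=initial_parameters, bounds=bounds, max_nfev=100)
```

With bounds supplied, SciPy switches from Levenberg-Marquardt to a trust-region method internally, which is why bounded fitting needs a different minimizer under the hood.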

In Math.NET there is the Fit.Curve method, but it seems to support functions with at most 3 parameters and, more importantly, I can’t set constraints/bounds. I know that Fit.Curve uses a minimizer behind the scenes, and this is where I’m totally lost.

I was looking into the BfgsBMinimizer, which accepts bounds, but this minimizer requires an ObjectiveFunction with the gradient. Our in-house C# Levenberg-Marquardt algorithm also requires the gradient (derivatives with respect to each parameter), which is supplied through a method with this signature:

public Vector GetGradient(double x, Vector parameters)

But the minimizers expect a function with a signature like this:

public Vector GetGradient(Vector parameters)

I don’t understand how I can calculate the derivatives without the ‘x’ value. More generally, I don’t understand the relationship between the minimizers and curve fitting. To me, a curve-fitting algorithm accepts a function, data points, an initial guess and, optionally, bounds. From what I have seen, the minimizers are only one part of the curve-fitting process, the part that Math.NET lets you customize easily. Again, this is not my field of expertise.

Any help would be appreciated, since other libraries like NMath and Extreme Optimization use almost identical approaches to curve fitting.


(Peter Vanderwaart) #2

BfgsBMinimizer is a general-purpose minimizer, not just a curve fitter. To use it for your purpose, you have to create a function that computes the error of a candidate fit. This is generally the sum of squared errors between the sample data and the model estimates.
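Sketched in Python (the same idea carries over to the C# ObjectiveFunction): the data points are captured in a closure, so the resulting objective depends only on the parameter vector, which is exactly the signature a general-purpose minimizer expects. The function names here are illustrative, not part of any library API.

```python
import numpy as np

# Build F(parameters) = sum of squared errors over the fixed data set.
# x_values and y_values are captured by the closure, so the minimizer-facing
# function takes only the parameter vector -- no 'x' argument needed.
def make_sse_objective(model, x_values, y_values):
    def objective(parameters):
        residuals = model(x_values, *parameters) - y_values
        return np.sum(residuals ** 2)
    return objective

# The cosine model from the question (asterisks restored).
def cos_function(x, A, B, C, D):
    return A / 2.0 * (1.0 + np.cos(2.0 * np.pi * x * B + 2.0 * C * np.pi) + D)

x = np.linspace(0.0, 3.0, 30)
y = cos_function(x, 2.0, 1.0, 0.25, 0.1)   # synthetic "measured" data
f = make_sse_objective(cos_function, x, y)
# f([A, B, C, D]) is zero at the true parameters and positive elsewhere.
```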

The “gradient” referred to is the gradient function, not the numeric value of the gradient at a particular point. And it is the gradient of that error function with respect to the parameters: because the error already sums over all the data points, the x values are baked into the objective, which is why the minimizer’s gradient takes only the parameter vector.
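This also answers how the per-point `GetGradient(double x, Vector parameters)` relates to the `GetGradient(Vector parameters)` the minimizer wants: by the chain rule, the gradient of the sum-squared-error is the sum of 2 × residual × (per-point model gradient) over the data. A hedged sketch (helper names are illustrative), demonstrated with a simple line model whose per-point gradient is known exactly:

```python
import numpy as np

# Turn a per-point gradient g(x, p) = d model(x, p)/dp into the gradient of
# the sum-squared-error, which depends only on p: the sum runs over the
# fixed data points captured by the closure.
def make_sse_gradient(model, model_gradient, x_values, y_values):
    def gradient(parameters):
        total = np.zeros(len(parameters))
        for x, y in zip(x_values, y_values):
            residual = model(x, *parameters) - y
            # Chain rule: d/dp (residual^2) = 2 * residual * d model/dp
            total += 2.0 * residual * model_gradient(x, parameters)
        return total
    return gradient

# Simple demo model: line(x) = a*x + b, with gradient [x, 1] per point.
def line(x, a, b):
    return a * x + b

def line_gradient(x, params):
    return np.array([x, 1.0])

xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 3.0, 5.0])          # exactly a = 2, b = 1
grad = make_sse_gradient(line, line_gradient, xs, ys)
# The gradient vanishes at the least-squares optimum (a, b) = (2, 1).
```

The same summation is what you would write inside a C# delegate to adapt the in-house `GetGradient(double x, Vector parameters)` for BfgsBMinimizer.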