Hello,

I am trying to use the C# Math.NET Numerics curve-fitting capabilities to fit data points to a sine curve.

The sine curve should be of the form:

f(x) = a * sin(b * (x+c)) + d

The amplitude of the sine wave (a) is what I am actually trying to determine.
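(If it helps to state the algebra: by the angle-sum identity, I believe the model can be rewritten as

f(x) = A * sin(b*x) + B * cos(b*x) + d, where A = a * cos(b*c) and B = a * sin(b*c)

so the amplitude is a = sqrt(A^2 + B^2), and the model is linear in its parameters only once the frequency b is fixed.)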

I think my case is very simple. I have nine values for “f(x)” (aka “yValues”), and my “xValues” are simply 0, 1, 2, 3, 4…

I followed an example I found on Stack Overflow to come up with the following code, but I am not getting a correct result for the amplitude. I’m pretty sure I need to tell the Math.NET curve fitter (somehow) that I am fitting a sine wave rather than some other kind of curve, but I am not sure how to do this.

Does anyone have some insight into what I am doing wrong?

Regards,

Ben

C# Code:

```
// Return the amplitude value from the fitted sine curve.
private double GetCurveFitAmplitude(List<double> yValues)
{
    // The x-values of the sine curve simply increment from zero: 0, 1, 2, 3...
    List<double> xValues = new List<double>();
    for (int i = 0; i < yValues.Count; ++i)
    {
        xValues.Add((double)i);
    }

    var X = DenseMatrix.OfColumns(new[]
    {
        // Weighting factor - use a "neutral" 1.0 for every array entry.
        new DenseVector(Enumerable.Repeat(1.0, yValues.Count).ToArray()),
        // X-values 0.0 through 8.0, incrementing by 1.
        new DenseVector(xValues.ToArray())
    });

    // Y-values are from the passed list.
    var yData = new DenseVector(yValues.ToArray());

    // Fit the curve.
    var p = X.QR().Solve(yData);

    // The first member of the returned vector is the sine wave amplitude.
    return p[0];
}
```
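If the frequency b were known in advance, I believe the model would become linear in the remaining parameters (A = a * cos(b*c), B = a * sin(b*c), and d), so a least-squares solve like the one above could work with sine and cosine basis columns instead of a plain x column. This is only an untested sketch (and in my case b is not actually known); the method name and the assumption of a known b are hypothetical:

```
// Hypothetical sketch: fit f(x) = A*sin(b*x) + B*cos(b*x) + d by linear
// least squares, assuming the frequency b is already known.
private double GetSineAmplitudeKnownFrequency(List<double> yValues, double b)
{
    var xValues = Enumerable.Range(0, yValues.Count)
                            .Select(i => (double)i)
                            .ToArray();

    var X = DenseMatrix.OfColumns(new[]
    {
        // Constant column for the vertical offset d.
        new DenseVector(Enumerable.Repeat(1.0, yValues.Count).ToArray()),
        // Basis columns sin(b*x) and cos(b*x).
        new DenseVector(xValues.Select(x => Math.Sin(b * x)).ToArray()),
        new DenseVector(xValues.Select(x => Math.Cos(b * x)).ToArray())
    });

    var p = X.QR().Solve(new DenseVector(yValues.ToArray()));

    // p[0] = d, p[1] = A, p[2] = B; amplitude a = sqrt(A^2 + B^2).
    return Math.Sqrt(p[1] * p[1] + p[2] * p[2]);
}
```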