Exponential Fit


(J Wallace Dev) #1

Hi There

I’ve recently used Fit.Polynomial to calculate polynomial curves, and this worked really well.

I’m just wondering if there is something similar to calculate an Exponential fit?

I.e. fitting Y = Ae^(rX) to a given set of x,y points?

Many Thanks


(Christoph Rüegg) #2

You can, by transforming it, similar to Linearizing non-linear models by transformation. Something along the lines of the following should work:

double[] Exponential(double[] x, double[] y,
    DirectRegressionMethod method = DirectRegressionMethod.QR)
{
    // fit ln y as a linear combination of {1, x}, then undo the transform
    double[] y_hat = Generate.Map(y, Math.Log);
    double[] p_hat = Fit.LinearCombination(x, y_hat, method, t => 1.0, t => t);
    return new[] { Math.Exp(p_hat[0]), p_hat[1] }; // A = exp(intercept), r = slope
}
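
For reference, the transformation works like this: taking logarithms of Y = A*e^(r*X) gives ln Y = ln A + r*X, a straight line in x with intercept ln A and slope r. That is why the helper fits ln y against {1, x} and exponentiates p_hat[0] at the end. Note that this requires all y values to be positive, and that least squares on the log scale weights the points somewhat differently than least squares on the original scale.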

Example usage:

double[] x = new[] { 1.0, 2.0, 3.0 };
double[] y = new[] { 2.0, 4.1, 7.9 };
double[] p = Exponential(x, y); // a=1.017, r=0.687
double[] yh = Generate.Map(x, k => p[0]*Math.Exp(p[1]*k)); // 2.02, 4.02, 7.98

(J Wallace Dev) #3

Thank you Christoph, I will give that a go.


(J Wallace Dev) #4

Yes that is working, thank you.

Just one more related question.

If the original data set has quite a large gap in the x values, that section of the resulting line is very straight (as you would imagine).

Have you any ideas or tips for extending the solution above so that the resulting line follows a smooth curve from beginning to end?

I’m thinking of some kind of interpolation, perhaps? But I’m not sure how to fit it in with the solution.

Many Thanks


(J Wallace Dev) #5

I managed to solve the above issue by putting the results of the exponential through an IInterpolation.
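
In case it helps others, a minimal sketch of that idea (assuming the Exponential helper from post #2; CubicSpline in MathNet.Numerics.Interpolation is one IInterpolation implementation):

double[] p = Exponential(x, y);
// evaluate the fitted exponential at the original sample points
double[] yFit = Generate.Map(x, t => p[0] * Math.Exp(p[1] * t));
// build a spline through the fitted values (any IInterpolation would do)
IInterpolation spline = CubicSpline.InterpolateNatural(x, yFit);
// sample on a dense grid so the plotted line stays smooth across gaps in x
double[] xDense = Generate.LinearSpaced(200, x[0], x[x.Length - 1]);
double[] yDense = Generate.Map(xDense, spline.Interpolate);

Since the fitted exponential is a closed-form function, an alternative is to skip the spline and evaluate p[0]*Math.Exp(p[1]*t) directly on the dense grid; the interpolation route is mainly convenient when only the fitted point values are kept.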


#6

Hi Christoph,
In my case I have to fit (x, y) data to y = a + b*Exp(c*x).
So far I have not been able to transform it in a way that can be used. Could you please help me?

Thank you in advance,
Alexander


(Peter Vanderwaart) #7

Alexander,

I took this as a challenge. My program is below. I tried to use as many Math.NET functions as possible. The idea is to do a non-linear regression using a Math.NET minimization function. I believe the program is correct and works properly, but I’m not so sure it’s useful. Your equation is difficult to model: the parameters b and c both make the function grow, with b scaling it linearly and c driving the non-linear (curved) growth. Unless the x values cover a large enough range for the data to show a distinct curve, it’s hard to separate the two effects. If the plot of the data is fairly straight, it only takes a little noise for the program to return values quite unlike the true b and c.
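
In symbols, the objective is SSE(a, b, c) = Σᵢ dᵢ² with dᵢ = a + b·e^(c·xᵢ) − yᵢ, and the gradient components are ∂SSE/∂a = 2 Σᵢ dᵢ, ∂SSE/∂b = 2 Σᵢ dᵢ·e^(c·xᵢ), and ∂SSE/∂c = 2 Σᵢ dᵢ·b·xᵢ·e^(c·xᵢ); these are exactly the g0, g1 and g2 functions in the code below.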

            // Assumes: using MathNet.Numerics.LinearAlgebra; using MathNet.Numerics.LinearAlgebra.Double;
            // using MathNet.Numerics.Optimization;
            Random RanGen = new Random();
            Vector x = new DenseVector(100);
            Vector y = new DenseVector(100);

            // fit exponential expression with three parameters
            double a = 50;
            double b = 0.5;
            double c = 0.1;
            // create data set
            for (int i = 0; i < 100; i++) x[i] = 1.01 + 2.0 * Convert.ToDouble(i) / 99.0; // values span 1.01 to 3.01
            for (int i = 0; i < 100; i++)
            {
                double y_val = a + b * Math.Exp(c * x[i]);
                y[i] = y_val + 0.1 * RanGen.NextDouble() * y_val;  // add error term scaled to y-value
            }

            // find a starting point for the search
            int n = x.MaximumIndex();
            double a_est = 0.5 * y.Minimum();   // estimate for a: half the minimum y, so y - a_est stays positive below
            double c_est = Math.Log((y[n] - a_est) / (y[0] - a_est)) / (x[n] - x[0]); // estimate for c: evaluate the model at i = 0 and i = n, subtract a_est, take the ratio and solve for c
            double b_est = (y[n] - a_est) / Math.Exp(c_est * x[n]); // estimate for b: evaluate at i = n and solve for b (using the estimates, not the true a and c)
            var init_val = new DenseVector(new[] { a_est, b_est, c_est });

            // create objective function
            var f0 = new Func<Vector<double>, Vector<double>>((p) => x.Map(z => p[0] + p[1] * Math.Exp(p[2] * z)));  // function to create vector of estimated y values at point p
            var f1 = new Func<Vector<double>, double>((p) => MathNet.Numerics.Distance.SSD(f0(p), y));                 // objective function = sum squared error at point p

            // create gradient = vector of partial derivatives evaluated at point p.
            // y_diff creates a vector of differences between modeled y and actual y; it simplifies the following statements.
            var y_diff = new Func<Vector<double>, Vector<double>>(p => f0(p) - y);
            var g0 = new Func<Vector<double>, double>((p) => (2.0 * y_diff(p)).Sum());                                  // partial derivative w.r.t. a
            var g1 = new Func<Vector<double>, double>((p) => ((2.0 * y_diff(p)) * x.Map(z => Math.Exp(p[2] * z))));     // partial derivative w.r.t. b (dot product)
            var g2 = new Func<Vector<double>, double>((p) => ((2.0 * y_diff(p)) * x.Map(z => p[1] * z * Math.Exp(p[2] * z))));   // partial derivative w.r.t. c (dot product)
            var g = new Func<Vector<double>, Vector<double>>((p) => new DenseVector(new[] { g0(p), g1(p), g2(p) }));   // gradient vector

            var obj = ObjectiveFunction.Gradient(f1, g);
            var solver = new BfgsMinimizer(1e-9, 1e-9, 1e-9, 1000);
            var result = solver.FindMinimum(obj, init_val);


            Console.WriteLine("# Iterations = "+result.Iterations);

            Console.WriteLine("Solution = "+result.MinimizingPoint);

            Console.WriteLine("Gradient at solution = "+result.FunctionInfoAtMinimum.Gradient);

Note: Edited to correct expression for g2
Note: Another edit. Starting value was not set correctly


(Peter Vanderwaart) #8

I know the original post was quite a while ago, but curiosity got the best of me… Another approach to Alexander’s question.

Note that if c is known, the values for a & b can be determined by linear regression. Therefore, this could be attacked as a minimization problem in the single variable c. I wrote a program (below) to test whether this would really work. The coding is a bit more elaborate than it needs to be, but that was helpful to me for clarity. I also changed from double[] to vectors in the middle, and there is still at least one place where it could be done better. But it does seem to work about as well as the program above, and there was no need to do any calculus for gradients.
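
In symbols: for a fixed c the model y = a + b·e^(c·x) is linear in (a, b), so an ordinary regression of y on e^(c·x) with an intercept yields estimates a(c) and b(c), and the whole fit reduces to minimizing the one-variable function SSE(c) = Σᵢ (a(c) + b(c)·e^(c·xᵢ) − yᵢ)². That is what the cSolve method and the golden-section search in the code below implement.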

As I indicated above, I don’t think least squares is a good approach for fitting this function. It seems a bit unstable with my test data, which has a lot of observations (100) and a moderate noise level.

        // Assumes: using MathNet.Numerics.LinearAlgebra; using MathNet.Numerics.LinearAlgebra.Double;
        // using MathNet.Numerics.LinearRegression; using MathNet.Numerics.Optimization;
        private void btnLogFit5_Click(object sender, EventArgs e)
        {
            Random RanGen = new Random();
            int n = 100;  // number of observations
            Vector<double> x = new DenseVector(new double[n]);
            Vector<double> y = new DenseVector(new double[n]);

            // fit exponential expression with three parameters
            double a = 100.0;
            double b = 0.5;
            double c = 0.05;
            // create data set
            for (int i = 0; i < n; i++) x[i] = 10 + Convert.ToDouble(i) * 90.0 / 99.0; // values span 10 to 100
            for (int i = 0; i < n; i++)
            {
                double y_val = a + b * Math.Exp(c * x[i]);
                y[i] = y_val + 0.1 * RanGen.NextDouble() * y_val;  // add up to 10% error scaled to the y-value
            }

            LogLin ll = new LogLin(x, y); // create object with data

            var f0 = new Func<double, Vector<double>>((c0) => ll.cSolve(c0)); // finds a and b for any given c
            var f1 = new Func<Vector<double>, Vector<double>>((p) => x.Map(z => p[0] + p[1] * Math.Exp(p[2] * z)));  // vector of estimated y values at point p
            var f2 = new Func<Vector<double>, double>((p) => MathNet.Numerics.Distance.SSD(f1(p), y));    // sum squared error at point p

            var f = new Func<double, double>((c0) => f2(f0(c0)));  // objective in the single variable c0

            Console.WriteLine("Graph of Objective function");
            double c_min = 1000000;  // best c0 found by the grid search below
            double f_min = 1000000;  // objective value at c_min
            for (double c0 = 0.01; c0 < 0.15; c0 += 0.01)
            {
                double f_est = f(c0);
                if (f_est < f_min)
                {
                    c_min = c0;
                    f_min = f_est;
                }
                Console.WriteLine(c0.ToString() + ",  " + f_est.ToString());
            }

            // find best c0
            var obj = ObjectiveFunction.ScalarValue(f);
            var r1 = GoldenSectionMinimizer.Minimum(obj, Math.Max(c_min - 0.02,0.001), c_min + 0.02);

            Console.WriteLine("Iterations: " + r1.Iterations.ToString());
            Console.WriteLine("Solution: "+f0(r1.MinimizingPoint).ToString());

        }

        class LogLin
        {
            Vector<double> x;
            Vector<double> y;

            public LogLin(Vector<double> x_in, Vector<double> y_in)
            {
                x = x_in;
                y = y_in;
            }

            public Vector<double> cSolve(double c)
            {
                double[][] mx = new double[x.Count][]; // design matrix: one column, exp(c*x)
                double[] rr; // regression result
                Vector<double> r = new DenseVector(new double[3]); // return value: {a, b, c}

                for (int i = 0; i < x.Count; i++)
                {
                    mx[i] = new double[1];
                    mx[i][0] = Math.Exp(c * x[i]);
                }

                rr = MultipleRegression.NormalEquations(mx, y.ToArray(), intercept: true);
                r[0] = rr[0]; // a (intercept)
                r[1] = rr[1]; // b (coefficient of exp(c*x))
                r[2] = c;
                return r;
            }
        }