
Madhavan Bomidi (03-28-19, 01:54 PM)
Hi,

I have data arrays for two variables, x and y. The two variables are assumed to be related as y = A * exp(x/B). I want to use Levenberg-Marquardt non-linear least-squares fitting to find the A and B that best fit the data. Can anyone suggest how I can proceed? My intention is to obtain A and B for the best fit.
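In code form, the relationship I want to fit is (the function name is just for illustration):

import numpy as np

def model(x, A, B):
    # y = A * exp(x / B); A and B are the unknown parameters
    return A * np.exp(x / B)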

Look forward to your suggestions and sample code as an example.

Thanks and regards,
Madhavan
William Ray Wing (03-28-19, 04:51 PM)
> On Mar 28, 2019, at 7:54 AM, Madhavan Bomidi <blmadhavan> wrote:
> Hi,
> I have data arrays for two variables, x and y. The two variables are assumed to be related as y = A * exp(x/B). I want to use Levenberg-Marquardt non-linear least-squares fitting to find the A and B that best fit the data. Can anyone suggest how I can proceed? My intention is to obtain A and B for the best fit.


Have you looked at the non-linear least-squares solutions in scipy?
Specifically, curve_fit: a system I’ve had to solve several times in the past uses it, and it works quite well.

import numpy as np
from scipy.optimize import curve_fit

def func2fit(x, a, b, c):
    return a - b * np.exp(-c * x)
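The fit itself is then a single call; a minimal sketch (xdata and ydata here stand in for your measured arrays):

# xdata and ydata stand in for the measured arrays
popt, pcov = curve_fit(func2fit, xdata, ydata)
a, b, c = popt    # the optimized parameter values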

Bill
[..]
Madhavan Bomidi (03-28-19, 06:08 PM)
Hi Bill,

Thanks for your suggestion. Where am I actually calling curve_fit with the defined function func2fit? And don't I need an initial guess for the a, b and c values so that the optimization converges as it iterates over the input x data?

Look forward to your suggestions,
Madhavan
William Ray Wing (03-29-19, 05:26 AM)
Below I’ve included the code I ran, reasonably (I think) commented; note the reference to the example. The data actually came from a pandas data frame that was in turn filled from a 100 MB data file containing lots of other data not needed for this. The fit itself was to a calibration run.

Bill

PS: If you want, I can probably still find a couple of the plots of the raw data and fitted result.
---------------------
import numpy as np
import matplotlib.pyplot as plt
#
# Inverted exponential that asymptotically approaches "a" as x gets large
#
def func2fit(x, a, b, c):
    return a - b * np.exp(-c * x)

# Curve fitting below from:
from scipy.optimize import curve_fit
def fit(xdata, ydata, run_num):
    ll = len(xdata)
    #
    # The next four lines shift and scale the data so that the curve fit routine can
    # do its work without needing to use 8- or 16-byte precision. After fitting, we
    # will scale everything back.
    #
    ltemp = [ydata[i] - ydata[0] for i in range(ll)]
    ytemp = [ltemp[i] * .001 for i in range(ll)]
    ltemp = [xdata[i] - xdata[0] for i in range(ll)]
    xtemp = [ltemp[i] * .001 for i in range(ll)]
    #
    # popt is a list of the three optimized fitting parameters [a, b, c];
    # we are interested in the value of a.
    # cov is the 3 x 3 covariance matrix; the standard deviation (error) of the fit
    # is the square root of the diagonal.
    #
    popt, cov = curve_fit(func2fit, xtemp, ytemp)
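    # Note: no p0 is passed above, so curve_fit starts from its default initial
    # guess of all ones. An explicit guess could be supplied instead, e.g.:
    #     popt, cov = curve_fit(func2fit, xtemp, ytemp, p0=[1.0, 1.0, 1.0])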
    #
    # Here is what the fitted line looks like for plotting
    #
    fitted = [popt[0] - popt[1] * np.exp(-popt[2] * xtemp[i]) for i in range(ll)]
    #
    # And now plot the results to check the fit
    #
    fig1, ax1 = plt.subplots()
    plt.title('Normalized Data ' + str(run_num))
    color_dic = {0: "red", 1: "green", 2: "blue", 3: "red", 4: "green", 5: "blue"}
    ax1.plot(xtemp, ytemp, marker='.', linestyle='none', color=color_dic[run_num])
    ax1.plot(xtemp, fitted, linestyle='-', color=color_dic[run_num])
    plt.savefig('Normalized ' + str(run_num))
    perr = np.sqrt(np.diag(cov))
    return popt, cov, xdata[0], ydata[0], fitted, perr[0]
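The "scale everything back" step mentioned in the comments isn't shown inside fit(). A sketch of how the caller could undo this particular shift-and-scale, assuming xdata and ydata are numpy arrays (my reconstruction, not part of the original code):

# Reconstruction of the scale-back step (not in the original code).
# Given ytemp = (ydata - ydata[0]) * .001 and xtemp = (xdata - xdata[0]) * .001:
popt, cov, x0, y0, fitted, err_a = fit(xdata, ydata, 0)
a, b, c = popt
asymptote = y0 + 1000.0 * a    # fitted asymptote "a" back in original y units
rate = c / 1000.0              # decay rate per original x unit
y_fit = y0 + 1000.0 * (a - b * np.exp(-rate * (xdata - x0)))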
edmondo.giovannozzi (03-29-19, 11:35 AM)
> ltemp = [ydata[i] - ydata[0] for i in range(ll)]
> ytemp = [ltemp[i] * .001 for i in range(ll)]
> ltemp = [xdata[i] - xdata[0] for i in range(ll)]
> xtemp = [ltemp[i] * .001 for i in range(ll)]


Use the vectorization given by numpy:

ytemp = (ydata - ydata[0]) * 0.001
xtemp = (xdata - xdata[0]) * 0.001

.....
fitted = popt[0] - popt[1] * np.exp(-popt[2] * xtemp)

or better

fitted = func2fit(xtemp, *popt)
[..]
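Putting the thread together for the original model y = A * exp(x/B), a self-contained sketch; the synthetic data and starting guesses here are illustrative only:

import numpy as np
from scipy.optimize import curve_fit

def model(x, A, B):
    # the model from the original question: y = A * exp(x / B)
    return A * np.exp(x / B)

# synthetic data for illustration; replace with the real x and y arrays
rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 100)
y = 2.5 * np.exp(x / 3.0) + rng.normal(0.0, 0.05, x.size)

# curve_fit uses Levenberg-Marquardt by default when there are no bounds;
# p0 is a rough starting guess for (A, B)
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])
A, B = popt
perr = np.sqrt(np.diag(pcov))   # 1-sigma uncertainties on A and B

fitted = model(x, *popt)        # vectorized, as suggested above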