This program allows you to find the optimum spacing of data points to distinguish between two possible models, for example one or two exponentials. It is a very advanced program and can only be used successfully if you know exactly how to define models, data ranges, and weighting factors. You should start by using the models supplied before attempting to supply your own models as a dynamic link library or as ASCII text files.
Before using this program you must know the theory in:
Bardsley et al. (1989) Optimal design for model discrimination using the F test with nonlinear biochemical models. Criteria for choosing the number and spacing of experimental points. J. theor. Biol. 139, 85-102.
Bardsley et al. (1996) Optimal design: A computer program to study the best possible spacing of design points for model discrimination. Computers Chem. 20, 145-157.
The program attempts a number of very difficult minimisations with objective functions defined by complicated integrals or summations, and requiring ill-conditioned root finding. Numerical difficulties will be encountered unless you appreciate how to adjust the various scaling parameters to avoid singularities: keep objective functions of order unity, avoid asymptotes, do not request too much precision, and study the models provided to see how to normalise your own models.
If you encounter failures with root finding or optimisation you will have to alter the tolerance parameters or starting estimates from the default values.
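To make the root-finding step concrete, the sketch below locates Xstart and Xstop for the two-exponential model g2(x) = theta(1)exp(-x) + [1 - theta(1)]exp(-theta(2)x). The theta values, bracket, and tolerance are illustrative assumptions, and scipy.optimize.brentq merely stands in for the C05AZF routine used by the program; since this g2 decays from 1 at x = 0, Xstart is taken where g2 = 0.9 and Xstop where g2 = 0.1.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative fixed parameters for the two-exponential model
theta = (0.5, 5.0)

def g2(x, theta):
    # g2(x) = theta(1)*exp(-x) + [1 - theta(1)]*exp(-theta(2)*x)
    return theta[0] * np.exp(-x) + (1.0 - theta[0]) * np.exp(-theta[1] * x)

def find_x(target, theta, lo=1e-8, hi=50.0, tol=1e-10):
    # brentq plays the role of C05AZF: solve g2(x) - target = 0
    # within a bracket [lo, hi] that straddles the root
    return brentq(lambda x: g2(x, theta) - target, lo, hi, xtol=tol)

# g2 decreases from 1 towards 0, so the limits are reversed relative
# to an increasing saturation curve
Xstart = find_x(0.9, theta)
Xstop = find_x(0.1, theta)
```

If the bracket [lo, hi] does not straddle the target value, brentq fails in the same way C05AZF returns a nonzero IFAIL, which is why poor starting estimates or tolerances must be adjusted as described above.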
theta       Parameter vector that is fixed in the true model
phi         Parameter vector that is estimated given theta
g1(x)       Deficient model with parameter vector phi
g2(x)       Correct model with parameter vector theta
g2(Xstart)  Value of g2 at Xstart (typically 0.1)
g2(Xstop)   Value of g2 at Xstop (typically 0.9)
Xstart      Usually calculated by root finding given theta
Xstop       Usually calculated by root finding given theta
f(x)        Density function for the spread of x values
w(x)        Weighting function
Q(theta)    Minimum with respect to phi of the integral of
            w(x)[g2(x) - g1(x)]^2 f(x) dx divided by the normalising
            integral of f(x) dx between the limits Xstart and Xstop,
            as set by the values selected for g2(Xstart) and g2(Xstop)
Q_ref       Nonzero value of Q that is selected to be held fixed while
            Q is studied as a function of just one selected theta(i),
            or else the convergence of S(n) to Q_ref as a function of n
            is investigated
n           Number of discrete points x(i) spaced according to the
            density function f(x)
S(n)        Sum converging to Q_ref for large n: the minimum with
            respect to phi of the sum from i = 1 to i = n of
            (1/n)*w(x_i)[g2(x_i) - g1(x_i)]^2
R(n)        Modulus of (Q_ref - S(n))/Q_ref, which tends to zero as
            S(n) converges to Q_ref for large n
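The definitions above can be illustrated for the single- versus two-exponential pair, g1(x) = exp(-phi(1)x) and g2(x) = theta(1)exp(-x) + [1 - theta(1)]exp(-theta(2)x). All specific choices below are assumptions for illustration: fixed theta values and limits, w(x) = 1, a uniform density f(x) (so the normalising integral is just Xstop - Xstart), scipy's quad standing in for D01AJF, and a bounded scalar minimiser standing in for LBFGSB.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

theta = (0.5, 5.0)         # illustrative fixed "true" parameters
Xstart, Xstop = 0.05, 3.0  # in the program these come from root finding

def g2(x):
    return theta[0] * np.exp(-x) + (1 - theta[0]) * np.exp(-theta[1] * x)

def g1(x, phi):
    return np.exp(-phi * x)

def Q():
    # Q = min over phi of integral w(x)[g2 - g1]^2 f(x) dx divided by
    # the normalising integral of f(x) dx; with w = 1 and uniform f
    # the normalising integral reduces to Xstop - Xstart
    def obj(phi):
        val, _ = quad(lambda x: (g2(x) - g1(x, phi)) ** 2, Xstart, Xstop)
        return val / (Xstop - Xstart)
    return minimize_scalar(obj, bounds=(0.01, 20.0), method='bounded').fun

def S(n):
    # discrete analogue of Q: n points spread according to f(x),
    # here uniformly because f is taken as uniform
    xs = np.linspace(Xstart, Xstop, n)
    def obj(phi):
        return np.mean((g2(xs) - g1(xs, phi)) ** 2)
    return minimize_scalar(obj, bounds=(0.01, 20.0), method='bounded').fun

Q_ref = Q()
R = lambda n: abs(Q_ref - S(n)) / Q_ref  # R(n) -> 0 as n grows
```

Studying R(n) as n increases is exactly the convergence investigation described for Q_ref above; the program does the same thing but with the user's choice of model, weighting, and density function.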
Usually the default values of est(i) = 1 will be sufficient if the parameters theta are of order unity. In some circumstances where the optimisation proves difficult, say when the theta values are not of order unity, it may be necessary to adjust the starting estimates for phi. The starting estimates are used as a diagonal matrix to scale parameters phi into internal parameters p, i.e., phi(i) = est(i)*p(i).
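The scaling just described can be written out directly. The est values and the bounds A, B below are illustrative assumptions; the point is only that the internal parameters p(i) are of order unity when est(i) is chosen close to the expected magnitude of phi(i).

```python
import numpy as np

est = np.array([10.0, 0.1])  # illustrative starting estimates for phi
A, B = 0.01, 100.0           # illustrative bounds on the internal p(i)

def to_internal(phi):
    # p(i) = phi(i)/est(i): the diagonal scaling described above
    return phi / est

def to_external(p):
    # phi(i) = est(i)*p(i)
    return est * p

# starting the optimiser at phi = est gives p(i) = 1, of order unity
p0 = to_internal(est)
```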
method      C05AZF, D01AJF, QNFIT1/LBFGSB (in some versions E04
            routines can be used instead of LBFGSB)
model       Model equations used for g1(x) and g2(x) (in some versions
            models can be supplied as ASCII text files)
nphi        Number of phi parameters in g1(x)
ntheta      Number of theta parameters in g2(x)
A,B         Lower and upper limits for the automatically scaled
            internal parameters p(i), where phi(i) = est(i)*p(i) and
            A < p(i) < B
epsabs      Absolute error for integration (D01AJF)
epsrel      Relative error for integration (D01AJF)
tolx        Tolerance factor for root finding (C05AZF)
scale       Objective function scaling factor (QNFIT1/LBFGSB); this may
            need to be adjusted so that the internal objective function
            scale*Q is of order unity
S0,S1       Coefficients used to define the variance V in the mixed
            error type V(y) = S0^2 + (S1*y)^2
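The mixed error type defined by S0 and S1 combines a constant and a relative component. The sketch below writes it out; the S0 and S1 values are illustrative, and taking the weighting function as the reciprocal variance is a common convention assumed here, not something the text above prescribes.

```python
def variance(y, S0=0.01, S1=0.05):
    # mixed error type: V(y) = S0^2 + (S1*y)^2
    # S0 sets a constant floor; S1 adds error proportional to y
    return S0**2 + (S1 * y)**2

def weight(y, S0=0.01, S1=0.05):
    # one common choice (an assumption here): w = 1/V, so points with
    # larger variance contribute less to the objective function
    return 1.0 / variance(y, S0, S1)
```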
Model 1:
g2(x) = [x + theta(1)x^2]/[1 + 2x + theta(1)x^2]
g1(x) = phi(1)x/[1 + phi(1)x]
Model 2:
g2(x) = [x + 2theta(1)x^2 + theta(1)theta(2)x^3]/[1 + 3x + 3theta(1)x^2 + theta(1)theta(2)x^3]
g1(x) = [phi(1)x + phi(1)phi(2)x^2]/[1 + 2phi(1)x + phi(1)phi(2)x^2]
Model 3:
g2(x) = theta(1)x/(1 + x) + [1 - theta(1)]theta(2)x/[1 + theta(2)x]
g1(x) = phi(1)x/[1 + phi(1)x]
Model 4:
g2(x) = theta(1)x/(1 + x) + theta(2)theta(3)x/[1 + theta(3)x] + alpha*theta(4)x/[1 + theta(4)x],
where alpha = 1 - theta(1) - theta(2)
g1(x) = phi(1)x/(1 + x) + [1 - phi(1)]phi(2)x/[1 + phi(2)x]
Model 5:
g2(x) = theta(1)exp(-x) + [1 - theta(1)]exp(-theta(2)x)
g1(x) = exp(-phi(1)x)
Model 6:
g2(x) = theta(1)exp(-x) + theta(2)exp(-theta(3)x) + [1 - theta(1) - theta(2)]exp(-theta(4)x)
g1(x) = phi(1)exp(-x) + [1 - phi(1)]exp(-phi(2)x)
Model 7:
g2(x) = 1 - [theta(1)exp(-x) - exp(-theta(1)x)]/[theta(1) - 1]
g1(x) = 1 - exp(-phi(1)x)
Model 8:
g2(x) = x^theta(1)/[1 + x^theta(1)]
g1(x) = phi(1)x/[1 + phi(1)x]
Models are scaled so that 0 <= g1(x) <= 1 and 0 <= g2(x) <= 1. Dimensionless parameters are used, e.g. theta(1) = K2/K1 in models 1 and 2, and theta(2) = K3/K1 in model 2. Alternatively, think of x as K1*x, or set K1 = 1, etc. Computational singularities (IFAIL not equal to 0) can be avoided by changing EPSABS, EPSREL, TOLX or SCALE.
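The normalisation described above can be checked numerically before supplying a model of your own. This sketch evaluates model 1, g2(x) = [x + theta(1)x^2]/[1 + 2x + theta(1)x^2] and g1(x) = phi(1)x/[1 + phi(1)x], on a grid with illustrative dimensionless parameters and confirms both functions stay in [0, 1].

```python
import numpy as np

theta1, phi1 = 2.0, 1.5  # illustrative dimensionless parameters

def g2(x):
    # model 1: numerator never exceeds denominator for x >= 0,
    # so g2 is confined to [0, 1]
    return (x + theta1 * x**2) / (1 + 2 * x + theta1 * x**2)

def g1(x):
    # deficient model: a simple rectangular hyperbola
    return phi1 * x / (1 + phi1 * x)

x = np.linspace(0.0, 100.0, 10001)
ok = np.all((g2(x) >= 0) & (g2(x) <= 1) & (g1(x) >= 0) & (g1(x) <= 1))
```

Running a check like this on a new model is a quick way to catch normalisation mistakes before the program's integrations and minimisations hit the singularities warned about above.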