- The program is completely rewritten. It was compiled with djgpp, an MS-DOS
port of GNU C. This means that it is a 32-bit protected-mode program and should
run much faster than the previous version. Most importantly, the code looks
more beautiful.
- The previous version analysed only HOF models V, IV, II and I. Model III
was excluded since it uses the same number of parameters as Model IV and was
regarded as less interesting. Model V defines a skewed response, Model IV a
symmetric unimodal response, Model III a monotone response with a plateau, and
Model II a monotone response limited by 0 and M. In the new scheme, Models V
and IV are fitted and tested first. If Model IV cannot be accepted, Model III
is fitted and compared against Model V. So Model V is accepted only if it is
significantly better than both Models IV and III. If either Model IV or Model
III is accepted, it is tested against Model II.
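The models and the selection cascade above can be sketched as follows. This is
a minimal illustration assuming the standard HOF parameterization of Huisman,
Olff & Fresco (1993) with parameters M, a, b, c, d; the function names and the
`accepts` callback are mine, not the program's, and the real program decides
acceptance with deviance-based tests rather than a plug-in predicate.

```python
import math

# Standard HOF parameterization (Huisman, Olff & Fresco 1993); parameter
# names follow that paper, not necessarily this program's internals.

def model_V(x, M, a, b, c, d):
    """Skewed unimodal response: four free shape parameters."""
    return M / (1.0 + math.exp(a + b * x)) / (1.0 + math.exp(c - d * x))

def model_IV(x, M, a, b, c):
    """Symmetric unimodal response: Model V with d = b."""
    return model_V(x, M, a, b, c, b)

def model_III(x, M, a, b, c):
    """Monotone response with a plateau: Model V with d = 0."""
    return model_V(x, M, a, b, c, 0.0)

def model_II(x, M, a, b):
    """Monotone response between 0 and M: the second term is dropped."""
    return M / (1.0 + math.exp(a + b * x))

def select_model(accepts):
    """Selection cascade described above.  accepts(reduced, full) should
    return True when the reduced model is not significantly worse than
    the full one (e.g. by an F-test on the deviances)."""
    if accepts("IV", "V"):
        chosen = "IV"
    elif accepts("III", "V"):   # IV rejected: try the plateau model
        chosen = "III"
    else:
        return "V"              # V is better than both IV and III
    return "II" if accepts("II", chosen) else chosen
```

Note that Models IV and III are only ever compared against Model V, never
against each other, which is why equal parameter counts are no obstacle here.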
- The program uses a Unix-style command line. It prompts only for absolutely
vital information if this is not given on the command line. Some other parameters
can be changed only through command-line switches. You can see the available
options by invoking the program with
- The estimation is based on non-linear minimization, which can fail in two
ways: (1) a floating point error due to an illegal mathematical operation when
the function wanders too far from the solution, or (2) the fitted curve is
trapped in a local minimum (usually a flat line: it is funny to see in how
many ways a horizontal line can be parametrized with these models). I have
tried to parametrize the program so that this would happen rarely. The program
works much better with my test data sets than the previous version. The main
reasons for this are:
- Completely rewritten likelihood functions and their derivatives, with fewer
dirty tricks to avoid overflows.
- Changes in Numerical Recipes routines so that floating point exceptions
are rarer than in earlier versions.
- Better scheme for obtaining starting values.
- Floating point exceptions (e.g. ``divide by zero'') no longer crash the
program. Instead a warning is issued and the computations continue, starting
from the last estimates of the coefficients. In my test data sets, no new
floating point exceptions were observed when the iterations were restarted.
If the reported deviance is of a sensible magnitude, the iterations converged
to something after meeting the floating point exception. However, it is
difficult to say whether this ``something'' is a real optimum, and so it may
be best to be cautious with the results for this species. The warning is not
issued with option
- More flexible input. The program should now accept most CANOCO-formatted
files. Gradient data can contain missing values.
- Presentation graphics using the DISLIN package. In addition to the screen,
graphics can be stored in several file formats. The available formats can be
seen by invoking the program with switch
Version 2.3 (released 16/10/98)
- Added Binomial error. The default is still the old Poisson error.
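For reference, the deviance contributions of the two error models can be
written down from the standard GLM formulas. This is a sketch of the textbook
definitions only; the function names are mine, and the program's internal
likelihood code may differ in its handling of edge cases.

```python
import math

def poisson_deviance(y, mu):
    """Textbook Poisson deviance contribution for one observation with
    count y and fitted mean mu > 0 (the y*log(y/mu) term is 0 at y = 0)."""
    term = y * math.log(y / mu) if y > 0 else 0.0
    return 2.0 * (term - (y - mu))

def binomial_deviance(y, mu, m):
    """Textbook binomial deviance contribution: y successes out of m
    trials, fitted mean mu with 0 < mu < m."""
    d = 0.0
    if y > 0:
        d += y * math.log(y / mu)
    if y < m:
        d += (m - y) * math.log((m - y) / (m - mu))
    return 2.0 * d
```

Both are zero at a perfect fit (y == mu), which is why the reported deviances
remain comparable across the two error models.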
- Numerical Recipes reported some bugs in
lnsrch.c in their
patch release 2.08. These fixes were transferred to the corresponding routines
in HOF. However, this does not change the fitting results much.
- Some internal changes that make the code cleaner and faster.
Version 2.2 (released 30/3/98)
- Added an option to have Chi-squared tests when the data are not overdispersed
(-k) or always (-k given twice) instead of F tests.
- The test procedure in F-tests changed so that the denominator is always taken
from Model V instead of the previous model. Thus the degrees of freedom are
always 1 and n-4 in F-tests.
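The test statistic described above can be sketched directly. This is a minimal
illustration under the stated rule (deviance drop in the numerator, Model V
deviance over n - 4 in the denominator); the function name and arguments are
mine, and it uses SciPy only to look up the F(1, n-4) tail probability.

```python
from scipy.stats import f as f_dist

def hof_f_test(dev_reduced, dev_full, dev_V, n):
    """F-test between nested HOF models: the numerator is the deviance
    drop from the reduced to the full model (1 degree of freedom), and
    the denominator is always the Model V deviance divided by n - 4."""
    F = (dev_reduced - dev_full) / (dev_V / (n - 4))
    p = f_dist.sf(F, 1, n - 4)   # upper tail of the F(1, n-4) distribution
    return F, p
```

Fixing the denominator at Model V keeps the error estimate the same across all
comparisons in the cascade, which is what makes the degrees of freedom constant.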
- Changes in graphics. Curves are drawn using an adaptive plotting algorithm,
so that plotting density changes with curvature. This should give smoother
curves (and slightly smaller plot files). Plot headers now list the options
needed to repeat the analysis.
Version 2.1 (released 5/2/98)
HOF v2 was replaced swiftly with v2.1. The major change is that the program
does not crash with illegal floating point operations, but issues a warning
and tries to rescue the estimation in the next iteration. In my test data sets,
the next iterations indeed are successful, and the program converges to something.
However, it may be best to be cautious with these results. Next species should
be OK anyway. In addition, Numerical Recipes routines (
dfpmin, lnsrch) were
changed so that floating point exceptions are rarer than in v2.