[Diy_efi] RE: 60-2 Toothwheel - Kalman Filter or ( EKF )

Bernd Felsche
Tue Apr 10 02:25:39 UTC 2007


On Sunday 08 April 2007 03:18, Bruce A Bowling wrote:
> >So how much of a difference do and can the "esoterics" like Kalman
> >filtering make to the operation of the engine in the real world?

> I think I can help lend an answer to this, based on observation
> and measurement. First, I will set aside the Kalman name and
> esoterics (since that is just an implementation scheme) and work
> more fundamentally on the general question of whether a higher
> order correction helps compared to simple interpolation of crank
> location pulse times based on two events.

The word is _extrapolation_. You can only interpolate between known
points. :-)

> First, a little history, based on MS experience, particularly
> MS-II. The ignition operation in MS-II is pretty much the same as
> you outlined earlier, utilizing input-capture and output-compare
> hardware based on a 16-bit free-running timer. In order to
> maintain high accuracy for high tooth count wheels the decision
> was made to go with a 1 usec timer tick.  Of course, by doing
> this, timer rollover becomes something to keep track of, otherwise
> there is insufficient cranking time available. Note that the 1
> usec is a bit arbitrary; slower tick rates will also work. Using 1
> usec kept some of the conversion math simpler, and it allowed us
> to keep using integer arithmetic.

As a side issue, although desirable, there's no need to maintain
"human units" inside the machine, especially once a timebase has
been settled. Obviously that requires good documentation, and
subsequent programmers need to be able to "think the machine".
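
On the rollover bookkeeping: the usual trick is to widen the 16-bit
counter with an overflow tally. A minimal sketch (my own names, not
the MS-II source; it assumes the capture is processed within one
timer period of the event):

  #include <stdint.h>

  static volatile uint16_t tov_count;      /* timer-overflow tally   */

  void timer_overflow_isr(void)            /* runs when timer wraps  */
  {
      tov_count++;
  }

  /* Widen a 16-bit input-capture value to 32 bits.  tcnt_now is the
     current timer reading; if the capture predates the most recent
     wrap, the high word must be backed off by one. */
  uint32_t extend_capture(uint16_t captured, uint16_t tcnt_now)
  {
      uint16_t hi = tov_count;
      if (captured > tcnt_now)             /* capture was before wrap */
          hi--;
      return ((uint32_t)hi << 16) | captured;
  }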

Note that with two points from input capture against a free-running
counter, one already has an angular "velocity" (well, period,
actually) for the crankshaft. The first difference between the
captured value and the expected one is a measure of the acceleration
when taken in proportion to the period.
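
In code, that observation amounts to something like this sketch
(illustrative names only):

  #include <stdint.h>

  /* Given three consecutive tooth timestamps (oldest first, in
     timer ticks), return the latest period and, via *accel, the
     first difference of periods -- proportional to the angular
     acceleration over the last interval. */
  uint32_t tooth_period(uint32_t t0, uint32_t t1, uint32_t t2,
                        int32_t *accel)
  {
      uint32_t p_old = t1 - t0;
      uint32_t p_new = t2 - t1;
      *accel = (int32_t)p_old - (int32_t)p_new;  /* >0 = speeding up */
      return p_new;
  }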

The problem is the resolution and the mechanical effects that may
cause the captured value to be incorrect.

Also, because every period is the difference of two values, each
quantised at the timer resolution, the notional error of any derived
result is doubled: with a 1 usec tick, each capture is good to about
+/-0.5 usec, so a period is only good to about +/-1 usec. This
introduces an error that may creep (accumulate) in one direction for
a few cycles until the feedback of the measurement "clicks over" and
the error can start again, perhaps from as far as the other end of
the error bounds.

The error bounds need to be kept narrow enough that the effect
doesn't matter on the real engine. The effect may still be
measurable with sufficient instrumentation but should be small
enough to be undetectable in operation.

> First implementation was with what is called last interval
> interpolation.  Just take the last two time points, subtract, and
> use the difference to scale future time events.
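
For reference, that scheme reduces to a couple of lines (my sketch,
not the actual MS-II code):

  #include <stdint.h>

  /* Last-interval scheme: assume the next interval equals the one
     just measured and schedule the output-compare accordingly. */
  uint32_t predict_next_event(uint32_t t_prev, uint32_t t_now)
  {
      return t_now + (t_now - t_prev);
  }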

> After a period of time, there were numerous incidents of timing
> error jitter being reported. This was most prevalent in
> installations where they were using distributor setups where they
> had fixed the mechanical and vacuum advance and let the MS-II
> control timing. A very popular setup. But there was an apparent
> scatter that was visible with a timing light. In particular, off
> idle acceleration events caused instant spikes which quickly
> jumped back. Reports were that this often was associated with
> off-idle hesitation.

This is to be expected, as is "over-run" when the engine
slows...

> This is what prompted the investigation into spark stability, and
> the whole error analysis during acceleration.  Since the most
> popular setup was a four cylinder, 4-cycle distributor replacement
> setup we used this for the analysis.  Thus, a tach pulse every 180
> degrees of crank. The setup for analysis was to assume that for
> the first 180 degrees the crank was at a steady angular velocity,
> no accel component. Use the 0 and 180-degree points to generate a
> prediction. An acceleration is imparted on the crank right at 0
> degrees. To make this a total worst case we assume the crank
> (non-physically) increases at the rate of 8000 RPM/sec. We have
> seen many datalogs of small engines exceeding this rate by a large
> margin in unloaded situations.

That's a very rapid change of rpm, but of little *use*. :-)

> Here is part of what we posted on the MS forums:
>
> "As far as equations, Assume wheel spinning at RPM0 with constant
> acceleration, RPM_dot.  Then from the previous eqs calculate the
> time to get from 0 180 deg(T180) and from 0 to 360 deg(T360). Then
> use the time from 0-180 deg (measured in the IC ISR) to represent
> the predicted time for the next tach input and compare this with
> the calculated T360  - T180. The prediction error = T180 - (T360 -
> T180).
>
>    T180 = [ -RPM0 + Sqrt(RPM0**2 + 60 *RPM_dot) ] / RPM_dot
>    T360 = [ -RPM0 + Sqrt(RPM0**2 + 120 *RPM_dot) ] / RPM_dot
>
> Use RPM_dot as 8000 rpm/sec, and calculate the error for an RPM0 =
> 1000 and an RPM0 = 8000.
>
> The numbers come out to be 4125 us (~25 deg error) for 1000 rpm
> and 14 us (~0.7 deg error) for 8000 rpm."

Those errors are quite a bit larger than my calculations. I'll go
back and check them in detail, then verify using other methods. I
got about 0.3 degrees error from 900 rpm (68 us) using about a
quarter of the acceleration and double the number of feedback
interrupts, making it a ~3 degree error which is an order of
magnitude different to your calculations. Back to the drawing
board for me...
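
For anyone who wants to repeat the check, Bruce's formulas drop
straight into a few lines of standalone C (my sketch; it lands close
to the posted figures):

  #include <math.h>
  #include <stdio.h>

  /* Time (s) to turn 'deg' degrees starting at rpm0 (rev/min) under
     constant acceleration rpm_dot (rev/min/sec); solves
     deg/360 = rpm0*t/60 + rpm_dot*t*t/120 for t. */
  static double t_deg(double deg, double rpm0, double rpm_dot)
  {
      double revs = deg / 360.0;
      return (-rpm0 + sqrt(rpm0 * rpm0 + 120.0 * rpm_dot * revs))
             / rpm_dot;
  }

  int main(void)
  {
      const double rpm_dot = 8000.0;          /* worst-case rpm/sec */
      const double rpm0s[] = { 1000.0, 8000.0 };

      for (int i = 0; i < 2; i++) {
          double t180 = t_deg(180.0, rpm0s[i], rpm_dot);
          double t360 = t_deg(360.0, rpm0s[i], rpm_dot);
          double err  = t180 - (t360 - t180);  /* prediction error, s */
          printf("RPM0 %4.0f: error %7.1f us (~%.1f deg at RPM0)\n",
                 rpm0s[i], err * 1e6, err * rpm0s[i] * 6.0);
      }
      return 0;
  }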

> The analysis then progresses with the use of first and second time
> derivatives on the last interval calc in order to help with the
> overall prediction. Other than at the first point where the
> acceleration is first noted (causality), applying higher-order
> corrections was helpful. But there were new issues. First, when
> an acceleration event ceases there is overshoot. Second, at steady
> state, the higher-order derivatives would introduce very small
> corrections that tended to stack up over time and cause jitter. So
> during steady-state conditions higher-order corrections were not
> applied.

You can't turn noise into signal :-)

Small signals (differences) relative to the error bounds produce a
lot of noise in the result.
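
Bruce's steady-state gating amounts to something like this (again my
sketch, not the MS-II source; ACCEL_DEADBAND is a made-up stand-in
for whatever noise floor the real code uses):

  #include <stdint.h>

  #define ACCEL_DEADBAND 4   /* noise floor, timer ticks (assumed) */

  /* Predict the next tooth period from the last two: apply the
     first-difference ("acceleration") correction only when it
     clears the deadband; otherwise fall back to plain
     last-interval prediction. */
  uint32_t predict_period(uint32_t p_prev, uint32_t p_now)
  {
      int32_t d   = (int32_t)p_now - (int32_t)p_prev;
      int32_t mag = (d < 0) ? -d : d;

      if (mag < ACCEL_DEADBAND)
          return p_now;               /* steady state: no correction */
      return (uint32_t)((int32_t)p_now + d);   /* extrapolate trend  */
  }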

> Now, the question of "is it needed". This is from observation of
> what I have learned from actual implementations. In an absolute
> sense the lack of a higher order correction will not
> prevent an engine from running, or even running better than a
> mechanical distributor setup. The best example of this is MS1,
> which has 0.1 ms injector resolution, 100 RPM resolution, 8-bit
> ADC resolution, etc, probably one of the most resolution-lacking
> setups out there. Yet it has been used to break land-speed
> records, run chainsaws, etc. The coarse nature does not hinder
> engine tuning, but the lack of resolution does show up from
> time to time. More than once I have chased down an issue that was
> the result of a lack of resolution. And anyone who has run huge
> injectors with idle pulsewidths of 1 ms will attest that tuning in
> 0.1 ms steps (10%) is not optimal.

In terms of guessing the position of the crankshaft, the injector
duty cycle only becomes significant with direct injection,
especially compression-ignition, where injection timing is the
ignition timing.

0.1 ms would be borderline for ignition timing, spark or
compression, especially at idle and at higher rpm: at 6000 rpm the
crank turns 36 degrees per millisecond, so one 0.1 ms step is
already 3.6 degrees.

> When we did MS-II, we increased resolution on everything: 1 usec
> injector timing, 1 usec timer for ignition, etc. The increase in
> resolution is noticeable while tuning; ignition/fuel steps are
> much smoother compared to MS1. The way we look at it is that if it
> is possible to easily increase a resolution, calculation dynamic
> range, etc. then let's do it, as long as it does not affect any
> other system. Any place there is an opportunity to decrease errors
> it is worth investigating. And introducing alternate algorithms
> (such as x-tau transient fuel compensation) to evaluate is
> worthwhile - as long as there are the more fundamental methods in
> place to use if required.

Decreasing the noise introduced by coarse resolution is a good
thing. It may allow for _less_ filtering to be used to get a
nicely-responsive engine. But a coarse resolution also means that
you are less likely to even detect such things as torsional
vibration effects as their "signal" could be well within the
resolution of the measurement.

Knowing the resolution of the "instrumentation" is also important,
e.g. hysteresis of sensors, and their time/temperature response.
When times get down to microseconds, the sensor transients may
become bothersome.

-- 
/"\ Bernd Felsche - Innovative Reckoning, Perth, Western Australia
\ /  ASCII ribbon campaign | "If we let things terrify us,
 X   against HTML mail     |  life will not be worth living."
/ \  and postings          | Lucius Annaeus Seneca, c. 4BC - 65AD.





