[Diy_efi] RE: 60-2 Toothwheel - Kalman Filter or ( EKF )
Bruce A Bowling
bbowling
Sat Apr 7 19:18:54 UTC 2007
>
>So how much of a difference do and can the "esoterics" like Kalman
>filtering make to the operation of the engine in the real world?
>
I think I can help lend an answer to this, based on observation and measurement. First, I will set aside the Kalman name and the esoterics (since that is just an implementation scheme) and work more fundamentally on the general question: does a higher-order correction help, compared to simple interpolation of crank-location pulse times based on the last two events?
First, a little history, based on MS experience, particularly MS-II. The ignition operation in MS-II is pretty much the same as you outlined earlier, utilizing input-capture and output-compare hardware based on a 16-bit free-running timer. In order to maintain high accuracy for high tooth-count wheels, the decision was made to go with a 1 usec timer tick. Of course, by doing this, timer rollover becomes something to keep track of; otherwise there is insufficient cranking time available. Note that the 1 usec is a bit arbitrary - slower tick rates will also work. Using 1 usec kept some of the conversion math simpler, and it allowed us to stay with integer arithmetic.
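To make the rollover bookkeeping concrete, here is a minimal sketch of the common technique: an overflow interrupt extends the 16-bit free-running timer to 32 bits, so tooth periods longer than 65.536 ms (slow cranking) can still be timed. The names and the race-handling assumptions here are illustrative, not MS-II's actual code.

```c
#include <stdint.h>

/* Count of 16-bit timer overflows; one count per 65536 usec at a 1 usec tick.
 * In a real ISR-driven system this is volatile and bumped by the overflow
 * interrupt; a snapshot is passed in here for testability. */
void timer_overflow_isr(volatile uint16_t *timer_high) {
    (*timer_high)++;
}

/* Combine the captured 16-bit count with the overflow count to form a
 * 32-bit timestamp.  This sketch assumes the capture and the overflow
 * flag are read coherently (the classic race when a capture lands just
 * after a wrap must be handled in the real ISR). */
uint32_t extend_capture(uint16_t captured, uint16_t high_word) {
    return ((uint32_t)high_word << 16) | captured;
}
```

With a 1 usec tick, the 32-bit extended timestamp wraps only every ~71.6 minutes, which is more than enough headroom between tooth events even at cranking speeds.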
The first implementation used what is called last-interval interpolation: take the last two time points, subtract, and use the difference to scale future timing events.
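A minimal sketch of that last-interval scheme, with illustrative names (not MS-II's actual code): the period between the previous two tooth events is taken as the period of the next one, and spark events are scheduled as a fraction of it.

```c
#include <stdint.h>

/* Last-interval state: timestamp of the most recent tooth and the
 * measured period between the last two teeth, both in 1 usec ticks. */
typedef struct {
    uint32_t last_stamp;
    uint32_t last_period;
} tooth_state;

/* Called on each tooth event (e.g. from the input-capture ISR). */
void tooth_event(tooth_state *s, uint32_t now) {
    s->last_period = now - s->last_stamp;  /* unsigned math handles wrap */
    s->last_stamp  = now;
}

/* Predict the timestamp at which the crank reaches 'deg' degrees past
 * the last tooth, for a tooth spacing of 'tooth_deg' degrees (180 for
 * the 4-cylinder distributor case discussed below). */
uint32_t predict_time(const tooth_state *s, uint32_t deg, uint32_t tooth_deg) {
    return s->last_stamp + (uint32_t)((uint64_t)s->last_period * deg / tooth_deg);
}
```

The prediction is exact only if crank speed is constant over both intervals, which is exactly the assumption that breaks down during acceleration, as described next.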
After a period of time, numerous reports of ignition timing jitter came in. This was most prevalent in installations using distributor setups where the mechanical and vacuum advance had been locked out and MS-II left to control timing - a very popular setup. There was an apparent scatter that was visible with a timing light. In particular, off-idle acceleration events caused instant timing spikes which quickly jumped back, and reports often associated this with off-idle hesitation.
This is what prompted the investigation into spark stability, and the whole error analysis during acceleration. Since the most popular setup was a four-cylinder, 4-cycle distributor-replacement setup, we used this for the analysis: a tach pulse every 180 degrees of crank. The setup for analysis was to assume that for the first 180 degrees the crank was at a steady angular velocity, with no acceleration component. Use the 0 and 180-degree points to generate a prediction. An acceleration is then imparted on the crank right at 0 degrees. To make this a total worst case, we assume the crank speed (non-physically) increases instantly at a rate of 8000 RPM/sec. We have seen many datalogs of small engines exceeding this rate by a large margin in unloaded situations.
Here is part of what we posted on the MS forums:
"As far as equations: assume the wheel is spinning at RPM0 with constant acceleration
RPM_dot. Then from the previous eqs calculate the time to get from 0 to 180 deg (T180)
and from 0 to 360 deg (T360). Then use the time from 0 to 180 deg (measured in the IC
ISR) as the predicted time for the next tach input, and compare this with the
calculated T360 - T180. The prediction error = T180 - (T360 - T180).
T180 = [ -RPM0 + Sqrt(RPM0**2 + 60 *RPM_dot) ] / RPM_dot
T360 = [ -RPM0 + Sqrt(RPM0**2 + 120 *RPM_dot) ] / RPM_dot
Use RPM_dot as 8000 rpm/sec, and calculate the error for an RPM0 = 1000 and
an RPM0 = 8000.
The numbers come out to be 4125 us (~25 deg error) for 1000 rpm and 14 us (~0.7
deg error) for 8000 rpm."
The analysis then progresses to the use of first and second time derivatives on the last-interval calculation in order to help with the overall prediction. Other than at the first point where the acceleration appears (causality), applying higher-order corrections was helpful. But there were new issues. First, when an acceleration event ceases there is overshoot. Second, at steady state, the higher-order derivatives would introduce very small corrections that tended to stack up over time and cause jitter. So during steady-state conditions, higher-order corrections were not applied.
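A hedged sketch of that gating idea, using just the first derivative: extrapolate the trend in the tooth period only when the change exceeds a deadband, so tiny steady-state differences do not stack up into jitter. The deadband value and names here are illustrative assumptions, not MS-II's actual thresholds.

```c
#include <stdint.h>
#include <stdlib.h>

/* Assumed deadband: period changes smaller than this (in usec) are
 * treated as steady state and the correction is suppressed. */
#define ACCEL_DEADBAND_US 20

/* Predict the next tooth period from the last two measured periods.
 * period      = most recent tooth-to-tooth time (usec)
 * prev_period = the one before it (usec) */
uint32_t predict_period(uint32_t period, uint32_t prev_period) {
    int32_t d = (int32_t)period - (int32_t)prev_period;  /* first difference */
    if (abs(d) < ACCEL_DEADBAND_US)
        return period;                     /* steady state: last interval only */
    return (uint32_t)((int32_t)period + d); /* extrapolate the accel trend */
}
```

The overshoot issue noted above falls straight out of this structure: when acceleration stops, the last measured difference is still nonzero, so the extrapolation overcorrects for one interval before the difference collapses back inside the deadband.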
Back to the real engine: with the higher-order corrections, the observed timing scatter at idle was reduced, as was the timing error (the difference between the predicted and actual arrival time of the next tooth). The different correction modes (last interval, alpha-beta-gamma, derivative) are switchable in the code at runtime, so the effect is immediate.
Now, the question of "is it needed". This is based on what I have learned from actual implementations. In an absolute sense, the lack of a higher-order correction will not prevent an engine from running, or even from running better than a mechanical distributor setup. The best example of this is MS1, which has 0.1 ms injector resolution, 100 RPM resolution, 8-bit ADC resolution, etc. - probably one of the most resolution-starved setups out there. Yet it has been used to break land-speed records, run chainsaws, etc. The coarse nature does not hinder engine tuning, but the lack of resolution does show up from time to time. More than once I have chased down an issue that was the result of a lack of resolution. And anyone who has run huge injectors with idle pulsewidths of 1 ms will attest that tuning in 0.1 ms steps (10%) is not optimal.
When we did MS-II, we increased resolution on everything: 1 usec injector timing, 1 usec timer for ignition, etc. The increase in resolution is noticeable while tuning; ignition/fuel steps are much smoother compared to MS1. The way we look at it is that if it is possible to easily increase a resolution, calculation dynamic range, etc., then let's do it, as long as it does not affect any other system. Any place there is an opportunity to decrease errors is worth investigating. And introducing alternate algorithms (such as X-tau transient fuel compensation) for evaluation is worthwhile - as long as the more fundamental methods are in place to fall back on if required.
- Bruce