Jansun wrote: ↑
Fri Apr 20, 2018 1:21 pm
I was wondering if it is possible to calculate the time between injection start and the actual wideband reading.
I can't tell you how to calculate it on paper for any arbitrary operating point, but I measure it as best I can at several points and then have a rough idea what the delay is across the rest of the range. Since the processor and TS readouts are not instantaneous for all channels, I look at the output traces in MLV and determine the relative delay of the various data traces against the timeline, including O2 (Part 1). Then I find the delay from an identifiable injection event (perhaps AE) to the start of the corresponding O2 response, again relative to the timeline (Part 2). Delay = Part 2 - Part 1.
Having said that, Part 1 is relatively small and varies from system to system and even with data rate, and simply using Part 2 (the actual sensor delay) generally gets me in the ballpark for usable information. Hope that helps some; I'm always looking for better ways to do it.
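If you want to automate the Part 2 measurement instead of eyeballing it in MLV, one way is to cross-correlate the injection-event channel against the O2 channel from an exported log and take the lag with the best correlation. This is just a sketch under assumptions: the channel names, sample rate, and synthetic data are mine, not from any particular log format.

```python
import numpy as np

def estimate_delay(event, o2, sample_period_s):
    """Estimate the lag (seconds) at which the O2 trace best
    correlates with the injection-event trace."""
    # Normalize both channels so amplitude differences don't matter
    event = (event - np.mean(event)) / np.std(event)
    o2 = (o2 - np.mean(o2)) / np.std(o2)
    # Full cross-correlation; the peak index gives the best-fit lag
    corr = np.correlate(o2, event, mode="full")
    lag_samples = np.argmax(corr) - (len(event) - 1)
    return lag_samples * sample_period_s

# Synthetic demo: a step in the event channel, echoed 0.3 s later in O2
fs = 100.0                        # assumed 100 Hz log rate
t = np.arange(0, 5, 1 / fs)
event = (t > 1.0).astype(float)   # injection event at t = 1.0 s
o2 = (t > 1.3).astype(float)      # O2 responds at t = 1.3 s
print(estimate_delay(event, o2, 1 / fs))   # ≈ 0.3 s
```

In practice you would feed in two columns from the same log rather than synthetic steps, and a transient region (an AE hit) gives a much cleaner peak than steady-state data.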
PS: Some WBO2 controllers can be programmed for certain advanced functions. The Innovate controllers, for example, can be set to report more slowly but internally average the Lambda between reports, which is very useful for damping a jumpy O2 gauge. The averaging itself is not a concern (it's a good thing), but be aware that some controllers can have, or be set for, different reporting speeds, and that affects the 'delay'. Take that into account for any controller, especially if yours is one of the programmable types or has that function enabled; if you don't know the setting, the effect should still show up in the data delay timeline.
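To put a rough number on that last point: internal averaging behaves like a causal moving-average filter, which adds about (window - 1) / 2 samples of extra delay on top of the sensor's own response. A minimal sketch, with the sample rate and window size being assumptions, not any specific controller's settings:

```python
import numpy as np

fs = 100.0                        # assumed internal sample rate, Hz
window = 10                       # assumed: 10 samples averaged per report
t = np.arange(0, 2, 1 / fs)
lam = (t > 0.5).astype(float)     # a step change in lambda at t = 0.5 s

# Causal moving average: each output is the mean of the last `window` samples
averaged = np.convolve(lam, np.ones(window) / window)[: len(lam)]

# Compare where each trace first crosses 50% of the step
first_orig = np.argmax(lam > 0.5)
first_avg = np.argmax(averaged > 0.5)
print((first_avg - first_orig) / fs)   # extra delay from averaging, ~0.05 s
```

So a controller averaging 10 samples per report at 100 Hz would add roughly 50 ms to the apparent delay; that offset is what should show up in the data delay timeline even if you don't know the setting.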