another tuning tool



steveo
01-05-2023, 08:48 AM
it took way too much time to write.

i think it'll be really handy. it's very good at quickly asking questions about your datalogs

it also does some really neat stuff with tables that can come in handy, like comparisons and interpolations.

under the hood it uses bilinear table lookups just like most ECMs do so it's really good for taking a table from one operating system and mangling it into another one with a totally different layout.
anyway try it out and let me know what you think or not
64 bit windows only release for now
beta
might crash
probably some stuff doesn't work

http://ecmhack.com/tablehack/

bobcratchet555
01-05-2023, 04:33 PM
looks fun - the ability to get tables from other OSes is very convenient. i haven't tried it yet but will in the near future. appreciate other enthusiasts taking the time to share their work with the rest of us.

steveo
01-06-2023, 12:26 AM
some crash courses:

analyze log (example: VE)
1. press datalog and select log and hope it does not crash
2. press analysis
3. click table layout. choose your x/y data (MAP and RPM probably) and either type or copy and paste some column/row values in there, or press 'auto layout'. they can be any reasonable format, it'll figure it out.
3a. if you had to type them in, probably click SAVE so you can RESTORE it next time. this saves the layout for use across the tool
4. go back to the main screen and 'select data' (long term trim maybe)

done

add filters to suit, should be self explanatory

you can 'save' the entire analysis profile too, including data, filters, etc. and restore it for different datalogs later.

table interpolation (ex: port a VE or timing table from one ECM to another):

1. create a new 'table'
2. set a layout matching your original table
3. paste your data there
4. click layout again and enter your desired layout. it'll see you have data and ask what to do with it.

compare should be obvious, select two tables (not a datalog), but what's not obvious is the compare is dynamic, if you alter the source data the comparison is always up to date. it's also totally interpolated so if you compare tables with different row/column values you can do linear interpolation there too.

graphing should be obvious too, right now 2d plot graphs only work for datalogs, 3d graphs for tables


it's really designed for people that would usually copy/paste their data to/from a spreadsheet, so it's the kind of thing where you keep your ordinary tuning tool open in another window and this just fills in any gaps

kur4o
01-06-2023, 01:47 AM
I have been thinking about something like this for a long time.
For a linear layout a nice one would be -> set x,y -> initial/final points -> step

And another very cool one, extrapolate a table with extra rows,columns.
You have 100,200,300,400 for a row -> extrapolate and it adds extra rows with 120,140,160,180,200 and so on and fills the data in.
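something like this is all it would take for the initial/final/step part (a rough sketch, not anything from the tool):

#include <vector>

// generate axis points from an initial value, a final value and a step size
std::vector<double> makeAxis(double first, double last, double step)
{
    std::vector<double> axis;
    for (double v = first; v <= last + step * 0.5; v += step)
        axis.push_back(v);   // e.g. 100, 200, 300, 400 for (100, 400, 100)
    return axis;
}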

steveo
01-06-2023, 04:51 AM
You have 100,200,300,400 for a row -> extrapolate and it adds extra rows with 120,140,160,180,200 and so on and fills the data in.

it does that!

the data that's 'off the edge' is the hardest. right now this tool lets you continue the slope or just set it null and you can fill manually.

i will add a spline interpolation method soon as well, for tables that we know are actually curved too, like a maf table or whatever.

steveo
01-06-2023, 04:54 AM
but another thing is it can compare two tables with the same linear interp

so you can directly compare a table that is 100,200,300,400 with 120,140,160,180,200,300,350,380,400 or whatever

this stuff is really common when you tune subarus and stuff, that's why i needed it so badly for myself
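for illustration, the comparison amounts to resampling one table onto the other's column values with the same linear interpolation first, something like this (a sketch, not the actual code):

#include <vector>

// sample a 1-axis table at an arbitrary x using linear interpolation,
// assuming an ascending axis and x inside its range
double sample1d(const std::vector<double>& axis,
                const std::vector<double>& vals, double x)
{
    size_t i = 0;
    while (i < axis.size() - 2 && x > axis[i + 1]) i++;   // find bracketing cells
    double t = (x - axis[i]) / (axis[i + 1] - axis[i]);   // fraction between them
    return vals[i] + (vals[i + 1] - vals[i]) * t;
}

// to compare a 100,200,300,400 table against a 120,140,...,400 table, sample the
// coarse table at each of the fine table's column values and diff cell by cell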

WASyL
01-06-2023, 10:24 PM
awesome tool, comes in handy when doing lots of tuning with different ECM/PCM but similar engine setups.

steveo
01-07-2023, 05:29 AM
please test it. one thing i know is broken is 2d table interpolation. if you want to do 2d table interpolation for now just make a 3d table with a single column and it'll work, rather than making a 2d table.

next version:

- make csv parser more robust and faster
- use log timestamp if available
- XDF table layout import (since you can't copy paste that stuff from tunerpro, it takes too long to make them)

steveo
01-08-2023, 12:06 AM
http://ecmhack.com/tablehack/

new version:

- way better way faster CSV import. parses, error checks, and builds its table structures pretty quickly, average under 200msec per MB of log data on an i5, so even big 10MB+ logs should load in seconds. as a 64 bit program you could load gigs of logs if you wanted to, i've only tested up to 500mb of test data. added a progress bar for massive csv files so you know it has not crashed. handles backslashes, quotes, and literal quotes (""=") so it should work with any standards compliant csv generator. it still parses things in quotes as numbers if they are numbers, though (actually it stores two copies of your log, one as strings one as converted numbers where strings = 0.00)

- fixed some glitches with the table viewer, scrolling through hundreds of thousands of lines is pretty smooth

- some options for datalog import, optionally continue parsing a CSV with line bugs (columns per line != columns in header, etc), i know some log generators are super broken.

- enabled time axis (for lag filter in analyzer as well as grapher), must be a decimal time to work, will add a timestamp converter later.

- XDF importer. select your xdf and it will list any tables it feels are viable. seems to work well against EEX but please test! it will probably crash with a malformed XDF but who knows.

so now it'll load log(s), load xdf table, select data, and analyze anything vs anything vs anything filtered by anything with arbitrary data lag in a few clicks and a few seconds.

next up are more log viewing improvements like selection data tracing (for example if you select analysis cells of interest it'll highlight or filter your log appropriately) so you can view the parameters of knock events for example. you will then be able to select entries in the log and they will move a cursor in the graph if you have one, so you can 'drill down' into events more easily.

steveo
01-13-2023, 07:12 PM
i am working on a new version that will do much more advanced analysis for tables that the ECM does linear table lookups on (think most VE and maf tables)

doing all this linear/bilinear lookup stuff trying to make it 'think' like an ECM got me thinking

let's define an example table that has 20,40,60,80,100 as columns.

the traditional method: let's say we have a data point of 6 with a lookup value of 25. we go okay, the first cell is close to that value, so, add the data point 6 to the first cell's average. in other words we do nearest neighbour interpolation of the data only.

the results are good on a large sample set because of a crapload of averaging smoothing the results

but this is not really how the data point would be seen by the ECM for a table like a VE table that the ECM does linear interpolation on.

what we actually are saying when we log a lookup value of 25, is that we have a data point that affects a LINE that has both its slope and gain defined by the values of the first two cells of that table

so in effect what we should do is calculate gain and slope of that line by manipulating the adjacent 2 (or 4 in the case of a 3d table) cells for each data point

does that make sense?
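here's a tiny illustration of the difference, using the example numbers above (just a sketch):

#include <cstdio>

int main()
{
    double cols[] = {20, 40, 60, 80, 100};
    double lookup = 25.0;

    // fraction of the way from cols[0] to cols[1]
    double t = (lookup - cols[0]) / (cols[1] - cols[0]);   // 0.25

    // the ECM would read this spot as (1 - t) * cell0 + t * cell1, so a data
    // point of 6 logged at lookup 25 is a statement about the line through
    // cell0 and cell1, not about cell0 alone like nearest-neighbour assumes
    printf("weight on cell0 = %.2f, weight on cell1 = %.2f\n", 1.0 - t, t);
    return 0;
}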

kur4o
01-13-2023, 08:02 PM
the motorola cpus have built-in table lookup opcodes. you need to have not only predefined axis points, you need to have them in hex.

the value used for lookup is usually set with min/max as 00-ff, or for signed ones and 16-bit tables 0000-8000 or 0000-ffff. values are then converted to account for that before being used in the table lookup. if the x-axis is scaled from 00-ff with 00=400 and ff=3600, and a divider of, let's say, 16 points, the lookup value is divided by 16 so it finds the row, then between the 2 adjacent cells it blends the 2 values using the factor that is left over from the division. Hope that makes sense.

That might not be the case with a ppc cpu, or some other cpu that uses scalars set for each table that define the axis points [so it is not linear].


I hope to get you some decompiled routines used with the PPC cpu so you can figure something out of it. Usually all gm 2d and 3d tables have a scalar in front of the table that defines the number of x and y points and is used for division.

The newest stuff mostly has floating point tables, there may be totally different math.

some examples for linear motorola tables

100-200-300

add points from 100 to 200. it will be best if you know how many points there are in hex, but that may be too hard to guess.

so you get 100~110~120~130~140~150~160~170~180~190~200

value at 100=35, value at 200=70

so you find the spread between 70 and 35, in our case it is 35, and multiply with factor 1.1 for 110, 1.2 for 120 and so on till 190 with *1.9

first do the horizontal axis, then extrapolate the vertical axis using the newly added extrapolated x axis data.

I hope any of that makes sense.
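a rough sketch of that fixed-point lookup in C, assuming an 8-bit lookup value and a divider of 16 (so 17 cells); the exact scaling differs per calibration:

#include <stdint.h>

uint8_t lookup8(const uint8_t* cells, uint8_t value)
{
    uint8_t index = value >> 4;        // divide by 16 to find the cell
    uint8_t frac  = value & 0x0f;      // the part left over from the division
    int16_t delta = (int16_t)cells[index + 1] - (int16_t)cells[index];
    // interpolate between the 2 adjacent cells using the leftover fraction
    return (uint8_t)(cells[index] + (delta * frac) / 16);
}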

kur4o
01-13-2023, 08:09 PM
Some clarifications.

when you find the spread, let's say between 20 and 100 the spread is 80, and you've got 10 extrapolated points in between

so for point 0 [20] it is 80*0+20
point 1 is 80*0.1+20
point 2 is 80*0.2+20
and so on
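in code the fill looks something like this (sketch only, example numbers from above):

#include <cstdio>

int main()
{
    double x0 = 20, x1 = 100;   // original axis points, spread = 80
    double v0 = 35, v1 = 70;    // example values at those points
    int steps = 10;             // 10 extrapolated points in between

    for (int i = 0; i <= steps; i++) {
        double frac = (double)i / steps;
        double x = x0 + (x1 - x0) * frac;   // spread * fraction + base
        double v = v0 + (v1 - v0) * frac;   // same fraction applied to the values
        printf("%6.1f -> %6.2f\n", x, v);
    }
    return 0;
}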

steveo
01-14-2023, 12:15 AM
totally makes sense, although i think if i do the math in floating point, the results will be just as usable when scaling tables from lower resolution ecms. i'll have to do some tests on how well it works on a real VE table or whatever. when you have two axes (for a 3d table) things get a bit more complicated. https://en.wikipedia.org/wiki/Bilinear_interpolation

this is a great approximation that i've been using with success, the compiler reduces it to a few instructions and it runs really quickly



// bilinear interpolation at (x, y) inside the cell bounded by (x1, y1) and
// (x2, y2), with corner values q11 (x1,y1), q21 (x2,y1), q12 (x1,y2), q22 (x2,y2)
double bilinear(double x, double y,
                double x1, double y1, double x2, double y2,
                double q11, double q12, double q21, double q22)
{
    double x2x1 = x2 - x1;
    double y2y1 = y2 - y1;
    double x2x  = x2 - x;
    double y2y  = y2 - y;
    double yy1  = y - y1;
    double xx1  = x - x1;
    return 1.0 / (x2x1 * y2y1) * (
        q11 * x2x * y2y +
        q21 * xx1 * y2y +
        q12 * x2x * yy1 +
        q22 * xx1 * yy1
    );
}

steveo
01-14-2023, 06:51 AM
ok i really suck at math and i wish i had paid attention in linear algebra, but i THINK i figured out how to effectively reverse 3d linear interpolation of a table lookup without having to cover an entire whiteboard in math. it's about four times as computationally intensive as just a lookup, since i have to do four transforms, one for each 'encompassing cell' involved: from each cell i change the 'viewpoint' of the 3d shape that effectively joins the four encompassing cells involved in the lookup. someone better at math might be able to do it more efficiently later but this seems to work. i will make it less crashy and do more testing. early results are very promising. data scattered mostly between the cells seems to resolve as expected.

kur4o
01-15-2023, 10:41 AM
This is decompiled code for a 3d table lookup with the e38 ecm. PPC cpu.
It doesn't make sense to me. you can try to figure it out.

For sure v4 and v7 are the count of axis points.

steveo
01-15-2023, 07:11 PM
i used to do a little bit of ppc stuff as a kid let me see if it comes back to me

steveo
01-15-2023, 07:24 PM
by the way this project is showing me i should have paid attention in high school math class. it's really complicated.

kur4o
01-16-2023, 01:10 AM
You are pretty good at math so don't complain that much.

I think ppc table lookup is much much closer to lt1 code as far as table lookup is concerned. The main difference is that it can do word and float table lookups, and use rescalable axes.

I can send you some disassembly if you want to examine some more PPC code. the latest version of IDA can decompile it on the fly, but it is not as accurate as it should be.

I don't think it is any different from the already outlined table lookup in the previous posts.

steveo
01-16-2023, 03:06 AM
You are pretty good at math so don't complain that much.

as soon as i do anything in three dimensions my skills totally break down. i stole the math from wikipedia to determine a point in a 3d plane. i have no damn idea how it works, i am totally lost.


I think ppc table lookup is much much closer to lt1 code as far as table lookup is concerned. The main difference is that it can do word and float table lookups, and use rescalable axes.

that's my first impression at a glance but i haven't looked very hard yet.

in theory for a control system the result of a table lookup with two axes should pretty much always be a bilinear lookup, or a nearest neighbor lookup or something. despite the code differences you would figure the result would be the same, since it's the 'correct' way to look up a thing which is assumed to have straight lines between the points.

i've seen some stepped table lookups using both nearest neighbour and also left or right value aligned, especially in older ECMs. for example, with a table with an axis of 0,20,40,60,80 and values of 2,4,6,8,10,12, if you look up 10 you'll get 2, and if you look up 65 you'll get 10. there's no linear math or interpolating done at all, it takes barely any processing to determine the result, but your values are ALWAYS something that is actually in the table.
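a stepped, left-aligned lookup is basically this (a sketch with my own numbers):

// no interpolation at all: walk to the last axis point <= x and return its value,
// so the result is always something that actually exists in the table
double steppedLookup(const double* axis, const double* vals, int n, double x)
{
    int i = 0;
    while (i < n - 1 && x >= axis[i + 1]) i++;
    return vals[i];
}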

the whole point of my work right now is based on this thought that analysis should be the reverse of the process that produces the result.

follow along with me here and tell me i'm crazy.

if you have a table on an ECM that is doing linear interpolation with a single axis, let's say that axis has columns of 10,20,30,40,50 and you find a data point at x=15 and the data point's value is 2.00

what you have actually found is not a value of 2 at all, because from the perspective of the table, that point is halfway up a line formed by the first two cells.

what you have in fact found is that your data point sits 50% of the way along a straight line that is formed between the first two columns of that table (10 and 20), and that data point is encompassed by those two cells. in other words your data point affects the amplitude of that line, and what we need to be doing is, knowing the x and y position of that point, solve for those two endpoints.
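to put that in equation form (my notation): with t = (15 - 10) / (20 - 10) = 0.5, the logged value is telling us 2.00 = (1 - t) * v1 + t * v2, where v1 and v2 are the values of those first two cells.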

that's really easy with a table with only one axis, but i am having trouble extending it to two axes because it's not as easy to do a reverse lookup. it involves math on a 3d plane, and i've been doing this crap all morning and my head hurts


// z at (x, y) on the plane through the three points a, b, c
double planeZ(double x, double y,
              double ax, double ay, double az,
              double bx, double by, double bz,
              double cx, double cy, double cz)
{
    double d = ( ( bx - ax ) * ( cy - ay ) ) - ( ( cx - ax ) * ( by - ay ) );
    return az +
        ( ( ( ( bx - ax ) * ( cz - az ) ) - ( ( cx - ax ) * ( bz - az ) ) ) / d )
        * ( y - ay )
        -
        ( ( ( ( by - ay ) * ( cz - az ) ) - ( ( cy - ay ) * ( bz - az ) ) ) / d )
        * ( x - ax );
}

anyway, my theory is that while our analysis tools have been throwing the value itself at the first two cells and the averaging has been quenching the noise, and the users have been doing manual common sense smoothing and interpolation by hand, what we actually should be doing is literally reading between the lines to determine the effect on the cells surrounding the data points.

kur4o
01-16-2023, 12:05 PM
Getting a 3d lookup from the pcm's perspective can be split in 2 parts.

First you do a horizontal lookup for the x axis. Get the 2 adjacent cells [cell1-2] from the x-axis value, do some math with them and store the result.

Then the pcm treats the 3d table not as 3d but as a continuous 2d horizontal table where the rows are stitched together [first row - second row - third row and so on]. if you've got a 16x16 table the pcm treats it as a single 256-cell 2d table.

the pcm gets a row scale [y-axis scale factor taken from the y-value], then multiplies it by the column count and gets the new 2 adjacent cells [cell3-4]. the same math used with the first 2 selected cells is done with them.

then the pcm has 2 values, one derived from cell1-2 and one from cell3-4. now you need a scale factor from the y-axis and do the same math: ([3-4] - [1-2]) * y scale factor + [1-2]

to get the final result.

With extrapolation I think you need a 2 stage pass. first extrapolate the x axis and y axis and save the result to a buffer, then fill the x-axis as a first pass, then the y-axis as a second pass. that way you don't have to do 3d on the fly but instead use 2 2d routines.
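a rough sketch of that two-pass lookup, assuming a row-major flattened table (rows stitched one after another) and ascending axes:

double lookup3d(const double* table, const double* xAxis, int nx,
                const double* yAxis, int ny, double x, double y)
{
    // find the bracketing x cells and the fraction between them (no clamping,
    // assumes x and y fall inside the axis ranges)
    int xi = 0;
    while (xi < nx - 2 && x > xAxis[xi + 1]) xi++;
    double tx = (x - xAxis[xi]) / (xAxis[xi + 1] - xAxis[xi]);

    // same for y -- the "row scale"
    int yi = 0;
    while (yi < ny - 2 && y > yAxis[yi + 1]) yi++;
    double ty = (y - yAxis[yi]) / (yAxis[yi + 1] - yAxis[yi]);

    // first pass: interpolate along x in the two bracketing rows
    const double* row1 = table + yi * nx;        // cell1-2 live here
    const double* row2 = table + (yi + 1) * nx;  // cell3-4 live here
    double v12 = row1[xi] + (row1[xi + 1] - row1[xi]) * tx;
    double v34 = row2[xi] + (row2[xi + 1] - row2[xi]) * tx;

    // second pass: ([3-4] - [1-2]) * y scale factor + [1-2]
    return v12 + (v34 - v12) * ty;
}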

steveo
01-16-2023, 06:09 PM
i had a breakthrough in solving my problem while looking at a scatter plot. i was missing the simple solution.

the correct formula for solving an averaging table that will be subjected to linear interpolation is simply linear regression. this derives a best-fit straight line from a data set.

it explains why averaging into the nearest cell on large sets yields results that look good, but in reality the closer the data is to the midpoint between the cells, the more the results would be skewed.
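for reference, the standard least-squares fit is short, something like this sketch (not the actual code from the tool):

#include <vector>
#include <cstddef>

struct LineFit { double slope; double intercept; };

// fit y = slope * x + intercept to a set of samples by least squares
LineFit fitLine(const std::vector<double>& xs, const std::vector<double>& ys)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    size_t n = xs.size();
    for (size_t i = 0; i < n; i++) {
        sx  += xs[i];
        sy  += ys[i];
        sxx += xs[i] * xs[i];
        sxy += xs[i] * ys[i];
    }
    double denom = n * sxx - sx * sx;            // zero if all x are identical
    LineFit f;
    f.slope = (n * sxy - sx * sy) / denom;
    f.intercept = (sy - f.slope * sx) / n;
    return f;
}

// evaluating the fitted line at the two cell positions gives the two cell values
// the scattered data implies, instead of throwing one big average at one cell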

steveo
01-16-2023, 07:19 PM
the algorithm is drafted, now for more migraine inducing math but it's just a matter of time. it will involve at least one linear regression algorithm run for each pair of cells but i believe i can make it run in close to real time if i really optimize it.

i wonder if any other tools are doing this kind of thing, it seems like something that if you were an actual computer scientist instead of an amateur like me you would have realized long ago

In-Tech
01-16-2023, 09:03 PM
steveo, I understand your frustration. Look at some ROM disassembly of old 2732 bs. It's only 8 bit crapola but I do understand that many people get lost because they started with C and can't relate :( Many can't understand old school serial data too.

steveo
01-17-2023, 07:35 AM
this theory is working out insanely well. it will totally blow the doors off the accuracy of other table reconstruction and analysis tools for sure. we have definitely been doing it wrong in some cases. clumps in data distribution between the cells can totally make or break an 'average into the nearest cell' type analysis, whereas once you are in the realm of linear math where tables have a shape in between the cells, clumps don't make a difference. once the linear analysis between each cell point is done, we can have up to 4 lines per cell in a 3d table to try to join up and further filter our data points. another side effect is with a regression algorithm you can easily calculate an error metric, so if you just have white noise data we can refuse to give results there based on a threshold. if you have ever looked at a scatter plot and tried to interpolate table points from it, all of this should make some sense.

steveo
01-18-2023, 08:48 AM
code is almost done. so here's how it works in 3d, it's pretty simple.

we establish a table of x-1 by y-1 cells in both the horizontal and vertical directions; envision them as the lines between the table cells, one set for vertical linear analysis and one for horizontal. four adjacent cells form four containers.

we then parse/filter/lagfilter/store the log data in the appropriate 'container' using a nearest neighbor locating method which effectively places the data into the encompassing 'cells'

when that is done we run a linear regression algorithm on each container. this basically attempts to draw a 'best fit' straight line through the data, just like you would do from a 2d scatter plot, but in both horizontal and vertical directions. an 'average line' rather than just an average is really good at rejecting bad data. if you have a fairly straight line cluster of data and some outliers, the outliers just get rejected

now we establish a reliability metric for each line. if we have a clump of data on one end of the line it probably isn't accurate. we don't use lines that are inaccurate. i evaluate by data points per 1/3rd of the line. if we only have data on one half of the line, it needs to be stronger data than if it were on both ends. just like you would expect.

then we just join the ends of every reliable line at x and y.

if you look at a 3d table graph in tunerpro this should make sense to you.

we are basically collecting a scatter plot along each 'table line' and then we just join the ends of the line up.
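the reliability check can be something in the spirit of this (a sketch, the thresholds here are just for illustration):

#include <vector>

// split the line's span into thirds and see how the data is spread out;
// partial coverage needs more points before the fitted line gets trusted
bool lineIsReliable(const std::vector<double>& xs, double x1, double x2)
{
    int counts[3] = {0, 0, 0};
    for (double x : xs) {
        int bin = (int)((x - x1) / (x2 - x1) * 3.0);
        if (bin < 0) bin = 0;
        if (bin > 2) bin = 2;
        counts[bin]++;
    }
    int covered = (counts[0] > 0) + (counts[1] > 0) + (counts[2] > 0);
    if (covered == 3) return xs.size() >= 4;    // data along the whole line
    return xs.size() >= 12;                     // clumped data needs to be stronger
}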

surprisingly it tests pretty well for things like fuel trims, which aren't really sloped between cells: being stored and retrieved in coarse blocks, the trims should meet the table with practically zero slope, so in that type of analysis the regression just improves noise rejection.

the first real test on a larger data set: i ran a fairly large (10mb) log and checked the timing advance output against load/rpm (so hard output data against hard output data that is synchronized properly), then compared it to the timing table. traditional 'nearest cell average' analysis had up to 4 degrees of error per cell. the linear regression analysis had pretty much zero.

steveo
01-18-2023, 08:58 AM
almost forgot the speed. it flies.
on a low end i5
6 milliseconds for a 100,000 line log with regular averaging
13 milliseconds for the same log with linear regression
so it's pretty much real time
i bet i can shave it down to 10ms. maybe even 4-5ms if i did it in three threads.

steveo
01-22-2023, 01:18 AM
now that i am on a roll i've made a massive change, i am implementing a plain text equation compiler across the entire program.

might kick ms excel out of the dyno and back into the office cubicle it belongs in.

NomakeWan
01-24-2023, 04:45 AM
Well now. That all sounds quite exciting.

If the new tool can do the interpolations for changing fuel type and injector scaling, then I'd say it's indeed game over for Excel. Those were the last two applications that had me busting out spreadsheets. :thumbsup:

steveo
01-24-2023, 06:45 AM
it should help a lot with that, even if it doesn't, it'll help with other tables.

one thing i have added in recently is a straight up calculator table. it used to be a compare table but now it does math

so you can define any number of source tables, whether they be static data, analysis results, or other calculator tables, even if they have different column/row values, and give it an equation. it assigns each table a variable A-W (X-Y return the current cell)

like A - B or B - A would compare tables A and B, or you could do division to figure out the percentage differences

but let's say you have to compare two timing tables which are actually a base and adder table with a multiplier for octane, and one is running at a .90 modifier, you could do something like ( (A + B) * 0.90 ) - ( ( C + D) * 1.0 ).

steveo
02-02-2023, 07:55 AM
alrighty here's the new version, 1.2, the bilinear analysis is there but still has some kinks to work out. there are lots of changes. everything can be stored in the database and retrieved easily. there are color scales. math equation entry everywhere. once you figure out the weirdness of how this tool works, you can get anything from your data you want in a few seconds
http://ecmhack.com/tablehack/

ralmo94
04-01-2023, 06:53 PM
I missed this whole thread.

Nice new tool!

I love that it gives a preview of the data log csv!


Am I to understand that to compare STFT and LTFT I would need to use the calc? Or Bank 1 and Bank 2?
or am I missing something

steveo
04-01-2023, 06:59 PM
tell me what you're trying to do and i'll walk you through it
to compare two banks or whatever does require making an analyzer for each thing and then using the calc
you can do it really quickly by saving/restoring though and it's very powerful once done
future version will improve this

ralmo94
04-01-2023, 08:11 PM
I'd like to have the average of 2 banks the way trimalyzer does,
So I create let's say ve layout analyzer twice, one for each bank, then how do I use the calc to average them?
Hope I'm explaining good enough.

I only speak 2 languages, English and bad English, but I'm better at bad English. Lol

steveo
04-02-2023, 12:02 AM
make one analyzer for the left bank vs rpm vs map or whatever, set it up perfectly with your filters and stuff

use 'database' to save it as 'left bank'

make another analyzer, go to 'database' and load 'left bank'

now change that analyzer to read the right bank's data

go to database and save it as 'right bank'

now you have a separate analysis of left and right bank to look at and they're saved for later use

to view their average, select both (control click them) and open a calculator

in the formula use ( A + B ) / 2

now you have left, right, and average all there

i know this is more complicated than trimalyzer but you can literally do any analysis or conversion with it

steveo
04-02-2023, 12:14 AM
a good example of what i was doing with this tool today, i have AFR analysis data from a wideband in AFR format vs load vs rpm, then i have a fueling table of load vs rpm in lambda format from the actual calibration.

so i load both and use a calculator table like (14.7 / A) - (B) to convert the afr table to lambda and do the comparison

took like 15 seconds to get 'er going, but i see a problem where there's bunk data from closed loop in the table

add a few filters to the analyzer, the comparison is updated automatically

instead of just manually repairing the fueling table, i change the formula in the calculator to output a corrected fuel table. then copy/paste the thing back into the software

ralmo94
04-02-2023, 10:04 AM
I think I understand, I'll have to give it a try when I get time.

This is so much more powerful than trimalyzer, I can't wait to learn to use it! Glad we have it now, appreciate that you are kind enough to share it with the community.

One thing I did notice, when I went to paste column labels for rpm that had commas in them, I had to remove the commas, it said non numeral input or something. Not a big deal, and a lot quicker than entering it all in manually.

steveo
04-02-2023, 06:14 PM
oh i thought i'd done that. my intention was it takes commas, spaces, tabs, or line feeds. i'll fix it in the next version

steveo
04-02-2023, 07:10 PM
i realize it's a bit tedious (duplicating things with the database and all because i have been too lazy to make a button for it) so here's a video on how to average your two BLM data channels

obviously once you've saved your left and right analyzers in the database you can set it up again within seconds


https://www.youtube.com/watch?v=Tg9eANnCh7g

ralmo94
04-07-2023, 05:49 AM
I had a chance to mess with it some more, I even figured out how to combine data logs. Was very helpful for figuring out PE thresholds from data logs.

It didn't like the data at line 20 though, while I was able to select line 20, it didn't see the data there. I had to open csv in a spreadsheet and delete a lot of lines then it worked fine with data on line 6.

Also it didn't let me set up an EQ table because the data was not sequential.

steveo
04-07-2023, 05:55 AM
Also it didn't let me set up an EQ table because the data was not sequential.

what's an eq table?

it only works with data with sequential values in the columns/rows, this is required to do anything useful with cell selection. it's impossible to do a lookup on a table without sequential values.


It didn't like the data at line 20 though, while I was able to select line 20, it didn't see the data there. I had to open csv in a spreadsheet and delete a lot of lines then it worked fine with data on line 6.

can you give me an example of a log that isn't working?

ralmo94
04-07-2023, 07:13 AM
what's an eq table?
I don't remember exactly what I was going to do now, but I was going to set up this PE timing table for comparing something in the log.

Edit, I remember now, I was going to set it up with KR vs Commanded Fuel Ratio, except as the equivalent AFR instead of EQ, because that's what's in the data log. since the table starts at 0.80, the AFR would be 18.3 and go down from there, and since it's decreasing, it said it was non sequential.




can you give me an example of a log that isn't working?
Here you go. I had to zip it to attach it here.

steveo
04-07-2023, 04:51 PM
i will look at the log
it only supports axis values in ascending order
this may change but it's unlikely, as it's really rare that you need that

steveo
04-07-2023, 05:22 PM
oh it's hptuners again. there is no CSV parser in the world that would read that unless it had a specific 'hptuners mode'.

edit: to be clear though i will fix it this weekend

steveo
04-09-2023, 05:48 PM
okay, i fixed a few bugs and the parser works with your hptuners file now

http://ecmhack.com/tablehack/

steveo
04-09-2023, 08:23 PM
i kept going because it is rainy out

fixed more bugs, made a new datalog loader window and progress thing, faster log loading because of course, and also finally made a clone button so you don't have to fiddle with the database as much

http://ecmhack.com/tablehack/