
Thread: another tuning tool

  3. #18
    Fuel Injected! | Join Date: Mar 2013 | Posts: 1,470
    You are pretty good at math, so don't complain that much.

    I think the PPC table lookup is much closer to the LT1 code as far as table lookup is concerned. The main differences are that it can do word and float table lookups and use rescalable axes.

    I can send you some disassembly if you want to examine some more PPC code. The latest version of IDA can decompile it on the fly, but it is not as accurate as it should be.

    I don't think it is any different from the table lookup already outlined in the previous posts.

  4. #19
    LT1 specialist steveo | Join Date: Aug 2013 | Posts: 4,008
    Quote:
    You are pretty good at math, so don't complain that much.
    as soon as i do anything in three dimensions my skills totally break down. i stole the math from wikipedia to determine a point in a 3d plane. i have no damn idea how it works, i am totally lost.

    Quote:
    I think the PPC table lookup is much closer to the LT1 code as far as table lookup is concerned. The main differences are that it can do word and float table lookups and use rescalable axes.
    that's my first impression at a glance but i haven't looked very hard yet.

    in theory, for a control system, the result of a table lookup with two axes should pretty much always be a bilinear lookup or a nearest-neighbor lookup. despite the code differences you would figure the result would be the same, since that's the 'correct' way to look up a thing that is assumed to have straight lines between the points.

    i've seen some stepped table lookups using both nearest-neighbour and left- or right-value-aligned behaviour, especially in older ECMs. for example, with a table with an axis of 0,20,40,60,80 and values of 2,4,6,8,10: a left-aligned lookup of 10 gets you 2, and a right-aligned lookup of 65 gets you 10. there's no linear math or interpolating done at all, and it takes barely any processing to determine the result, but your values are ALWAYS something that is actually in the table.
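
    just to illustrate, here's a minimal sketch of that kind of stepped lookup in C (my own illustration, not code pulled from any actual ECM; the arrays are just the example numbers above):
    Code:
    /* stepped 1-axis lookup: no interpolation, so the result is always a
       value that actually exists in the table. left-aligned returns the
       value at the breakpoint at or below x; a right-aligned variant would
       return the one above instead. */
    int stepped_lookup(const int *axis, const int *vals, int n, int x)
    {
        int i;
        for (i = n - 1; i > 0; i--)
            if (x >= axis[i])
                return vals[i];   /* left-aligned: take the lower breakpoint */
        return vals[0];           /* clamp below the start of the axis */
    }
    /* example above: axis {0,20,40,60,80}, vals {2,4,6,8,10}
       stepped_lookup(axis, vals, 5, 10) == 2 */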

    the whole point of my work right now is based on this thought that analysis should be the reverse of the process that produces the result.

    follow along with me here and tell me i'm crazy.

    if you have a table on an ECM that is doing linear interpolation with a single axis, let's say that axis has columns of 10,20,30,40,50, and you find a data point at x=15 whose value is 2.00.

    what you have actually found is not a value of 2 at all, because from the perspective of the table, that point is halfway up a line formed by the first two cells.

    what you have in fact found is that your data point is 50% of the way along a straight line formed between the first two columns of that table (10 and 20), and that data point is encompassed by those two cells. in other words, your data point affects the amplitude of that line; given that the slope of the line and the x and y position of that point are known, what we need to do is solve for those two end points.
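
    to pin that down with the numbers above, here's the ordinary interpolation written out (a generic sketch, nothing specific to any particular ECM):
    Code:
    /* forward direction: what the ECM does between two adjacent columns */
    double interp1(double x0, double x1, double v0, double v1, double x)
    {
        double t = (x - x0) / (x1 - x0);   /* t = 0.5 for x=15 between 10 and 20 */
        return v0 + t * (v1 - v0);
    }
    /* reverse direction: the logged point (x=15, z=2.00) only tells us that
       2.00 = 0.5*v0 + 0.5*v1 -- one equation, two unknown cell values */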

    that's really easy with a table with only one axis, but i am having trouble extending it to two axes because it's not as easy to do a reverse lookup. it involves math on a 3d plane, and i've been doing this crap all morning and my head hurts.

    Code:
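    /* z value at (x, y) on the plane through the three points
       a = (ax, ay, az), b = (bx, by, bz), c = (cx, cy, cz) */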
    z = az +
          (
            ( ( ( ( bx - ax ) * ( cz - az ) ) - ( ( cx - ax ) * ( bz - az ) ) ) /
              ( ( ( bx - ax ) * ( cy - ay ) ) - ( ( cx - ax ) * ( by - ay ) ) ) )
            * ( y - ay )
            )
          -
          (
            ( ( ( ( by - ay ) * ( cz - az ) ) - ( ( cy - ay ) * ( bz - az ) ) ) /
              ( ( ( bx - ax ) * ( cy - ay ) ) - ( ( cx - ax ) * ( by - ay ) ) ) )
            * ( x - ax)
            );
    anyway, my theory is this: while our analysis tools have been throwing the value itself at the first two cells, the averaging has been quenching the noise, and the users have been doing manual common-sense smoothing and interpolation by hand. what we actually should be doing is literally reading between the lines to determine the effect on the cells surrounding the data points.

  5. #20
    Fuel Injected! | Join Date: Mar 2013 | Posts: 1,470
    Getting a 3D lookup, from the PCM's perspective, can be split into two parts.

    First it does a horizontal lookup on the x axis: it gets the two adjacent cells [cell1-2] from the x-axis value, does some math with them, and stores the result.

    Then the PCM treats the 3D table not as 3D but as one continuous 2D horizontal table, with the rows stitched together [first row - second row - third row and so on]. A 16x16 table is treated as a single 256-cell 2D table.

    The PCM takes a row scale [a y-axis scale factor derived from the y value], multiplies it by the column count, and lands on the next two adjacent cells [cell3-4]. The same math used on the first two selected cells is done on them.

    Now the PCM has two values, one derived from cell1-2 and one from cell3-4. Take the scale factor from the y axis and do the same math again: ([3-4] - [1-2]) * y scale factor + [1-2], which gives the final result.
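
    Something like this in C, just to illustrate the flattened-row idea [the function and variable names here are mine, not taken from any actual PCM code, and there is no clamping]:
    Code:
    /* 3D lookup over a table stored as one flat array, rows stitched end to end */
    float lookup3d(const float *tbl, const float *xaxis, int cols,
                   const float *yaxis, int rows, float x, float y)
    {
        int xi = 0, yi = 0;
        float xf, yf, r0, r1;

        /* find the lower adjacent cell and a 0..1 scale factor on each axis */
        while (xi < cols - 2 && x > xaxis[xi + 1]) xi++;
        while (yi < rows - 2 && y > yaxis[yi + 1]) yi++;
        xf = (x - xaxis[xi]) / (xaxis[xi + 1] - xaxis[xi]);
        yf = (y - yaxis[yi]) / (yaxis[yi + 1] - yaxis[yi]);

        /* cell1-2: horizontal interpolation in the current row */
        r0 = tbl[yi * cols + xi] + xf * (tbl[yi * cols + xi + 1] - tbl[yi * cols + xi]);
        /* cell3-4: the same math one row (cols cells) further along the flat array */
        r1 = tbl[(yi + 1) * cols + xi] + xf * (tbl[(yi + 1) * cols + xi + 1] - tbl[(yi + 1) * cols + xi]);

        /* blend the two row results with the y scale factor */
        return r0 + yf * (r1 - r0);
    }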

    With extrapolation, I think you need a two-stage pass: first extrapolate the x axis and the y axis and save the result to a buffer, then fill along the x axis as a first pass and along the y axis as a second pass. That way you don't have to do 3D on the fly but can use two 2D routines instead.

  6. #21
    LT1 specialist steveo | Join Date: Aug 2013 | Posts: 4,008
    i had a breakthrough in solving my problem while looking at a scatter plot. i was missing the simple solution.

    the correct formula for solving an averaging table that will be subjected to linear interpolation is simply linear regression, which derives a best-fit straight line from a data set.

    it explains why averaging into the nearest cell on large sets yields results that look good; in reality, though, the closer the data is to the midpoint between the cells, the more the results get skewed.
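
    for reference, this is the textbook least-squares fit i mean (a generic sketch; not necessarily line-for-line what my code does):
    Code:
    /* ordinary least-squares fit z = slope*x + intercept over n points
       (assumes the x values aren't all identical) */
    void linreg(const double *x, const double *z, int n,
                double *slope, double *intercept)
    {
        double sx = 0, sz = 0, sxx = 0, sxz = 0;
        int i;
        for (i = 0; i < n; i++) {
            sx  += x[i];
            sz  += z[i];
            sxx += x[i] * x[i];
            sxz += x[i] * z[i];
        }
        *slope = (n * sxz - sx * sz) / (n * sxx - sx * sx);
        *intercept = (sz - *slope * sx) / n;
    }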

  7. #22
    LT1 specialist steveo | Join Date: Aug 2013 | Posts: 4,008
    the algorithm is drafted; now for more migraine-inducing math, but it's just a matter of time. it will involve at least one linear regression run for each pair of cells, but i believe i can make it run in close to real time if i really optimize it.

    i wonder if any other tools are doing this kind of thing. it seems like something an actual computer scientist, instead of an amateur like me, would have realized long ago.

  8. #23
    Fuel Injected! | Join Date: Nov 2017 | Location: Californiacation | Age: 57 | Posts: 811
    steveo, I understand your frustration. Look at some ROM disassembly of old 2732 bs. It's only 8-bit crapola, but I do understand that many people get lost because they started with C and can't relate :( Many can't understand old-school serial data either.
    -Carl

  9. #24
    LT1 specialist steveo | Join Date: Aug 2013 | Posts: 4,008
    this theory is working out insanely well. it will totally blow the doors off the accuracy of other table reconstruction and analysis tools for sure. we have definitely been doing it wrong in some cases.

    clumps in data distribution between the cells can totally make or break an 'average into the nearest cell' type analysis, whereas once you are in the realm of linear math where tables have a shape in between the cells, clumps don't make a difference. once the linear analysis between each cell point is done, we can have up to 4 lines per cell in a 3d table to try to join up and further filter our data points.

    another side effect is that with a regression algorithm you can easily calculate an error metric, so if you just have white noise data we can refuse to give results there based on a threshold. if you have ever looked at a scatter plot and tried to interpolate table points from it, all of this should make some sense.
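
    the error metric could be something as simple as an r-squared on the fitted line (that exact metric is my assumption here; the thresholding idea is the point):
    Code:
    /* goodness-of-fit for a fitted line: ~1.0 = clean trend, ~0 = white noise;
       cells whose fit falls below some threshold just give no result */
    double fit_quality(const double *x, const double *z, int n,
                       double slope, double intercept)
    {
        double mean = 0, ss_res = 0, ss_tot = 0, e;
        int i;
        for (i = 0; i < n; i++)
            mean += z[i];
        mean /= n;
        for (i = 0; i < n; i++) {
            e = z[i] - (slope * x[i] + intercept);   /* residual to the line */
            ss_res += e * e;
            ss_tot += (z[i] - mean) * (z[i] - mean);
        }
        return ss_tot > 0 ? 1.0 - ss_res / ss_tot : 0.0;
    }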

  10. #25
    LT1 specialist steveo | Join Date: Aug 2013 | Posts: 4,008
    code is almost done. so here's how it works in 3d, it's pretty simple.

    we establish a table of x-1 by y-1 cells in both the horizontal and vertical directions; envision them as the lines between the table cells, one set for vertical linear analysis and one for horizontal. four adjacent cells form four containers.

    we then parse/filter/lagfilter/store the log data in the appropriate 'container' using a nearest-neighbor locating method, which effectively places the data into the encompassing 'cells'.

    when that is done we run a linear regression algorithm on each container. this basically attempts to draw a 'best fit' straight line through the data, just like you would do from a 2d scatter plot, but in both horizontal and vertical directions. an 'average line' rather than just an average is really good at rejecting bad data. if you have a fairly straight line cluster of data and some outliers, the outliers just get rejected.

    now we establish a reliability metric for each line. if we have a clump of data on one end of the line it probably isn't accurate. we don't use lines that are inaccurate. i evaluate by data points per 1/3rd of the line. if we only have data on one half of the line, it needs to be stronger data than if it were on both ends. just like you would expect.

    then we just join the ends of every reliable line at x and y.

    if you look at a 3d table graph in tunerpro this should make sense to you.

    we are basically collecting a scatter plot along each 'table line' and then we just join the ends of the line up.
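
    roughly what one of those horizontal 'table lines' looks like in code (the names, thresholds and reliability rule here are my own illustration of the description above, not the actual tool source; it reuses linreg() and fit_quality() from the earlier sketches):
    Code:
    /* the points that landed in one 'container', i.e. between two adjacent
       column breakpoints x0..x1 along one horizontal line of the table */
    struct container {
        double *x, *z;   /* logged x positions and logged output values */
        int n;
    };

    /* from the earlier sketches */
    void linreg(const double *x, const double *z, int n, double *m, double *b);
    double fit_quality(const double *x, const double *z, int n, double m, double b);

    /* fit a best-fit line through one container; if it looks reliable, join its
       two ends onto the bordering cells (accumulated here as running averages) */
    void solve_line(const struct container *c, double x0, double x1,
                    double *left_sum, int *left_n,
                    double *right_sum, int *right_n)
    {
        double m, b;
        int i, thirds[3] = {0, 0, 0}, spread;

        /* reliability: count data points per 1/3rd of the line and demand
           more points when the data only sits at one end */
        for (i = 0; i < c->n; i++) {
            int t = (int)(3.0 * (c->x[i] - x0) / (x1 - x0));
            if (t < 0) t = 0;
            if (t > 2) t = 2;
            thirds[t]++;
        }
        spread = (thirds[0] > 0) + (thirds[1] > 0) + (thirds[2] > 0);
        if (c->n < (spread == 1 ? 10 : 3))
            return;                          /* too little or too lopsided */

        linreg(c->x, c->z, c->n, &m, &b);
        if (fit_quality(c->x, c->z, c->n, m, b) < 0.5)
            return;                          /* basically noise: no result */

        /* the ends of the reliable line are what the bordering cells see */
        *left_sum  += m * x0 + b; (*left_n)++;
        *right_sum += m * x1 + b; (*right_n)++;
    }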

    surprisingly it tests pretty well for things like fuel trims, which aren't really sloped between cells; because they're stored and retrieved in coarse blocks, the trims should meet the table with practically zero slope, so in that type of analysis the regression just improves noise rejection.

    the first real test on a larger data set: i ran a fairly large (10mb) log and checked the timing advance output against load/rpm (so hard output data against hard output data that is synchronized properly), then compared it to the timing table. traditional 'nearest cell average' analysis had up to 4 degrees of error per cell. the linear regression analysis had pretty much zero.

  11. #26
    LT1 specialist steveo | Join Date: Aug 2013 | Posts: 4,008
    almost forgot the speed: it flies.
    on a low-end i5:
    6 milliseconds for a 100,000 line log with regular averaging
    13 milliseconds for the same log with linear regression
    so it's pretty much real time
    i bet i can shave it down to 10ms. maybe even 4-5ms if i did it in three threads.

  12. #27
    LT1 specialist steveo | Join Date: Aug 2013 | Posts: 4,008
    now that i'm on a roll i've made a massive change: i'm implementing a plain-text equation compiler across the entire program.

    might kick ms excel out of the dyno and back into the office cubicle it belongs in.

  13. #28
    Fuel Injected! | Join Date: Jul 2019 | Location: Orange, CA | Posts: 757
    Well now. That all sounds quite exciting.

    If the new tool can do the interpolations for changing fuel type and injector scaling, then I'd say it's indeed game over for Excel. Those were the last two applications that had me busting out spreadsheets.
    1990 Corvette (Manual)
    1994 Corvette (Automatic)
    1995 Corvette (Manual)

  14. #29
    LT1 specialist steveo | Join Date: Aug 2013 | Posts: 4,008
    it should help a lot with that; even if it doesn't, it'll help with other tables.

    one thing i've added recently is a straight-up calculator table. it used to be a compare table, but now it does math.

    so you can define any number of source tables, whether they're static data, analysis results, or other calculator tables, even if they have different column/row values, and give it an equation. it assigns each table a variable A-W (X and Y return the current cell).

    like A - B or B - A would compare tables A and B, or you could do division to figure out the percentage differences

    but let's say you have to compare two timing tables which are actually a base and an adder table with a multiplier for octane, and one is running at a 0.90 modifier; you could do something like ( (A + B) * 0.90 ) - ( (C + D) * 1.0 ).
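
    cell for cell that example boils down to something like this (hardcoding the equation; the real thing compiles it from text and handles mismatched axes, which this sketch doesn't):
    Code:
    /* apply ((A + B) * 0.90) - ((C + D) * 1.0) cell by cell; all four
       source tables are assumed to share the same dimensions here */
    void calc_table(const float *A, const float *B, const float *C,
                    const float *D, float *out, int rows, int cols)
    {
        int i;
        for (i = 0; i < rows * cols; i++)
            out[i] = ((A[i] + B[i]) * 0.90f) - ((C[i] + D[i]) * 1.0f);
    }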

  15. #30
    LT1 specialist steveo | Join Date: Aug 2013 | Posts: 4,008
    alrighty, here's the new version, 1.2. the bilinear analysis is there but still has some kinks to work out. there are lots of changes: everything can be stored in the database and retrieved easily, there are color scales, and there's math equation entry everywhere. once you figure out the weirdness of how this tool works, you can get anything you want from your data in a few seconds.
    http://ecmhack.com/tablehack/

