Defining Awesome
  • Status Updates

  • Written by . Posted at 4:47 am on September 6th, 2010

    What would be a good compression method for small data? It seems that if I compress around 100 bytes with LZ, I get a larger size!
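    A quick way to see this overhead, using Python's zlib as a stand-in LZ codec (the demo input is mine, chosen to have no repetition):

```python
import zlib

small = bytes(range(100))       # 100 bytes with no repeated patterns
packed = zlib.compress(small, 9)
print(len(small), len(packed))  # the "compressed" form is larger: the
                                # stream header, checksum, and Huffman
                                # coding overhead cost more than LZ saves
```

    On tiny inputs, the container overhead alone can exceed any savings.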


    9 comments.

    1. Well, if the data stream is already optimized (i.e. you reduce the resolution of floats if you know they don’t have to be so accurate / are only in a certain range), there’s not much you can do. You could try sorting the data (Burrows–Wheeler transform) and doing replacements on repeating bytes (run-length encoding). Stuff like Huffman won’t help much, I think. Note that I’m no pro on that stuff, I just read a bit about data compression some time ago.
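      A minimal sketch of the run-length encoding idea from the comment above (function names are mine; runs are capped at 255 so each fits in one count byte):

```python
def rle_encode(data: bytes) -> bytes:
    """Run-length encode: each run becomes a (count, byte) pair, count <= 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    """Invert rle_encode by expanding each (count, byte) pair."""
    out = bytearray()
    for count, value in zip(data[::2], data[1::2]):
        out += bytes([value]) * count
    return bytes(out)
```

      Note that RLE only wins when the data actually contains runs; on byte streams with few repeats it doubles the size.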


    2. reduce the resolution of floats
      Good tip, thanks. Any other stuff like that?


    3. Where can I download Link-Dead?


    4. @b0b3r

      The alpha hasn’t been released yet.
      So just wait a few days and you will get an awesome present 😉


    5. niko šveikovsky

      I would recommend making your own compression algorithm, something with very concise section headers and footers. Though the compression won’t be as efficient as it is for larger data, something can probably be managed that will still make a difference. Note that while compressing data for networking can sync things up faster, it adds a significant load to local processing. Cater to home network connections, but also be sure to use a fast packing/unpacking algorithm, so as to reach a happy medium for those without beast CPUs.


    6. Some time ago, when I was trying to figure out the compression used on B&W images in a custom embedded system, I discovered that there were two methods used, depending on which one was more efficient for each line of pixels. It was either RLE for simple patterns, or one that used 1 byte to represent 8 pixels in a row and was way better for lines with a lot of changes. I guess something similar can be used for net code: pack each 100 bytes using both methods and send whichever packet is smaller.
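      The pick-the-smaller-encoding trick above can be sketched like this; a 1-byte tag records which codec was used so the receiver knows how to unpack. (This sketch uses raw-vs-zlib as the two methods purely for illustration; the comment's actual pair was RLE vs. a bitmap scheme, and all names here are mine.)

```python
import zlib

def pack_best(chunk: bytes) -> bytes:
    """Encode with both methods and keep whichever result is smaller.

    Tag byte 0x00 = stored as-is, 0x01 = zlib-compressed.
    """
    raw = b"\x00" + chunk
    deflated = b"\x01" + zlib.compress(chunk, 9)
    return min(raw, deflated, key=len)

def unpack(packet: bytes) -> bytes:
    """Dispatch on the tag byte to undo pack_best."""
    tag, body = packet[0], packet[1:]
    return body if tag == 0 else zlib.decompress(body)
```

      Repetitive chunks come back tagged 0x01 and shrink; incompressible chunks fall back to stored form and only pay the one tag byte.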


    7. Are you already doing things like delta compression (based upon the last packet etc)?

      Maybe you can compress by avoiding sending data in the first place (only send changed values), or compress based on the previous packet, etc.


    8. i would recommend making your own compression algorithm
      This would be best, and in line with my philosophy :)

      laggyluk: but it wasn’t lossless?

      Andrew: yes, I’m already doing deltas. What I can improve is compressing the changed values based on the previous data.
      So if the position changed from 3,5 to 3,8, instead of sending 3,8, send 3,8 - 3,5 = 0,3.
      0,3 compresses much nicer.
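      Reading the example above as a 2-D position going from (3, 5) to (3, 8), the per-component delta can be sketched like this (function names are mine; components are assumed already quantized to integers):

```python
def position_delta(prev: tuple, curr: tuple) -> tuple:
    """Send the per-component difference instead of the absolute position."""
    return tuple(c - p for p, c in zip(prev, curr))

def apply_position_delta(prev: tuple, delta: tuple) -> tuple:
    """Reconstruct the current position from the previous one plus the delta."""
    return tuple(p + d for p, d in zip(prev, delta))

# (3, 5) -> (3, 8) sends (0, 3): values clustered near zero pack into
# fewer bits than full absolute coordinates.
```

      Small deltas also entropy-code well, since most of them repeat the same few values from packet to packet.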


    9. it was lossless


