I'm probably getting pedantic here, but I was curious to see how the various size-reduction methods stacked up and wanted to post those results here. Additionally, I've come up with another method for reducing .json size that gets things even smaller yet... at the expense of a guaranteed precision loss.
My test file is one of the levels I've developed for the game I'm working on. My scene stats script reports:
01:52:45: Scene statistics
01:52:45: Body count: 3
01:52:45: - static: 3
01:52:45: - kinematic: 0
01:52:45: - dynamic: 0
01:52:45: Fixture count: 15
01:52:45: - Sensor: 1
01:52:45: Joint count: 0
01:52:45: Image count: 8
Doesn't seem like much, but I've got the polygon radius set to 1 for all the fixtures, which yields 391 fixtures at run-time.
My new size-reduction method is somewhat similar to the compact method. Since my platformer-style game doesn't really require much precision, I created a perl script that simply rounds floats to 3 decimal places.
Example: 0.2000000029802322 becomes 0.200
For my test input, I use the .json produced with the human readable and compact options set.
Here's the perl script invocation:
perl -pi -e 's/([-+]?[0-9]*\.[0-9]+)/sprintf("%.3f",$1)/ge' TutHowToFlyTallhumc.json
Given that, here are the file sizes (in bytes) I'm seeing:
Original rube file: 37457
Human readable, compact off: 132719
Human readable, compact on: 126410
Hex floats: 115176
Human readable, compact on, perl processed: 104435
I don't have any solid numbers as far as load times go. My mobile deployment currently uses hex floats, which take a few more operations per float to decode. I'm curious to learn whether there's any noticeable improvement using the perl-processed file; my gut tells me it would only become apparent for extremely large files. At a guess, the current hex-float load for this file takes roughly 3 seconds.
Thought others might be interested.
If/when I take some timing measurements, I'll follow up here.