Will we get a huge change log (or a summary change log) covering 0.58.1 to Unification 60.0?
#1
I would love to see all the log notes, such as "changed wall limit from 72 to 9000" or "raised total objects rendered in a given map from roughly 300 to 9000". Something like that for all the major changes and bug fixes would be great.
#2
You should have asked me about 2 years ago. ;)  You can get the raw git commit logs with

    git log github/unification/master --not 0.58.1-d1x 0.58.1-d2x

optionally adding a --pretty=format specification, or using git shortlog to get just the subject lines.  For the most part, my commit messages have only called out fixes for regressions introduced earlier in the development cycle, so that people following along would know when to upgrade away from a buggy version.  I do not obfuscate fixes for non-regression bugs (whether Rebirth-specific or vintage Parallax), but I have not followed any pattern that makes them easy to search out, either.

As regards limits, I do not think I raised any of the limits whose increase would make a unification savegame incompatible with a previous release.  I rearranged the renderer to use dynamic allocations, so it may handle object-heavy scenes better than older builds did.  That rearrangement should make it fairly easy to raise the limit on the number of concurrently rendered segments, but that limit has not been raised yet.  Since it does not affect saves, raising it would not break compatibility.
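A rough sketch of the shape of that change, with made-up names rather than the real renderer structures:

    #include <cstdint>
    #include <vector>

    struct render_list
    {
        // Old style: a fixed-size array with a compile-time cap, e.g.
        //   uint16_t rendered_segments[MAX_RENDERED_SEGMENTS];
        //   unsigned n_rendered;
        // New style: the list grows as needed, so raising (or removing) the cap
        // is a small local change and never touches data stored in a savegame.
        std::vector<std::uint16_t> rendered_segments;
    };

    void add_rendered_segment(render_list &rl, std::uint16_t segnum)
    {
        rl.rendered_segments.push_back(segnum);
    }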

I converted many of the game's loops to sanity-check their bounds before use.  Previously, loading a corrupt or ill-formed file would lead to memory corruption.  Now, loading such a file is often still fatal, but the game dies immediately and cleanly rather than failing in some strange way much later on.
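The pattern looks roughly like this (hypothetical names and limit, not the project's actual loader code):

    #include <cstdio>
    #include <cstdlib>

    // Made-up limit for illustration: validate a count read from the file
    // before using it as a loop bound, so a corrupt file fails fast instead
    // of scribbling over memory.
    constexpr unsigned MAX_WALLS_EXAMPLE = 254;

    void load_walls(std::FILE *fp)
    {
        unsigned count;
        if (std::fread(&count, sizeof(count), 1, fp) != 1 || count > MAX_WALLS_EXAMPLE)
        {
            std::fprintf(stderr, "bad wall count in level file\n");
            std::exit(1);   // die immediately and cleanly
        }
        for (unsigned i = 0; i < count; ++i)
        {
            // ... read one wall record; the bound is now known to be sane
        }
    }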

I heavily reworked how the game addresses segments and objects internally.  I expect this to improve performance by eliminating redundant calculations.  The original game had a bad habit of converting back and forth between pointers and integer indices, so f1 might pass a pointer to f2, which converts it to an index to pass to f3, which then converts it back to the very pointer that f2 received originally.  By passing the pointer through from one end to the other, you spend one extra pointer's worth of memory and avoid the cost of converting pointer -> integer -> pointer at each hop.  One function with this anti-pattern carried a Parallax comment stating that it was "Optimized by MK on 4/21/94 because it is a 2% load."  I have not profiled it to see what load it has today, but I expect it to be much cheaper to run now than it was in the last release.
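In code, the anti-pattern and the fix look roughly like this (hypothetical names, not the actual functions):

    // Made-up function and array names, purely to illustrate the round trip.
    struct object_example { int type; };
    object_example Objects_example[100];

    // Old pattern: f1 holds a pointer, f2 converts it to an index for f3,
    // and f3 converts the index back into the very pointer f2 started with.
    void f3_old(unsigned objnum) { object_example *o = &Objects_example[objnum]; (void)o; }
    void f2_old(object_example *o) { f3_old(static_cast<unsigned>(o - Objects_example)); }
    void f1_old() { f2_old(&Objects_example[5]); }

    // New pattern: pass the pointer straight through and skip the conversions.
    void f3_new(object_example *o) { (void)o; }
    void f2_new(object_example *o) { f3_new(o); }
    void f1_new() { f2_new(&Objects_example[5]); }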

There are lots of little places where off-by-one bugs have been caught and fixed, as well as places where a conditional was always true (or false) because it examined field A, but meant to examine field B.
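For illustration only (made-up structure, fields, and constant, not real game code), those two bug classes look like this:

    constexpr int MAX_SIDES_EXAMPLE = 6;

    struct segment_example { int special; int matcen_num; int sides[MAX_SIDES_EXAMPLE]; };

    void fixup(segment_example &s)
    {
        // Wrong-field bug: the test meant to examine s.special but examined
        // s.matcen_num, so it was effectively always true (or always false).
        //   if (s.matcen_num != 0) { ... }   // buggy
        if (s.special != 0)
        {
            // ... intended behaviour
        }

        // Off-by-one bug: "<=" walks one element past the end of sides[].
        //   for (int i = 0; i <= MAX_SIDES_EXAMPLE; ++i) s.sides[i] = 0;   // buggy
        for (int i = 0; i < MAX_SIDES_EXAMPLE; ++i)
            s.sides[i] = 0;                                                 // fixed
    }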

On the downside, compilation is much, much slower: the current code takes roughly 8x as long as 0.58.1 to build.  On the positive side, the current code has much more thorough compile-time and runtime checking, and some classes of mistake are now compile-time errors.
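As a sketch of the sort of compile-time checking involved (made-up types, not the project's actual ones): with strongly typed index wrappers, an object index can no longer be passed where a segment index is expected.

    #include <cstddef>

    enum class objnum_t : std::size_t {};
    enum class segnum_t : std::size_t {};

    void render_segment(segnum_t) { /* ... */ }
    void move_object(objnum_t) { /* ... */ }

    void example()
    {
        objnum_t o{12};
        segnum_t s{34};
        move_object(o);          // fine
        render_segment(s);       // fine
        // render_segment(o);    // rejected at compile time: no conversion
    }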

AFP has a GitHub ticket open asking for some of the limits to be raised.