If he was aiming for both functions to do the exact same thing, it could just be a bug or anomaly (not necessarily a true bug, but an unexpected result, which might still be considered a bug, depending on how you look at it).
I wonder if he did a kind of 'addendum' type thing: initially concluding that all that needs to be done to 'extend' memory is to offload data onto 'hosts', then later adding something to disable shadowing or caching of textures in system memory. Or vice versa. It's like they each do a different job but can be combined. Most people just used reduce memory usage (which fired off multiple copies of the host), and that seemed to work. Then later he designed an alternative, but built it into the already existing code structure, creating a distinct difference between the two while the latter still isn't fully independent; neither one does the work of both.
In some of my previous tests (before these), when I was looking for a good balance, I had unsafe memory hacks disabled (just reduce system memory enabled), and everything seemed fairly rock solid with no out-of-memory issues. It was only when I tried falling back on just the unsafe hacks that I hit an out of memory.
Doesn't Boris have a 1-2GB NVidia graphics card? I wonder what the tests would show using one of those; to be stable, though, it would still have to use multiple copies of the host for really, really large texture sets, since there just wouldn't be enough VRAM for an area. Otherwise an out-of-memory crash is pretty much guaranteed.
Here are my initial thoughts on the better fps, if it holds true...
1) By loading everything onto the card, there's no copying of data from the multiple hosts or system memory into the GPU's memory, saving cycles for other threads and processes. This would especially hold true for CPUs with lower IPC or fewer cores (interrupts eating into or compounding the overall time slice).
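To put rough numbers on that first point, here's a toy model of the per-frame cost of staging texture data from host-process memory into VRAM versus having it already resident on the card. The bandwidth and transfer sizes are illustrative assumptions on my part, not measurements of the actual tool:

```python
# Toy model: per-frame cost of shuttling texture data from a host
# process into VRAM vs. having it resident on the card already.
# All figures below are illustrative assumptions, not measurements.

def staging_cost_ms(mb_copied_per_frame: float, copy_bandwidth_gb_s: float) -> float:
    """Milliseconds per frame spent copying texture data to the GPU."""
    return mb_copied_per_frame / (copy_bandwidth_gb_s * 1024) * 1000

# Everything resident in VRAM: nothing to copy each frame.
resident = staging_cost_ms(0, 8)

# Streaming ~64 MB/frame from host memory over an assumed 8 GB/s path.
streamed = staging_cost_ms(64, 8)

print(f"resident: {resident:.2f} ms/frame")
print(f"streamed: {streamed:.2f} ms/frame")
# Against a 16.7 ms (60 fps) frame budget, the copy time is headroom
# lost to the transfer instead of game threads -- and it stings more
# on CPUs with low IPC or few cores, as above.
```

With those assumed numbers the streamed path burns nearly half a 60 fps frame budget just on the copy, which is the kind of gap that could show up as better fps when everything fits on the card.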
2) By reducing the game's virtual memory footprint (reduce system memory usage), there's more room for other kinds of data like lists, arrays, objects and the like (maybe more NPC data, things like that). If the install isn't tanked with tons of scripts, the game has less to juggle, working-set-wise. If that makes sense.
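The second point is really just address-space arithmetic. Assuming a 32-bit large-address-aware process with a roughly 4 GB ceiling (an assumption about this game; the exact limit varies by OS and flags), every MB of textures no longer shadowed in system memory is headroom for everything else. The baseline footprint below is a made-up figure for illustration:

```python
# Address-space arithmetic for point 2. Assumes a 32-bit
# large-address-aware process (~4 GB ceiling); the baseline
# engine/script footprint is a hypothetical number.

ADDRESS_SPACE_MB = 4096          # assumed 32-bit LAA ceiling
engine_and_scripts_mb = 1400     # hypothetical baseline footprint

def headroom_mb(texture_shadow_mb: int) -> int:
    """Address space left for lists, arrays, NPC data, etc."""
    return ADDRESS_SPACE_MB - engine_and_scripts_mb - texture_shadow_mb

print(headroom_mb(1500))  # textures shadowed in system memory
print(headroom_mb(200))   # reduced usage: only a small cache kept
```

Same game, same ceiling, but the reduced-usage case leaves roughly twice the headroom before the allocator starts fighting for space.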
3) Looking at the memory map in test1.jpg, the game figured virtual memory was just about tanked, so it tried to load-balance the data, and those constant 'checks' or scans of the lists were eating into the fps (I'd venture a guess of roughly a 10% hit). That seems a very logical possibility.
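As a back-of-envelope check on that ~10% guess: if the near-full memory map makes the game rescan its allocation lists every frame, that scan time comes straight out of the frame budget. The scan cost below is an assumed value chosen to show the shape of the math, not a measured one:

```python
# Back-of-envelope for point 3: per-frame list scanning stretches the
# frame time, which shows up as an fps drop. Scan cost is assumed.

def fps_with_scan(base_fps: float, scan_ms_per_frame: float) -> float:
    frame_ms = 1000 / base_fps
    return 1000 / (frame_ms + scan_ms_per_frame)

base = 60.0
loaded = fps_with_scan(base, 1.85)   # ~1.85 ms of scanning per frame
print(f"{loaded:.1f} fps, about a {100 * (1 - loaded / base):.0f}% hit")
```

So it only takes a couple of milliseconds of bookkeeping per frame to produce a hit in the ~10% ballpark, which lines up with the guess above.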
When I get a bit more time I can do more testing and see what kinds of results we get. It's np, I just need to kinda slice back and forth between the different setups as we go.