This isn't a question about how to fix that, more a programming-level curiosity about why it happens. I probably won't understand it, but I've been wondering why some games handle speed and modern processors more gracefully than others (say, Theme Park).
I get that it's a lack of "future proofing," but I'm wondering what actually causes it.
Instead of defining timing through specific, hardcoded cycle counts, is the programmer timing through a percentage of resources? Naturally, 60% of a Motorola 68k is going to be significantly different than 60% of an i7.
Is it one of a number of possible things?
As I think I remember understanding it way back: wait cycles. The normal state of part of a Windows application is to wait until it's called to do something, but before that, a program tended to be one single block of code that alternated between everything it had to do in order to appear to be doing several things at once, so each piece waited a certain number of cycles while the others were processed. (I'm sure there's also something to be said about video processing speed, waiting for frames to be drawn.) So programs had to determine the computer's speed and, if it was noticeably faster than necessary, increase the number of wait cycles accordingly, adding moments when the computer does nothing so that the moments when it is doing something still happen at roughly the same frequency.

However, there are limits to those test routines and to the extra cycles programmers thought to add. So when something meant to run off floppies on a 4 MHz PC suddenly finds itself basically loaded into the cache of a 3 GHz one, what would have been a reasonable test routine back then executes too fast to produce useful results... or, if it does produce useful results, it may require adding so many wait cycles at every step that the program can't work with numbers that large (think 8/16-bit integer limits). There are ways to handle all of that, but programmers had to think WAAAAY ahead to implement them.
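If I had to sketch what such a test routine might look like and how it breaks, it'd be something like this. Purely illustrative: time() with one-second ticks stands in for the 18.2 Hz BIOS timer a DOS game would actually poll, and nothing here is taken from any real game.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Calibration idea: count how many empty loop iterations fit into
   one timer tick, then use that count to pad out each frame.  On a
   slow CPU the count fits comfortably in 16 bits; on a modern CPU
   the loop runs many millions of times per tick, so the 16-bit
   counter wraps around over and over and the "calibrated" delay
   comes out essentially random -- and far too short. */
int main(void)
{
    uint16_t count16 = 0;       /* what an old 16-bit game might use */
    uint32_t count32 = 0;       /* the actual number of iterations   */

    time_t start = time(NULL);
    while (time(NULL) == start) /* align to the next tick boundary */
        ;
    time_t tick = time(NULL);
    while (time(NULL) == tick) {
        count16++;              /* silently wraps past 65535 */
        count32++;
    }
    printf("16-bit counter: %u, real count: %u\n",
           (unsigned)count16, (unsigned)count32);
    return 0;
}
```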
Or I may be way off. Just what I think I remember figuring out back in the day, as I said... And writing this after 2h of sleep last night...
This kind of problem is to be expected in really old games; it shouldn't be surprising that a game written for DOS in the '80s would behave this way.
However, I do find it surprising that there are games made in the 'modern', post-Windows 95 era, in the latter part of the '90s, that still can't handle 'fast' CPUs made just a few years later. The most recent example I've encountered was Ubik, a game released in 1998 that I doubt would work well on anything much faster than a Pentium II. It certainly doesn't work on P4s (well, it barely works, but it's basically unplayable), and those first came out in late 2000. That has to be considered lazy programming, or at least an extreme lack of foresight.
I'll leave any technical explanations of this to someone that understands these things better than me.
Giu's Brain Wrote:
What's truly amazing is how some of those early DOS games don't have this problem -- even if they only had a single target CPU when they were written (a 4.77-MHz 8088). The programmers had no reason to 'plan ahead', but their games still run at an acceptable speed even on a modern CPU, as long as your OS supports 16-bit software. Examples: Alley Cat, Orion Software's games.
Two ways to accomplish such a feat (that I can think of):
1. Running a small "benchmark" upon execution to measure CPU performance and using the result to compensate.
2. Synchronizing with something like the video hardware's vertical refresh rate, which is CPU-independent.
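For option 2, the classic DOS trick was to poll the VGA status port directly. A rough sketch of the technique (inp() is the port-read routine DOS compilers like Turbo C provide; this only runs on real DOS/VGA hardware and is here purely to illustrate the idea):

```c
#include <conio.h>   /* inp() on DOS compilers such as Turbo C */

#define VGA_STATUS 0x3DA   /* VGA input status register #1 */
#define VRETRACE   0x08    /* bit 3: set during vertical retrace */

/* Block until the start of the next vertical retrace.  Calling this
   once per frame locks the game to the monitor's refresh rate
   (~60-70 Hz) no matter how fast the CPU is. */
void wait_for_vsync(void)
{
    while (inp(VGA_STATUS) & VRETRACE)
        ;   /* if we're mid-retrace, wait for it to end */
    while (!(inp(VGA_STATUS) & VRETRACE))
        ;   /* then wait for the next retrace to begin */
}
```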
I think it's mostly simple timers. Old programs just go through their code as fast as the CPU can manage, but newer ones basically make a check like "only draw the next frame once a certain number of microseconds has passed; if not enough time has passed, do nothing for a bit, then check again." I read something like that in a simple game programming tutorial once.
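Presumably something like this sketch (now_ms() and update_and_draw_frame() are hypothetical helpers I made up for illustration, not from any particular tutorial):

```c
#include <stdio.h>
#include <time.h>

/* Cap the loop at roughly 60 fps by refusing to run a frame until
   at least 16 ms have passed since the last one. */

static long long now_ms(void)
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);   /* C11 wall-clock time */
    return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
}

static void update_and_draw_frame(void)
{
    /* the game's real update/render work would go here */
}

int main(void)
{
    long long last = now_ms();
    for (int frames = 0; frames < 300; ) {  /* ~5 seconds for the demo */
        if (now_ms() - last < 16)
            continue;       /* not enough time has passed: check again */
        last = now_ms();
        update_and_draw_frame();
        frames++;
    }
    puts("ran 300 frames at ~60 fps");
    return 0;
}
```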
I'd assume it's this as well.
Basically, a program runs a loop of code that's repeated every frame. You can either run it bluntly, regardless of how much time has passed since the last frame, in which case your code will run faster or slower depending on the framerate your computer can reach. Or you can track how much time has passed since the last frame, in which case you can skip an action or perform only a fraction of it.
Say I have a game where the protagonist heals 1 point of health per second.
An old game might assume it would run at 30 frames per second and give the player 1/30th of a health point each loop, i.e. 1 point every 30 loops. If the same game runs at 120 frames per second on a new computer, the player regains health four times as fast.
If you instead keep count of the milliseconds between frames, you can use that to calculate the exact amount of health to add each frame. So even if the framerate fluctuates, you still get 1 point per second in the end. This is basically what all modern games do.
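In code, the delta-time version of the health example looks something like this (a minimal sketch; the names and the fixed frame times are just for demonstration):

```c
#include <stdio.h>

#define REGEN_PER_SECOND 1.0

/* Scale the regeneration by the time the last frame actually took,
   so the rate is 1 point per second at any framerate. */
void update_health(double *health, double dt_seconds)
{
    *health += REGEN_PER_SECOND * dt_seconds;
}

int main(void)
{
    double health = 0.0;
    /* One simulated second at 30 fps vs. 120 fps: same result. */
    for (int i = 0; i < 30; i++)
        update_health(&health, 1.0 / 30.0);
    printf("after 1 s at 30 fps:  %.3f\n", health);

    health = 0.0;
    for (int i = 0; i < 120; i++)
        update_health(&health, 1.0 / 120.0);
    printf("after 1 s at 120 fps: %.3f\n", health);
    return 0;
}
```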
The most modern example I can think of is Grand Theft Auto: San Andreas. Turn off the frame limiter (which caps the game at a choppy 30 Hz) and the game will have far fewer cars and strange loading bugs.
I'll have to remember to try this; I wouldn't have thought a 21st-century game would have issues with 'fast' computers, let alone a big title like this.
Actually, something similar would happen if you were to remove the framerate cap in pretty much every framerate-limited console port out there (a hallmark of crappy coding if there ever was one). Most of the time the animations wouldn't scale right at a speed other than the default one, and you'd get all manner of crazy issues.
In Dark Souls, for instance, the 60 fps mod is known to cause quite a few bugs. Most notably, irregular terrain becomes trickier to navigate, jumps cover slightly shorter distances, and there are at least two spots in the game where sliding down a ladder has you passing through the floor.