This is something I’ve been thinking about for a while. Reading this discourse post spurred me to ask you all about it here.
I’d like to make a replay system for a game. The easy, efficient way to do this is to save all the user input along with the initial state of the game, then replay the input when needed. It’s the same concept as the Elm debugger allowing you to export and import history.
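The idea can be sketched in a few lines. This is a toy Python illustration, not Elm, and all the names (`update`, `record_game`, `replay`) are made up for the example:

```python
def update(state, user_input):
    # Toy deterministic game step: move a 1-D player by the input amount.
    return {"x": state["x"] + user_input}

def record_game(initial_state, inputs):
    """Run the game, keeping only what a replay needs: start state + inputs."""
    state = dict(initial_state)
    for i in inputs:
        state = update(state, i)
    return state, {"initial": dict(initial_state), "inputs": list(inputs)}

def replay(recording):
    """Re-run the recorded inputs from the recorded initial state."""
    state = dict(recording["initial"])
    for i in recording["inputs"]:
        state = update(state, i)
    return state

final, recording = record_game({"x": 0}, [1, 2, -1, 3])
assert replay(recording) == final  # same inputs, same final state
```

As long as `update` is deterministic, the recording is tiny: one model plus the input stream.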
This is great if you’re watching replays on the same device you recorded them on. But if you replay the recording on someone else’s computer, or even in a different browser, variations in floating point rounding error can cause the replay to produce different results from what you saw when recording it.
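A toy illustration of why this matters: a difference of a single ULP (the smallest possible float difference), fed through a nonlinear per-frame update, grows until the two runs visibly disagree. The logistic map here is a stand-in I chose for “physics at 60 fps”, not anything from the original post:

```python
import math

def step(x):
    # Chaotic toy update standing in for per-frame physics (logistic map, r = 3.9).
    return 3.9 * x * (1.0 - x)

a = 0.3
b = math.nextafter(0.3, 1.0)  # differs from a by a single ULP

max_gap = 0.0
for _ in range(200):
    a, b = step(a), step(b)
    max_gap = max(max_gap, abs(a - b))

# Within a couple hundred "frames" the two trajectories visibly diverge.
assert max_gap > 0.01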
One solution is to simply not use floats in any code that needs to be deterministic: stick to integers, use fixed point, or write a package that emulates floats. The problem with this is that you then also have to fork every package that uses floats. If you slip up anywhere, you’ve introduced a very subtle, hard-to-reproduce bug into the replay system.
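For reference, fixed point is straightforward to sketch: store every value as an integer count of fractional units, so the same operations give bit-identical results on every machine. A minimal 16.16 version in Python (the helper names are my own):

```python
SCALE = 1 << 16  # 16.16 fixed point: 16 fractional bits

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def from_fixed(f: int) -> float:
    return f / SCALE

def fmul(a: int, b: int) -> int:
    # Multiply, then shift the extra fractional bits back out.
    # Integer ops are exact, so this is deterministic everywhere.
    return (a * b) >> 16

def fdiv(a: int, b: int) -> int:
    return (a << 16) // b

v = to_fixed(1.5)
assert fmul(v, v) == to_fixed(2.25)  # 1.5 * 1.5 == 2.25 exactly
```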
My question, then: is there any reason this approach won’t work? Is there some easier solution that I have overlooked?
Instead of replaying the Msgs and re-doing the floating point calculations, could you not just save the Models at each step? Then replay by stepping through that sequence of models?
I imagine that could work in some scenarios. In my particular use case, saving every frame of gameplay at 60 fps is probably too taxing on the CPU and would use too much memory. I could compromise: save the model once every second and save user input for the other 59 frames, on the assumption that small floating point inconsistencies won’t make a noticeable difference over that short a time span.
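That keyframe compromise can be sketched like this. Again a Python toy, with a made-up `update` and a hypothetical 60-frame interval:

```python
KEYFRAME_INTERVAL = 60  # one full snapshot per second at 60 fps

def update(model, user_input):
    # Toy deterministic game step.
    return {"x": model["x"] + user_input}

def record(initial, inputs):
    """Return a recording: periodic model snapshots plus the raw input stream."""
    model, keyframes = dict(initial), {0: dict(initial)}
    for frame, i in enumerate(inputs, start=1):
        model = update(model, i)
        if frame % KEYFRAME_INTERVAL == 0:
            keyframes[frame] = dict(model)  # resync point for the replay
    return {"keyframes": keyframes, "inputs": list(inputs)}

def replay(recording):
    """Replay inputs, snapping to the stored model at every keyframe."""
    model = dict(recording["keyframes"][0])
    for frame, i in enumerate(recording["inputs"], start=1):
        model = update(model, i)
        if frame in recording["keyframes"]:
            model = dict(recording["keyframes"][frame])  # absorb any FP drift
    return model
```

Any floating point drift accumulated between keyframes is discarded at the next snapshot, so errors can only grow for at most one second.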
That could work, but these replays will serve a second purpose besides viewing. If a user wants to upload their result to a highscore table, I want to be able to use the replay as proof that they haven’t lied about their score: if the server can replay their input and get the same result, it knows the score is legitimate. If the replay is represented as models, then I have to write some complicated logic to check whether it was possible, within some margin of error, to get from one model to the next.
Ok, so typically the model is much bigger than the msgs. Could you write a function that takes two models and diffs them, in the hope that the diff is typically much smaller than the whole model?
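For flat records the diff idea is simple to sketch. A Python toy (a real Elm model would be nested, so this is only the shape of the idea):

```python
def diff(old, new):
    """Fields of `new` that differ from `old` (flat dicts only)."""
    return {k: v for k, v in new.items() if old.get(k) != v}

def patch(old, d):
    """Apply a diff to reconstruct the newer model."""
    return {**old, **d}

m1 = {"x": 1.0, "y": 2.0, "score": 0}
m2 = {"x": 1.5, "y": 2.0, "score": 0}
d = diff(m1, m2)
assert d == {"x": 1.5}      # far smaller than the whole model
assert patch(m1, d) == m2   # and sufficient to reconstruct it
```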
A diff could work. I don’t know how expensive it would be to diff the model but it’s probably faster than emulating floating point math. Memory-wise it would be larger than storing user input but much smaller than storing the whole model.
Still, I see two drawbacks with this: I need to write code for diffing models, and also code for determining whether it’s possible to get from one model to the next (for server-side validation). These aren’t insurmountable, obviously, but I’d have to maintain them whenever I change something, and if I start a new project the work won’t carry over.
Edit: I guess it would be fair to argue that it’s still less work than modifying the compiler and maintaining that whenever new versions get released. I guess I’ll have to admit I also like the elegance of just being able to store initial state + input and trusting that it will replay the same in any environment.
Perhaps the second would be easier if you also store user input, in addition to the diffs. The server would then step through the game, replaying input to get a server-generated model, apply the diff to get the user-generated model, and check that the two are close enough that any differences can be explained by floating point inaccuracies. When the difference is acceptable, the server would continue replaying input using the user-generated model as the base.
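The validation scheme above might look like this. A Python sketch; the tolerance, the `update` step, and all the names are made up for illustration:

```python
EPSILON = 1e-6  # accepted floating point drift per step (a tuning knob)

def update(model, user_input):
    # Toy float-based game step.
    return {"x": model["x"] + user_input * 0.1}

def validate(initial, inputs, client_models):
    """True if every client model is explainable by FP error alone."""
    model = dict(initial)
    for i, claimed in zip(inputs, client_models):
        model = update(model, i)
        if abs(model["x"] - claimed["x"]) > EPSILON:
            return False          # difference too large: reject the replay
        model = dict(claimed)     # rebase on the client's model and continue
    return True

honest = [{"x": 0.1}, {"x": 0.3}, {"x": 0.6}]
cheat  = [{"x": 0.1}, {"x": 0.3}, {"x": 99.0}]
assert validate({"x": 0.0}, [1, 2, 3], honest)
assert not validate({"x": 0.0}, [1, 2, 3], cheat)
```

Rebasing on the client’s model after each accepted step keeps per-step drift bounded, so `EPSILON` never needs to account for error accumulated over the whole game.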
I’m just thinking of how, all those years back in 1997, the Java Virtual Machine came up with a specification of how FP works on it across all platforms. I also remember there was an option to disable strict FP for a bit more speed.
@malaire Good idea. I imagine this is the most pragmatic way forward. I think I will still try and see how hard it is to mess around with the compiler (mostly because I’m curious), but this is probably what I’ll fall back on.
the strictfp keyword…
@wolfadex That package uses the same approach I have of saving the initial state plus Msgs. As a result it’s also susceptible to inconsistent floating point math causing replays to differ from what was originally recorded.