Despite the hype from gamers, Unreal Engine 5 is far from an automatic “make the game good” button. It does some things well, but it has plenty of shortcomings, including networking and physics, from what I know.
Main shortcomings (for the parts I know and work with):
There are many interconnected features that work best when you use them together. For example, you can use Lumen without Nanite, but performance will be worse, since the virtual texture part of rendering the Lumen scene is a lot more efficient with Nanite geometry. Similar story with VSM.
If you upgrade a project from old UE4, there is A LOT you need to touch. It’s a ticket generator. (But to be fair, any cross-gen engine upgrade would do that to you.)
TSR (their Temporal Super Resolution upscaler) still doesn’t really work out of the box, and most teams I know just use FSR/DLSS/XeSS.
Many features are moving targets even though they were talking points in recent releases, e.g. PCG, Substrate, Smart Objects, World Partition (specifically the Data Layers part); even VSM has only been out of beta since 5.2 (most currently released UE5 games are on version <= 5.1; Immortals of Aveum would be the first to upgrade to 5.2 as far as I know).
Because of the above, the CVars that control these features can be changed or deprecated when a new release goes out, so you can’t really lock in your quality presets if you are still banking on engine upgrades (see the sketch below).
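To make that concrete, a quality preset is mostly just a bundle of CVars set from config. Here is a minimal sketch of what a “high” preset might look like; the CVar names below are from my memory of recent 5.x releases and are exactly the kind of thing that can be renamed or re-scoped in a later version:

```ini
; DefaultEngine.ini: illustrative "high" preset using recent 5.x CVar names
[SystemSettings]
; Lumen GI and Lumen reflections
r.DynamicGlobalIlluminationMethod=1
r.ReflectionMethod=1
; Virtual Shadow Maps
r.Shadow.Virtual.Enable=1
; TSR, upscaling from ~67% internal resolution
r.AntiAliasingMethod=4
r.ScreenPercentage=67
```

If the next engine version renames or removes any of these, every preset that references them has to be revisited.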
Let me talk about the 2 things you mentioned:
Networking: this part is actually the “better” side, though it’s a bit harder to work with, because since UE 4.26 a lot of Fortnite’s improvements have been merged into UE main. So if you are doing a shooter, even with a lot of players, it’s actually better than most other engines on the market. But if you want to do something like a fighting game, or decouple input from the game thread for lower latency, or you have your own “space ship battle” with a custom physics body that isn’t a humanoid capsule, then you have to implement that yourself (by referencing how CharacterMovementComponent is implemented; see the sketch below).
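For a sense of what that re-implementation involves, here is a generic sketch of the client-prediction plus server-reconciliation pattern that CharacterMovementComponent implements for character capsules. None of this is Unreal API; ShipInput, ShipState and SimulateShip are made up for illustration:

```cpp
// Generic client-side prediction + server reconciliation sketch (not UE code).
#include <cstdint>
#include <deque>

struct ShipInput { uint32_t frame = 0; float thrust = 0.f; };
struct ShipState { uint32_t frame = 0; float pos = 0.f; float vel = 0.f; };

// Deterministic step that both client and server run (toy 1-D integration).
ShipState SimulateShip(const ShipState& prev, const ShipInput& in, float dt) {
    ShipState next = prev;
    next.frame = in.frame;
    next.vel += in.thrust * dt;
    next.pos += next.vel * dt;
    return next;
}

class PredictedShip {
public:
    // Client: apply local input immediately and remember it for later replay.
    void TickLocal(const ShipInput& input, float fixedDt) {
        pending_.push_back(input);
        state_ = SimulateShip(state_, input, fixedDt);
    }

    // Client: an authoritative state arrives from the server for an older frame.
    void OnServerState(const ShipState& serverState, float fixedDt) {
        // Drop inputs the server has already consumed.
        while (!pending_.empty() && pending_.front().frame <= serverState.frame)
            pending_.pop_front();
        // Rewind to the server's truth, then replay the unacknowledged inputs.
        state_ = serverState;
        for (const ShipInput& input : pending_)
            state_ = SimulateShip(state_, input, fixedDt);
    }

    const ShipState& State() const { return state_; }

private:
    ShipState state_;
    std::deque<ShipInput> pending_;
};
```

The hard part in practice is making SimulateShip behave identically on client and server, and deciding when a small mismatch can be smoothed instead of snapped.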
Physics: many people talk shit about Chaos Physics, and while it might not be as good as some other current industry standards for single-player games, it will change the landscape of multiplayer games once its performance is up to par. The reason is that Chaos is designed with networking in mind. So yeah, if your game is heavy on player-interactive destruction (i.e. big enough debris doesn’t disappear), you need a more expensive server to run it. There are fewer physics-based multiplayer games because the old implementations are all “bad” (just look at what Battlefield or Halo have done for the past two decades; the slight desync and players/vehicles flying to the moon are a staple of a bad implementation).
X-Wing had a decent approach decades ago, where only player inputs were sent, and everyone independently ran identical simulations. Obvious shortcoming: latency driven by the worst connection, every single frame. But I wonder if rollback netcode would make that tolerable now.
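For reference, the core of rollback is small: keep a snapshot per tick, predict missing remote inputs, and when the real input arrives late, rewind to that tick and re-simulate. A minimal game-agnostic sketch, with all names made up:

```cpp
// Rollback netcode sketch: deterministic lockstep that predicts missing remote
// inputs, then rewinds and re-simulates when the real input finally arrives.
#include <cstdint>
#include <map>
#include <vector>

struct Input { int player = 0; int buttons = 0; };
struct GameState { uint64_t tick = 0; int score[2] = {0, 0}; };  // toy state

// Deterministic toy step; in a real game this is the whole fixed-dt sim.
GameState Step(GameState s, const std::vector<Input>& inputs) {
    for (const Input& in : inputs) s.score[in.player % 2] += in.buttons;
    ++s.tick;
    return s;
}

class RollbackSession {
public:
    // Advance one tick using local input plus *predicted* remote inputs.
    void Advance(const std::vector<Input>& inputs) {
        history_[current_] = state_;   // snapshot so we can rewind to here later
        inputs_[current_] = inputs;
        state_ = Step(state_, inputs);
        ++current_;
    }

    // The real remote input for an old tick finally arrives over the network.
    void ConfirmRemoteInput(uint64_t tick, const Input& real) {
        for (Input& in : inputs_[tick])
            if (in.player == real.player) in = real;   // replace the prediction
        state_ = history_[tick];                       // rewind to that snapshot
        for (uint64_t t = tick; t < current_; ++t)     // re-simulate to the present
            state_ = Step(state_, inputs_[t]);
    }

    const GameState& State() const { return state_; }

private:
    GameState state_;
    uint64_t current_ = 0;
    std::map<uint64_t, GameState> history_;
    std::map<uint64_t, std::vector<Input>> inputs_;
};
```

The catch is that it only works if Step is deterministic and cheap enough to re-run several ticks within one frame, which is exactly where a frame-rate-dependent physics engine falls over.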
Running identical sims requires a setup that decouples physics from the render/game thread (see the fixed-timestep sketch below). Rocket League is a good example, even though it only simulates six player-controlled boxes, with cosmetic car meshes rendered in place. RL is server-authoritative, so your local sim only stands until the server asks your client to sync up (with modern rubber-banding interpolation across frames, basically). Any game with frame-rate-dependent physics (Unreal is still somewhat frame-rate dependent) can’t just run sims on all clients and hope they stay in sync, because their deltas will not be the same. And if they do have a fixed-delta physics engine then, like you mentioned, the slowest client affects how the server can advance the clock. It was a good enough implementation in the pre-dedicated-server era, but for a modern approach with anti-cheat in mind it’s no longer adequate.
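The usual way to get that decoupling is a fixed-timestep accumulator loop: render runs at whatever rate it can, while the sim always advances in identical dt slices. A minimal sketch (not Unreal code; StepPhysics and Render are stand-ins):

```cpp
// Classic fixed-timestep loop: identical inputs through identical dt slices
// give identical results on every machine, regardless of render frame rate.
#include <chrono>

constexpr double kFixedDt = 1.0 / 60.0;  // sim step, independent of render rate

void StepPhysics(double /*dt*/) { /* advance rigid bodies by one fixed step */ }
void Render(double /*alpha*/)   { /* draw, interpolating between the last two sim states */ }

void RunGameLoop(const bool& running) {
    using Clock = std::chrono::steady_clock;
    auto previous = Clock::now();
    double accumulator = 0.0;

    while (running) {
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Consume the elapsed real time in fixed slices.
        while (accumulator >= kFixedDt) {
            StepPhysics(kFixedDt);
            accumulator -= kFixedDt;
        }

        // Render between the last two sim states using the leftover fraction.
        Render(accumulator / kFixedDt);
    }
}
```

Unreal’s tick gets the variable frame delta by default, which is the “kinda frame rate dependent” part mentioned above.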
I think for multiplayer games, there are a couple of approaches currently “trending”:
Server-offloaded simulation: the dedicated server offloads the physics to a bunch of cloud nodes that only run the physics sim, takes the results, and asks the clients to sync up. It’s not really “new”; the implementations are just getting better. Look up Star Citizen’s server meshing shown recently. This can scale up to really big sims, but the required network bandwidth can be prohibitive, because it scales not only with the players but also with the physics bodies that need to be updated. (So yeah, always trade-offs.)
Fixed-time-delta physics engine on the server, like the Rocket League example above. This works fine with a limited number of physics bodies and is really stable, but you can’t do large-scale sims with a fixed-time-delta engine (the more physics bodies, the slower it is to update the sim). This type of sim also has a really limited range in space, because of floating-point error when bodies get far from the origin. On the other hand it’s much more reliable, predictable, and easier to input-buffer, so it’s really responsive as long as you only need to handle a limited number of physics bodies.
Async/multithreaded physics engine: it can still be deterministic if the inputs are the same (see JoltPhysics as an example), and it can scale to a large number of physics bodies since most of them sit in a “sleep” state once they stabilize. Very “mature” for single-player games, but for multiplayer you still need to wrangle the “sync up” part (see the sketch after this list). It is less expensive to update than the first option and supports more physics bodies than the second, but nothing is free: you need a good server (no offload) and your game framework needs to be able to support it (which is what Chaos Physics is working on). It would cost “less” to host than the first option, but significantly more than the second. (Well, if someone used a fixed time delta to sim a huge number of physics bodies, their server would cost way more than async, but no one would do that anyway.) An example would be The Finals, which was in beta recently; from the gameplay videos they seem to mix cosmetic physics running only on clients with server-synced physics.
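On wrangling the “sync up” part: a common approach with a deterministic engine (Jolt-style, inputs-only over the wire) is to have every machine hash its gameplay-relevant state every N ticks and compare checksums, sending a client a full snapshot only when its hash diverges. A rough sketch of the hashing side; BodyState here is illustrative, not Jolt’s API:

```cpp
// Desync detection for a deterministic sim: hash the state each machine holds
// and compare small checksums instead of shipping full state every tick.
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative fixed-point body state (floats can break bit-for-bit equality).
struct BodyState { uint64_t id; int64_t pos[3]; int64_t vel[3]; };

// FNV-1a over the raw body data; any stable hash works.
uint64_t HashBodies(const std::vector<BodyState>& bodies) {
    uint64_t h = 1469598103934665603ull;
    auto mix = [&h](const void* data, std::size_t len) {
        const auto* p = static_cast<const unsigned char*>(data);
        for (std::size_t i = 0; i < len; ++i) { h ^= p[i]; h *= 1099511628211ull; }
    };
    for (const BodyState& b : bodies) mix(&b, sizeof(b));
    return h;
}

// Each client reports {tick, HashBodies(world)} every N ticks. If a client's
// hash disagrees with the server's, only that client receives a full snapshot,
// so steady-state traffic stays at inputs, acks, and small checksums.
```

This keeps the steady-state bandwidth low while the server stays authoritative.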
I definitely have experienced the moving target issues firsthand. It felt like you couldn’t count on them actually maintaining or developing features that were advertised. Unity has the same problem.
Sheer overhead. It’s not a general-purpose game engine, it’s a hotrod FPS engine with all the visuals crankABLE to 11. But if you’re not pushing for a high-end PC/console FPS, adapting it to your game’s flow and perf requirements can be challenging. And it’s not the easiest engine to develop AR/VR or other new tech on, requiring hyper-optimization and throttling lots of the engine’s gorgeous visuals. It’ll be interesting to see where it goes from here though; Unity has fucked themselves eight ways from Sunday on developer confidence, and their own fragmented shit show outside the boardroom didn’t generate lots of confidence either.
The source interview, linked to the relevant timestamp: https://youtu.be/4b_o5ueZRF0?si=IZzMan9sVQOV4Qq6&t=4797
I’m playing with Godot a lot these days lol.