Is GPU acceleration a thing yet with Minecraft server software?
Aquadateh<3 posted this in #questions
21 messages
If it isn’t, what could be the ETA?
I feel like using the GPU for things that aren't too mission-critical (such as chat, AI mob pathfinding, or mob spawning) would be more efficient and would improve performance.
(It would also make the integrated graphics on a CPU actually somewhat useful.)
Scott's Oriole
GPU acceleration is not something you can just slap on top of anything.
So far there have been some experiments with GPU-accelerated chunk generation in C2ME, but that's not of much interest on servers, since generation lag can be avoided entirely by pre-generating your world, which has been standard practice for years.
On clients there is interest, though.
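To make the pre-generation point concrete: assuming a pre-generator such as the Chunky plugin (one common choice; others exist), generating everything within a fixed radius ahead of time looks roughly like this from the server console:

```
chunky center 0 0
chunky radius 5000
chunky start
```

Once that finishes, exploration inside the pre-generated radius never triggers world generation in the first place, so there is no generation lag left to accelerate.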
@Aquadateh<3 If it isn’t, what could be the ETA?
I feel like using the GPU for things that aren't too mission-critical (such as chat, AI mob pathfinding, or mob spawning) would be more efficient and would improve performance.
(It would also make the integrated graphics on a CPU actually somewhat useful.)
I feel like (...) would be more efficient and would improve performance.
You'd be wrong.
@PM_ME_YOUR_REPO You'd be wrong.
Why wouldn’t it?
Is it due to the overhead created?
@Aquadateh<3 Why wouldn’t it?
Is it due to the overhead created?
Palomino
The GPU is a different beast, most likely. Instead of having a few very powerful cores, it has a ton of very weak ones.
So there's barely a use case for GPUs on servers,
since servers don't need to do millions of very small calculations in real time.
That is something graphical software relies on heavily, with few exceptions outside of it; AI training is one (since it needs tons of linear-algebra operations done very fast).
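Palomino's point can be sketched in plain Java (illustrative only, nothing Minecraft-specific): uniform, independent per-element math splits cleanly across many weak cores, while a game-tick-style loop whose next step depends on the previous one cannot be split at all.

```java
import java.util.stream.IntStream;

public class ParallelDemo {
    // Uniform, independent work per element: the shape of workload that
    // maps well onto thousands of weak cores (GPU-style parallelism).
    static long sumOfSquares(int n) {
        return IntStream.range(0, n).parallel()
                .mapToLong(i -> (long) i * i)
                .sum();
    }

    // Branchy, state-dependent work, like a game tick: each step needs the
    // previous step's result, so throwing more cores at it changes nothing.
    static long tickChain(int n) {
        long state = 1;
        for (int i = 0; i < n; i++) {
            state = (state % 2 == 0) ? state / 2 : state * 3 + 1;
        }
        return state;
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(1000)); // 332833500
        System.out.println(tickChain(10));
    }
}
```

Most of what a Minecraft server does each tick looks like the second method, not the first.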
@Aquadateh<3 Why wouldn’t it?
Is it due to the overhead created?
No, it's because none of the tasks you suggested are workloads well suited to a GPU.
You don't seem to have much experience with the differences between what GPUs and CPUs are good at or capable of. GPUs aren't just an untapped spare processor.
I suggest learning more about the architectural differences before conjecturing about potential room for optimization.
@PM_ME_YOUR_REPO No, it's because none of the tasks you suggested are workloads well-suited to a gpu.
You don't seem to have much experience in the differences between what GPUs and CPUs are good at or capable of. GPUs aren't just an untapped processor.
I suggest learning more about the architectural differences before conjecturing about potential room for optimization.
A* pathfinding sounds right up a GPU's alley, from looking up the difference between optimal CPU and GPU tasks.
Parallelizing multiple pathfinding attempts, instead of spending precious CPU cycles doing one attempt at a time.
Also, I should add that my interest in GPU acceleration is not about adding expensive graphics cards, but about fully utilizing the onboard graphics (UHD 630 or 770) inside a CPU.
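For reference, here is what a textbook A* looks like on a tiny grid (a hypothetical sketch, not Mojang's modified version). The open set is a priority queue, and which node gets expanded next depends on costs discovered at runtime, which is exactly the kind of data-dependent branching GPUs handle poorly; batching one query per GPU thread is possible in principle, but every thread then follows a divergent path.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.PriorityQueue;

public class AStarGrid {
    // Toy map: 0 = walkable, 1 = wall.
    static final int[][] GRID = {
        {0, 0, 0, 0},
        {1, 1, 0, 1},
        {0, 0, 0, 0},
        {0, 1, 1, 0},
    };

    // Admissible heuristic for 4-directional movement: Manhattan distance.
    static int heuristic(int r, int c, int tr, int tc) {
        return Math.abs(r - tr) + Math.abs(c - tc);
    }

    // Returns the shortest path length in steps, or -1 if unreachable.
    static int shortestPath(int sr, int sc, int tr, int tc) {
        int rows = GRID.length, cols = GRID[0].length;
        int[][] g = new int[rows][cols];
        for (int[] row : g) Arrays.fill(row, Integer.MAX_VALUE);
        g[sr][sc] = 0;
        // Open set ordered by f = g + h: the core sequential, data-dependent
        // structure that resists a naive GPU mapping. Entries: {row, col, f}.
        PriorityQueue<int[]> open =
            new PriorityQueue<>(Comparator.comparingInt((int[] a) -> a[2]));
        open.add(new int[]{sr, sc, heuristic(sr, sc, tr, tc)});
        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!open.isEmpty()) {
            int[] cur = open.poll();
            int r = cur[0], c = cur[1];
            if (r == tr && c == tc) return g[r][c];
            for (int[] d : dirs) {
                int nr = r + d[0], nc = c + d[1];
                if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
                if (GRID[nr][nc] == 1) continue; // skip walls
                int ng = g[r][c] + 1;
                if (ng < g[nr][nc]) {   // which node expands next depends on
                    g[nr][nc] = ng;     // values only known at runtime
                    open.add(new int[]{nr, nc, ng + heuristic(nr, nc, tr, tc)});
                }
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(shortestPath(0, 0, 3, 3)); // 6
    }
}
```

Note how the queue serializes the search: each pop decides what work exists next, so a thousand GPU threads can't simply each grab a node.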
@Aquadateh<3 A* pathfinding sounds right up a gpu’s alley, from looking up the difference between optimal Cpu and Gpu tasks.
Parallelizing multiple pathfinding attempts, instead of wasting precious Cpu cycles doing one attempt at a time.
Scott's Oriole
Mojang uses a modified version of A* for pathfinding; we'd have to look at whether it is still suitable for a GPU with their additions.
Now, since GPU acceleration requires the use of natives, we'd also have to see whether the performance improvements outweigh the overhead of native calls, plus the maintenance cost, given that natives are platform- and architecture-specific.
And finally, there have been experiments with asynchronous pathfinding for years, which, while not fast, take some load off the main thread. And the performance gain from that is… well, not negligible, but also not a lot; you usually only see an improvement if your server has a ton of entities (which are already heavy by themselves, and the brain is something that cannot be GPU-accelerated).
If there were some incentive, like a guaranteed significant performance gain that outweighs the maintenance cost and native overhead, as was the case with GPU acceleration for chunk generation, then there might be some future for those kinds of features.
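The asynchronous pathfinding idea can be sketched like this (hypothetical names, not any real plugin's API): the expensive query runs on a worker pool, and the tick thread only picks up the completed result on a later tick instead of blocking.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncPathfinding {
    // Small daemon worker pool for path queries, so the tick thread
    // never blocks on them and the pool can't keep the JVM alive.
    static final ExecutorService WORKERS = Executors.newFixedThreadPool(2, r -> {
        Thread t = new Thread(r, "pathfinder-worker");
        t.setDaemon(true);
        return t;
    });

    // Stand-in for an expensive path computation.
    static List<String> computePath(String from, String to) {
        return List.of(from, "waypoint", to);
    }

    // The tick thread submits the query and moves on; the mob keeps using
    // its old path until this future completes and the result is applied.
    static CompletableFuture<List<String>> findPathAsync(String from, String to) {
        return CompletableFuture.supplyAsync(() -> computePath(from, to), WORKERS);
    }

    public static void main(String[] args) {
        System.out.println(findPathAsync("A", "B").join());
    }
}
```

This moves the cost off the main thread rather than eliminating it, which matches the "not negligible, but also not a lot" gain described above.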
@Scott's Oriole Mojang uses a modified version of A* for pathfinding, we’d have to look at if it is still suitable for a gpu with their additions
Now, since gpu acceleration requires the use of natives, we also have to see if the performance improvements outweigh the overhead of natives calls, and cost of maintenance given that natives are platform and architecture specifics
And finally, there have been experiments with asynchronous pathfinding for years, which while not fast, they leverage some load off the main thread. And the performance gain from that is… well not negligible, but also not a lot, you usually get an improvement if your server has a ton of entities (which are already heavy by themselves, and the brain is something that cannot be gpu accelerated)
I'm pretty sure the additions to A* are a subtle change to the math (it is weighted, and does not try all possible paths, for performance reasons).
@Scott's Oriole If there was some incentive, like a guaranteed significant performance gain that outweighs the maintenance cost and native overhead, like it was the case with gpu acceleration for chunk generation, then it might be some future for those kinds of features
I'm not sure there is a need for GPU-accelerated chunk generation on a server; pre-generating the world pretty much solves that, and it's the standard tip given whenever someone brings up chunk generation.
@Aquadateh<3 Oh nvm, you said "like it was the case", my bad.
Scott's Oriole
yh, I said that at the top
@Aquadateh<3 Im pretty sure the additions to A* is a subtle change to the math (it is weighted, and does not try all possible paths for performance reasons)
Scott's Oriole
It accounts for terrain elevation, which means you'd have to provide a heightmap of sorts, which the JVM would have to serialize to make the native call.
And like any natives, it would be platform-dependent.
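On the serialization point: a JNI call can't consume a jagged Java `int[][]` directly, so before any native (GPU or otherwise) pathfinder could run, the heightmap would have to be flattened into a single primitive buffer on every call. A minimal sketch (hypothetical helper; the native side itself is not shown):

```java
public class HeightmapFlatten {
    // Flatten a row-major heightmap into one int[] suitable for handing
    // across a JNI boundary (e.g. via GetIntArrayElements on the C side).
    static int[] flatten(int[][] heightmap) {
        int rows = heightmap.length, cols = heightmap[0].length;
        int[] flat = new int[rows * cols];
        for (int r = 0; r < rows; r++) {
            System.arraycopy(heightmap[r], 0, flat, r * cols, cols);
        }
        return flat;
    }

    public static void main(String[] args) {
        int[] flat = flatten(new int[][]{{64, 65}, {66, 67}});
        System.out.println(java.util.Arrays.toString(flat)); // [64, 65, 66, 67]
    }
}
```

This copy (and the mirror copy for results coming back) is overhead paid per query, which is part of why the native-call cost has to be weighed against the speedup.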
@Aquadateh<3 Im not sure if there is a need for Gpu accelerated chunk Generation on a server, pre generating the world kinda solves that, and is the standard tip given when someone talks about chunk generation.
C2ME has an experimental version with OpenCL acceleration
@ℭ𝔞𝔯𝔬 C2ME has an experimental version with opencl acceleration
Connecticut Warbler
got 3400 cps with an RTX 6000 Blackwell server card