Let’s be honest: I only use Java for Minecraft, so that’s all I’ve tested. But every version, server or client, every launcher: all of them use double (or more) the RAM. In-game the correctly allocated amount shows as used, but on my system double or more is actually allocated. As a result my other apps don’t get enough memory, causing crashes, while the game suffers as well.
I’m not wise enough to know what logs or versions or whatever I should post here as a cry for help, but I’ll update this with anything that’ll help, just tell me. I have no idea how to approach the problem. One idea is to compare against a non-Minecraft Java application, but who has (or knows about) one of those?
@jrgd@lemm.ee’s request:
launch arguments: [-Xms512m, -Xmx1096m, -Duser.language=en] (it’s this little so that the difference shows clearly. I have a modpack that I give 8gb to, and it uses way more as well, iirc around 12)
game version 1.18.2
total system memory 32gb
memory used by the game I’m using KDE’s default system monitor, but here’s Btop as well:
this test was on max render distance with 1gb of RAM; it crashed ofc, but it crashed at almost 4gb. What the hell, that’s four times as much!
I’m on arch (btw) (sry)
When you control the memory allocation for Minecraft, you’re really only configuring how much memory the JVM’s garbage-collected heap may use. That doesn’t include any resources outside of the JVM heap, such as Java itself, OpenGL resources, and everything else that involves native code, system libraries and drivers.
If you have an integrated GPU, all the textures that normally get sent to a GPU may also live in your regular RAM, since integrated GPUs use unified memory. That can inflate the amount of memory Java appears to use.
A browser for example, might not have a whole lot of JavaScript memory used, couple MBs maybe. But the tab itself uses a ton more because of the renderer and assets and CSS effects.
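If you want to see where the extra memory actually goes, the JVM can break it down itself. A sketch, assuming an OpenJDK-based runtime with jcmd available (server.jar stands in for whatever you actually launch):

```shell
# Start the JVM with Native Memory Tracking enabled (costs a little overhead):
java -XX:NativeMemoryTracking=summary -Xms512m -Xmx1096m -jar server.jar &

# Then, once it is up and running, dump a summary: heap, class metadata,
# threads, GC structures, code cache and so on are listed separately.
jcmd $! VM.native_memory summary
```

The gap between the "Java Heap" line and the total is exactly the non-heap memory people in this thread are talking about.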
This is interesting and infuriating, but I don’t think it’s quite right in my scenario, as I also observe the over-usage when running a server from the console. There shouldn’t be any GPU shenanigans with that, I hope.
There are still plenty of native libraries, plus the JVM itself. For instance, the networking library (Netty) uses off-heap memory, which it preallocates in fairly large blocks. The server will spawn quite a few threads, both for networking and for handling async chunk loading and generation, and each of those likely adds multiple megabytes of off-heap memory for stack space, thread-locals, GC state, system memory-allocator state, and I/O buffers. None of this accounts for the memory used by the JVM itself, which includes up to a few hundred megabytes of JIT-compiled code; JIT compiler state such as code-profiling information (in practice a good chunk of opcodes need to track this); method signatures, field layouts, and superclass+superinterface information for every single loaded class (for modern Minecraft, well into the tens of thousands); and the full uncompressed bytecode for every single method in every single loaded class. If you’re using G1 or Shenandoah (you almost certainly are), add the GC card table, which is roughly one byte per card of heap space (by default, one byte per 512 bytes of JVM heap; IIRC it isn’t bit-packed, for performance reasons). I could go on, but you get the picture.
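The card-table overhead is easy to ballpark. A sketch, assuming HotSpot’s default 512-byte card size (this can differ between JVM builds):

```shell
# One byte of card table per 512-byte card of heap:
heap_bytes=$((8 * 1024 * 1024 * 1024))         # e.g. an -Xmx8g heap
card_size=512                                  # assumed HotSpot default
card_table_bytes=$((heap_bytes / card_size))
echo "$((card_table_bytes / 1024 / 1024)) MiB of card table"   # prints "16 MiB of card table"
```

So the card table alone is a small but real fixed cost on top of the heap you configured.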
This is normal behavior. There is much more to the JVM’s memory usage beyond what’s allocated to the heap; there are other memory regions as well. There are additional tuning options for them, but it’s a complicated subject, and if you aren’t actually hitting out-of-memory issues you have to ask whether it’s worth the effort to tune.
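For reference, a sketch of some of those tuning options (the values are illustrative assumptions, not recommendations, and server.jar stands in for whatever you launch):

```shell
# -XX:MaxMetaspaceSize caps class-metadata space,
# -XX:MaxDirectMemorySize caps off-heap NIO buffers (what Netty uses),
# -Xss sets the stack size of each new thread.
java -Xms512m -Xmx1096m \
     -XX:MaxMetaspaceSize=256m -XX:MaxDirectMemorySize=256m -Xss512k \
     -jar server.jar
```

Set any of these too low and the JVM will throw errors instead of quietly using less memory, which is part of why tuning them is rarely worth it.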
That’s disappointing, but it makes sense. At least now I know there isn’t a point in trying. Well, this is the easiest solution xd. Thanks.
Depending on version, and whether it’s modded with content mods, you can easily expect Minecraft to use significantly more memory than what you give its heap. Java processes have a statically/dynamically (within bounds) allocated heap from system memory, as well as memory used in the stack of the process. Additionally, Minecraft might show as using more memory in some process monitors because of external shared libraries being utilized by the application.
My recommendation: don’t allocate more memory to the game than you need to run it without noticeable stutters from garbage collection. If you are running modded Minecraft, one or more mods might be causing memory leaks (or just being large and complex enough to genuinely require large amounts of memory). We might be able to get a better picture if you shared your launch arguments, game version, total system memory, memory used by the game in the process monitor you are using (and modlist, if applicable).
In general, it’s also a good idea to set up and enable ZRAM, and to disable swap if it’s in use.
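Since you’re on Arch, one way to do that is with systemd’s zram-generator. A sketch, assuming the zram-generator package is installed; the size and algorithm values are just illustrative:

```shell
# Sketch, assuming zram-generator is installed (on Arch: pacman -S zram-generator).
cat <<'EOF' | sudo tee /etc/systemd/zram-generator.conf
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
EOF
sudo systemctl daemon-reload
sudo systemctl start systemd-zram-setup@zram0.service
sudo swapoff -a   # disable the existing disk swap, as suggested above
```

With that in place, memory pressure compresses pages into RAM instead of hammering the disk.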
Big modpacks that add a lot of different blocks will also always explode memory usage, because at startup Minecraft pre-bakes all the 3D models of the blocks.
For clarification: is this Vanilla, a Fabric performance-mod pack, a Fabric content modpack, a Forge modpack, etc. that you are launching? If it’s the modpack you describe needing 8gb of heap memory, I wouldn’t be surprised at the Java stack memory taking ~2.7 GiB. If it’s plain vanilla, that memory usage does seem excessive.
This was Vanilla.
Running the same memory constraints on a 1.18 vanilla instance, most of the stack-memory allocation comes from ramping the render distance from 12 chunks to 32. The game only uses ~0.7 GiB of non-heap memory at a sane render distance in vanilla, versus ~2.0 GiB at 32 chunks. I forgot that the render distance no longer caps out at 16 chunks in vanilla. Far render distances like 32 chunks will naturally balloon the stack memory size.
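The jump makes sense if you look at how the loaded area scales. A rough sketch, ignoring vertical sections and simulation distance:

```shell
# Loaded chunk columns grow quadratically with render distance r: about (2r+1)^2.
chunks() { echo $(( (2 * $1 + 1) * (2 * $1 + 1) )); }
chunks 12   # 625 columns
chunks 32   # 4225 columns, nearly 7x as many
```

Every one of those columns carries meshes, entities and bookkeeping, so a roughly 7x jump in loaded area lines up with the ~0.7 GiB to ~2.0 GiB observation above.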
And here you’d think that random game objects aren’t stored on the stack. Well, thanks for the info. Guess there isn’t anything to do, as others have said as well.
It looks like you’re looking at the entire PolyMC process group, so in this case memory usage also includes PolyMC itself, which buffers a chunk of the logs. It shouldn’t be using that much, but it will add a hundred MB or two to your total here as well.
As a side note and a little PSA: if you need to squeeze more overall performance out of MC (and you are playing vanilla or a Fabric modpack), I very much recommend these Fabric mods: Sodium, Lithium, FerriteCore, and optionally Krypton (server-only), LazyDFU, Entity Culling, ImmediatelyFast.
haha, thanks! But I already knew about most of them :D
You could also hard-limit the total (virtual) memory the process will use (the system will hard-kill it if it tries to get more) with this:

systemd-run --user --scope -p MemoryMax=8G -p MemorySwapMax=0 prismlauncher

You would have to experiment with how many gigabytes to specify as the max so that it does not get outright killed. If you remove MemorySwapMax=0, the system will not kill the process but will start aggressively swapping its memory, so if you do have swap it will keep working (and, depending on how slow the swap disk is, start lagging). In my case I have a small swap partition on an M.2 disk (which might not be recommended?), so I didn’t notice any lagging or stutters once it overflowed the max memory.

So in theory, if you are memory-starved and have swap on a fast disk, you could instead use the MemoryHigh setting to create a limit at which systemd starts swapping, without any of the OOM killing (or use both; Max has to be higher than High, obviously).

Terminating it is a very bad idea though. I wouldn’t fancy losing progress and corrupting my world.
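A sketch of the softer variant with MemoryHigh (the 7G/8G values are just illustrative, and prismlauncher stands in for whatever launcher you use):

```shell
# MemoryHigh throttles and swaps the scope above 7G; MemoryMax still
# hard-caps it at 8G as a last resort.
systemd-run --user --scope -p MemoryHigh=7G -p MemoryMax=8G prismlauncher
```

With only MemoryHigh set, the game slows down under pressure instead of being killed, which avoids the lost-progress problem.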
I only know OptiFine. What is Fabric?
Fabric is one of many mod loaders, a la Forge. It’s newer and less bulky than Forge (but afaik it already had its own drama, so now we also have a fork called Quilt; the same goes for Forge and NeoForge).
The mods I’ve specified above can be considered as a suite replacement for the (old) OptiFine.
E: For example, these are all the mod loaders Modrinth (a mod-hosting website, CurseForge alternative) currently lists:
Look for the modpack Additive. It’s based on Fabric with some great mods oriented towards speed and QOL, which replace OptiFine in one package.
Thanks!
I’m sorry but I think that’s just the way Java Edition goes mate, lol.
You see a modpack that recommends 6GB allocated and you think “oh, I’m fine, I have 16”, and the next thing you know you’re almost going OOM. I have recently upgraded to 32GB solely because of ‘All The Mods 9’.
I mean, there’s probably a reason why almost all technical/sandbox games start to lag at some point. Thanks for the info, I’ll just deal with it then.
glibc’s malloc scales the number of memory arenas it creates with the number of CPU cores you have, and the JVM might spawn a shitload of threads. That can increase the memory usage outside of the JVM’s heap considerably. You could try to run the JVM with tcmalloc (which will replace malloc calls for the spawned process). Also, different JVMs bundle different memory allocators; I think Zulu could improve the situation out of the box, and tcmalloc might still help on top of that.

Modded Minecraft is memory hungry. Even normal Minecraft can be. I’ve seen people suggest alternative JVMs (OpenJ9) because they supposedly garbage-collect more aggressively before requesting more memory. I tried this once with Forge, back when it had to patch everything at startup (maybe still the case, idk), and what it actually did was make everything slow to a crawl, because the JVM wanted to collect instead of allocating more and keeping going.
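Two ways to try the allocator angle, as a sketch; the tcmalloc .so path varies by distro (the one below is typical but an assumption), and prismlauncher stands in for however you start the game:

```shell
# Preload tcmalloc so the launcher and the java process it spawns use it
# instead of glibc malloc (assumes gperftools is installed):
LD_PRELOAD=/usr/lib/libtcmalloc_minimal.so prismlauncher

# Alternatively, keep glibc malloc but cap how many arenas it creates:
MALLOC_ARENA_MAX=2 prismlauncher
```

Both environment variables are inherited by child processes, so the JVM the launcher starts picks them up automatically.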
File bug reports with mod authors and Mojang if it is using way more memory than they say it needs, so they update their docs, but this is pretty much par for the course.
Think about it like this. You have a table. You have papers. You’re doing complicated math. You can use more and more of the table for scratch work on your papers. At some point you run out of table space. You consolidate the papers and notes you’ve taken on them. But that’s time you could’ve been doing more “useful” work. Now, what if you had like 90% of the table still full once you did that? You’d honestly need more table.
Minecraft is a great game, but when you push it to the extremes it has difficulty keeping up.
Thanks for the info. Bit sad to know this is the case, but it makes sense. Also thanks for discouraging me from trying other GCs; that spared me a bit of time. I know it’s sometimes a good idea, but it makes sense that it comes at the cost of performance.
I would be more concerned about qbittorrent casually eating 11.1 gigs of ram
lol, I’m pretty sure it’s some bug. Minecraft’s actually eating that much, but qbit isn’t. I honestly don’t know what’s going on. I mean, I am seeding a shit ton, so maybe it just had that much in memory for a second.
What is this? A Minecraft knockoff?
No, it’s just a voxel engine with games and mods. Originally it was, but not so much now. There are some Minetest games that do try to make a Minecraft clone.
What’s the connection with the original post?