Fix: java.lang.OutOfMemoryError: Java Heap Space / GC Overhead Limit Exceeded / Metaspace
The Error
You run your Java application and it crashes with one of these:
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: GC overhead limit exceeded
java.lang.OutOfMemoryError: Metaspace
java.lang.OutOfMemoryError: unable to create new native thread
Or your build tool fails:
> Task :compileJava
java.lang.OutOfMemoryError: Java heap space
FAILURE: Build failed with an exception.
All of these are OutOfMemoryError, but each variant has a different cause and a different fix.
Why This Happens
The JVM divides memory into regions. Each error tells you which region is exhausted.
Java heap space — The heap is where your objects live. When the JVM can’t allocate a new object because the heap is full, you get this error. The default maximum heap size varies by JVM version and system RAM, but it’s often 256 MB or 1/4 of physical memory. Large datasets, in-memory caches, or memory leaks push you past this limit.
GC overhead limit exceeded — The garbage collector is spending more than 98% of CPU time collecting garbage and recovering less than 2% of heap each cycle. The JVM throws this instead of letting your application grind to a halt in an infinite GC loop. It means the heap is nearly full and the GC can’t free enough space.
Metaspace — Metaspace (replacing PermGen in Java 8+) stores class metadata, method bytecode, and constant pools. It grows dynamically by default but can be capped with -XX:MaxMetaspaceSize. This error is common in applications that generate or load many classes at runtime — think heavy use of reflection, dynamic proxies, Groovy/Scala closures, or hot-redeploying web apps in Tomcat.
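To see dynamic class generation in action, here is a minimal sketch using java.lang.reflect.Proxy; each distinct interface combination defines one new class in metaspace, and frameworks that mint many such classes at runtime can fill it up:

```java
import java.lang.reflect.Proxy;

public class ProxyDemo {
    public interface Greeter { String greet(String name); }

    public static Greeter newGreeter() {
        // Proxy.newProxyInstance defines a class at runtime; its metadata
        // lives in metaspace, not the heap.
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                (proxy, method, args) -> "Hello, " + args[0]);
    }

    public static void main(String[] args) {
        Greeter greeter = newGreeter();
        System.out.println(greeter.greet("world"));       // Hello, world
        System.out.println(greeter.getClass().getName()); // generated name, varies by JDK
    }
}
```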
unable to create new native thread — The OS refuses to create more threads. Each thread consumes stack space (default 512 KB to 1 MB per thread depending on the platform). With thousands of threads, you exhaust either the process memory or the OS thread limit (ulimit -u on Linux). This isn’t a heap problem — it’s a thread/native memory problem.
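You can also watch the live thread count from inside the JVM; a small sketch using the standard ThreadMXBean:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountCheck {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Live platform threads (daemon + non-daemon) in this JVM
        System.out.println("live threads:  " + threads.getThreadCount());
        System.out.println("peak threads:  " + threads.getPeakThreadCount());
        System.out.println("total started: " + threads.getTotalStartedThreadCount());
    }
}
```

If the live count keeps climbing toward your OS limit, you have a thread leak, not a heap problem.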
Fix 1: Increase the Heap Size
The most direct fix for Java heap space and GC overhead limit exceeded. Set the -Xmx (maximum heap) and -Xms (initial heap) flags.
At runtime:
java -Xms512m -Xmx2g -jar myapp.jar
-Xms512m — Start with a 512 MB heap. Setting this avoids repeated heap resizing on startup.
-Xmx2g — Allow up to 2 GB heap.
With environment variable:
export JAVA_OPTS="-Xms512m -Xmx2g"
java $JAVA_OPTS -jar myapp.jar
In Spring Boot (application.properties):
Spring Boot doesn’t manage JVM heap settings directly. Pass them when launching the jar:
java -Xmx2g -jar myapp.jar
Or set them in the systemd unit file, Docker entrypoint, or JAVA_TOOL_OPTIONS environment variable:
export JAVA_TOOL_OPTIONS="-Xmx2g"
Note: JAVA_TOOL_OPTIONS is picked up by every JVM that starts in that environment. Use it when you want a blanket default. If your environment variables aren’t being picked up at all, see Fix: Environment variable is undefined.
How to choose a value: Start by monitoring your application’s actual heap usage (see Fix 5). Set -Xmx to roughly 1.5x your application’s peak live data size. Setting it too high wastes memory; setting it too low causes frequent GC pauses or OOM.
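To confirm which limits the JVM actually picked up, you can print them from inside the application; a minimal sketch using the standard Runtime API:

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() reflects -Xmx; totalMemory() is the currently committed heap
        System.out.println("max heap:       " + rt.maxMemory() / mb + " MB");
        System.out.println("committed heap: " + rt.totalMemory() / mb + " MB");
        System.out.println("free in heap:   " + rt.freeMemory() / mb + " MB");
    }
}
```

Run it with different -Xmx values to verify the flag is reaching the JVM at all, which is a common failure mode with wrapper scripts.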
Fix 2: Fix Metaspace OOM
Metaspace grows unbounded by default, so hitting this limit usually means either you’ve explicitly set -XX:MaxMetaspaceSize too low, or you have a classloader leak.
Increase the limit:
java -XX:MaxMetaspaceSize=512m -jar myapp.jar
Check current metaspace usage:
jstat -gcmetacapacity <pid>
Or with verbose GC logging:
java -Xlog:gc*:file=gc.log -jar myapp.jar
Common cause — classloader leaks in web containers:
When you redeploy a web application in Tomcat or Jetty, the old classloader should be garbage collected. But if any object holds a reference to a class from the old classloader, the entire classloader (and all its loaded classes) stays in memory. After several redeploys, metaspace fills up.
Fix this by:
- Restarting the container instead of hot-redeploying in production. Hot-redeploy is for development only.
- Finding the leak. Enable classloader logging:
java -verbose:class -jar myapp.jar
Look for classes that keep getting loaded but never unloaded.
- Checking for common leak triggers: ThreadLocal variables not cleaned up, JDBC drivers registered in the child classloader, logging frameworks holding classloader references.
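For the JDBC-driver trigger, a common mitigation is to deregister drivers when the application shuts down (in a webapp, from a ServletContextListener's contextDestroyed). A minimal sketch using only java.sql; adapt the hook to your container:

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;

public class DriverCleanup {
    // Call this from your shutdown hook or contextDestroyed method
    public static int deregisterAll() {
        int count = 0;
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            try {
                // Dropping the registration removes DriverManager's reference,
                // letting the webapp classloader be collected on redeploy
                DriverManager.deregisterDriver(driver);
                count++;
            } catch (SQLException e) {
                System.err.println("could not deregister " + driver + ": " + e);
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println("deregistered " + deregisterAll() + " drivers");
    }
}
```

Note that getDrivers() only returns drivers visible to the calling classloader, which is exactly what you want in a per-webapp cleanup.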
Fix 3: Fix “Unable to Create New Native Thread”
This isn’t a heap problem. You’re running out of OS threads or per-process memory.
Check the current thread count:
# Count threads for your Java process
ps -o nlwp -p <pid>
# Or list all threads
jstack <pid> | grep -c "nid="
Check OS limits:
ulimit -u # max user processes (includes threads on Linux)
cat /proc/sys/kernel/threads-max # system-wide thread limit
Fixes:
- Reduce thread stack size. Each thread defaults to 512 KB–1 MB of stack. If you have thousands of threads, reduce it:
java -Xss256k -jar myapp.jar
Warning: Setting -Xss too low causes StackOverflowError in threads with deep call stacks. 256k works for most web request handlers. If you have deeply recursive code, test carefully.
- Reduce the number of threads. Audit your thread pools. Common culprits:
  - Unbounded Executors.newCachedThreadPool() — replace with newFixedThreadPool(N)
  - HTTP clients creating a thread per connection — use connection pools
  - Each incoming request spawning a new thread — use a bounded thread pool executor
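To illustrate the first culprit's fix, a minimal sketch replacing an unbounded cached pool with a fixed pool (pool size and tasks here are arbitrary):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BoundedPoolDemo {
    // A fixed pool caps thread creation at nThreads; excess tasks wait in the
    // queue instead of spawning new threads (unlike newCachedThreadPool()).
    public static int sumOnPool(int nThreads, List<Callable<Integer>> tasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        try {
            int sum = 0;
            for (Future<Integer> f : pool.invokeAll(tasks)) {
                sum += f.get();
            }
            return sum;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Callable<Integer>> tasks = List.of(() -> 1, () -> 2, () -> 3);
        System.out.println("sum = " + sumOnPool(4, tasks)); // sum = 6
    }
}
```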
- Increase the OS limit:
# Temporary
ulimit -u 65536
# Permanent — add to /etc/security/limits.conf
youruser soft nproc 65536
youruser hard nproc 65536
- Use virtual threads (Java 21+). Virtual threads are lightweight — you can run millions without hitting OS thread limits:
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
executor.submit(() -> handleRequest());
}
Fix 4: Tune the Garbage Collector
If you’re getting GC overhead limit exceeded, your GC can’t keep up. Increasing heap helps, but switching to a better GC can make a bigger difference.
G1GC (default since Java 9):
java -XX:+UseG1GC -Xmx4g -jar myapp.jar
G1GC handles large heaps better than the parallel collector. Tune its pause-time target:
java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xmx4g -jar myapp.jar
ZGC (experimental in Java 11–14, production-ready since Java 15):
java -XX:+UseZGC -Xmx4g -jar myapp.jar
ZGC is designed for low-latency applications with large heaps (multi-gigabyte to terabyte). GC pauses stay under 1 ms regardless of heap size. If you’re on Java 17+ and getting GC overhead limit exceeded on a large heap, ZGC is often the answer.
Shenandoah GC (available in OpenJDK):
java -XX:+UseShenandoahGC -Xmx4g -jar myapp.jar
Similar low-pause goals to ZGC. Included in mainline OpenJDK since Java 12 (Red Hat also backported it to its Java 8 and 11 builds), but not in Oracle JDK.
Disable the GC overhead limit (not recommended):
java -XX:-UseGCOverheadLimit -Xmx4g -jar myapp.jar
This doesn’t fix anything — it just lets the application keep running in a degraded state until it hits Java heap space instead. Only use it as a temporary diagnostic measure.
Fix 5: Detect Memory Leaks
If increasing the heap only delays the crash, you have a memory leak. Here’s how to find it.
Capture a Heap Dump
Automatically on OOM (always enable this in production):
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/myapp/ -Xmx2g -jar myapp.jar
This writes a .hprof file when the OOM occurs. The file will be roughly the size of your heap.
Manually from a running process:
jmap -dump:live,format=b,file=heapdump.hprof <pid>
The live option triggers a GC first, so you only see reachable objects.
With jcmd (preferred over jmap):
jcmd <pid> GC.heap_dump /tmp/heapdump.hprof
Analyze the Heap Dump
Eclipse MAT (Memory Analyzer Tool) — the gold standard for heap dump analysis:
- Download from eclipse.org/mat
- Open the .hprof file
- Run the Leak Suspects report — it identifies objects retaining the most memory
- Check the Dominator Tree to see which objects are keeping others alive
- Look at the Histogram for classes with unexpectedly high instance counts
VisualVM — lightweight alternative:
jvisualvm
Connect to the running process, go to the Monitor tab, and watch heap usage over time. If the sawtooth pattern keeps climbing (each GC cycle recovers less), that’s a leak.
Monitor Live Memory Usage
jstat — quick GC and heap stats:
# Heap usage every 1 second
jstat -gcutil <pid> 1000
# Output columns: S0, S1, E, O, M, CCS, YGC, YGCT, FGC, FGCT, GCT
Key columns:
- O (Old generation) — if this keeps growing toward 100%, you’re leaking.
- FGC (Full GC count) — if this number is climbing rapidly, the GC is struggling.
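If you'd rather collect these numbers from inside the application (for example, to export them to your own metrics), the standard management beans expose the same data; a small sketch:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class GcStats {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        // used vs max is the same picture jstat's E and O columns give you
        System.out.printf("heap used: %d MB of %d MB%n",
                heap.getUsed() >> 20, heap.getMax() >> 20);

        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // A rapidly climbing collection count means the GC is struggling
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```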
jcmd — detailed memory summary:
jcmd <pid> VM.native_memory summary
Note: Native memory tracking must be enabled at startup:
java -XX:NativeMemoryTracking=summary -jar myapp.jar
Fix 6: Common Memory Leak Patterns
Unclosed resources
// LEAK: InputStream never closed if an exception occurs between open and close
InputStream is = new FileInputStream("large-file.dat");
// ... process file
is.close();
// FIX: Use try-with-resources
try (InputStream is = new FileInputStream("large-file.dat")) {
// ... process file
} // Automatically closed, even on exception
This applies to database connections, HTTP clients, streams, and any AutoCloseable resource.
Static collections that grow forever
// LEAK: Map grows forever, never cleared
public class EventCache {
private static final Map<String, Event> cache = new HashMap<>();
public static void addEvent(String id, Event event) {
cache.put(id, event); // Never removed
}
}
// FIX: Use a bounded cache or WeakHashMap
private static final Map<String, Event> cache = new LinkedHashMap<>(100, 0.75f, true) {
@Override
protected boolean removeEldestEntry(Map.Entry<String, Event> eldest) {
return size() > 1000;
}
};
// Or use Caffeine/Guava cache with eviction
private static final Cache<String, Event> cache = Caffeine.newBuilder()
.maximumSize(1000)
.expireAfterWrite(Duration.ofMinutes(10))
.build();
Listeners and callbacks not deregistered
// LEAK: listener holds a reference to MyComponent, preventing GC
eventBus.register(myComponent);
// ... myComponent is "removed" but eventBus still references it
// FIX: Always deregister
eventBus.unregister(myComponent);
Strings from substring() (Java 6 and earlier)
In Java 6, String.substring() shared the underlying char array of the original string. If you held a small substring of a huge string, the entire char array stayed in memory. This was fixed in Java 7u6 — substring() now copies the relevant characters.
Large ThreadLocal values
// LEAK in thread pool contexts: ThreadLocal values persist for the thread's lifetime
private static final ThreadLocal<byte[]> buffer = ThreadLocal.withInitial(() -> new byte[1024 * 1024]);
// FIX: Always remove after use
try {
byte[] buf = buffer.get();
// ... use buffer
} finally {
buffer.remove();
}
In servlet containers and thread pools, threads are reused. A ThreadLocal value set during one request survives to the next. If you keep setting large values without calling remove(), memory accumulates.
Fix 7: Gradle / Maven Build OOM
Gradle
Gradle runs its own JVM (the daemon) and may fork additional JVMs for compilation and testing.
Increase Gradle daemon heap in gradle.properties:
org.gradle.jvmargs=-Xmx4g -XX:+HeapDumpOnOutOfMemoryError
Increase heap for the compile task:
tasks.withType(JavaCompile).configureEach {
options.forkOptions.memoryMaximumSize = '2g'
}
Increase heap for tests:
tasks.withType(Test).configureEach {
maxHeapSize = '2g'
}
Reduce parallelism if memory is tight:
# gradle.properties
org.gradle.parallel=true
org.gradle.workers.max=2
Fewer parallel workers means less peak memory usage.
Maven
Increase Maven heap:
export MAVEN_OPTS="-Xmx2g"
mvn clean install
Or in .mvn/jvm.config (per-project, checked into version control):
-Xmx2g
-XX:+HeapDumpOnOutOfMemoryError
Increase Surefire/Failsafe test heap:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<argLine>-Xmx1g</argLine>
</configuration>
</plugin>
Make the Maven compiler fork with its own heap:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<fork>true</fork>
<meminitial>512m</meminitial>
<maxmem>2g</maxmem>
</configuration>
</plugin>
Fix 8: Java in Docker Containers
Java inside Docker has a notorious history of ignoring container memory limits and allocating heap based on the host’s total RAM. This leads to the container getting OOM-killed by the kernel.
Modern JVMs (Java 10+, or Java 8u191+)
Container detection is on by default since Java 10. The JVM reads the cgroup memory limit and sizes the heap accordingly.
Verify it’s active:
java -XX:+PrintFlagsFinal -version 2>&1 | grep UseContainerSupport
You should see UseContainerSupport = true.
Set the heap as a percentage of the container limit:
java -XX:MaxRAMPercentage=75.0 -XX:InitialRAMPercentage=50.0 -jar myapp.jar
MaxRAMPercentage=75.0 — Use up to 75% of the container’s memory for heap. The remaining 25% is for metaspace, thread stacks, native memory, and the OS.
InitialRAMPercentage=50.0 — Start at 50% to reduce early GC overhead.
In Dockerfile:
FROM eclipse-temurin:21-jre
COPY myapp.jar /app/myapp.jar
ENTRYPOINT ["java", "-XX:MaxRAMPercentage=75.0", "-XX:+HeapDumpOnOutOfMemoryError", "-jar", "/app/myapp.jar"]
Old JVMs (Java 8 before u191)
Container detection doesn’t exist. The JVM sees the host’s total RAM, not the container’s limit. If the host has 64 GB RAM and the container has a 512 MB limit, the JVM might try to allocate a 16 GB heap.
Fix: Set heap explicitly with -Xmx:
ENV JAVA_OPTS="-Xmx384m -Xms256m"
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar /app/myapp.jar"]
Best fix: Upgrade to Java 11+ or at minimum Java 8u191.
Container getting OOM-killed despite correct heap settings
The heap isn’t the only memory consumer. Total JVM memory includes:
- Heap (-Xmx)
- Metaspace
- Thread stacks (each thread × -Xss)
- Direct byte buffers (NIO)
- JIT compiler code cache
- Native memory from JNI libraries
If your container limit is 1 GB and you set -Xmx1g, you’ll get OOM-killed because all the non-heap memory pushes total usage above 1 GB. If you’re also seeing Docker permission denied errors when trying to run your container, fix those first.
Rule of thumb: Set -Xmx to no more than 75% of the container memory limit. For a 1 GB container, use -Xmx768m.
For more on Docker container OOM issues, see Fix: Docker Container Exited (137) OOMKilled / Killed Signal 9.
Still Not Working?
Heap dump is too large to open
If your heap dump is 8+ GB, Eclipse MAT may itself run out of memory. Increase MAT’s own heap by editing MemoryAnalyzer.ini:
-Xmx8g
Alternatively, run MAT on a machine with more RAM. (The older jhat tool offered command-line analysis, but it was removed in JDK 9.)
You can also generate a histogram without a full dump:
jmap -histo <pid> | head -30
This shows the top classes by instance count and byte size — often enough to identify the leak.
OOM during class data sharing (CDS) or AppCDS
If you use CDS archives (-Xshare:dump), the shared archive may not fit in the default shared memory space. Increase it:
java -XX:SharedArchiveSize=256m -Xshare:dump
Sudden OOM without gradual heap growth
If heap usage looks fine and the OOM hits suddenly, check for large single allocations:
- Loading an entire large file into a byte[] or String
- Deserializing a massive JSON/XML payload
- A ResultSet fetching millions of rows into memory
Fix by using streaming APIs: InputStream, JsonParser (Jackson streaming), SAX/StAX for XML, and setFetchSize() on JDBC statements.
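As a sketch of the streaming idea using only the standard library: process the file in fixed-size chunks so memory use stays at the buffer size regardless of file size (the temp file and buffer size here are arbitrary):

```java
import java.io.BufferedInputStream;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamingRead {
    // Unlike Files.readAllBytes(), this never holds more than one
    // buffer's worth of the file in memory at a time.
    static long countBytes(Path file) throws Exception {
        long total = 0;
        try (InputStream in = new BufferedInputStream(Files.newInputStream(file))) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                // ... process buffer[0..read) here ...
                total += read;
            }
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("demo", ".dat");
        Files.write(tmp, new byte[100_000]); // stand-in for a large file
        System.out.println("bytes processed: " + countBytes(tmp));
        Files.delete(tmp);
    }
}
```

The same shape applies to Jackson's streaming JsonParser and to JDBC cursors with setFetchSize(): pull a bounded chunk, process it, discard it.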
Off-heap / direct memory OOM
java.lang.OutOfMemoryError: Direct buffer memory
This means ByteBuffer.allocateDirect() has exhausted direct memory. Increase it:
java -XX:MaxDirectMemorySize=512m -jar myapp.jar
Common in applications that use Netty, memory-mapped files, or NIO heavily. Check for direct buffers that aren’t being released.
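For context, a small sketch showing the difference between direct and heap buffers (the sizes here are arbitrary):

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // Allocated in native memory, outside the Java heap; counts
        // against -XX:MaxDirectMemorySize, not -Xmx
        ByteBuffer direct = ByteBuffer.allocateDirect(16 * 1024 * 1024);
        System.out.println("direct:  " + direct.isDirect() + ", capacity " + direct.capacity());

        // Heap buffer for comparison: backed by a byte[] on the heap
        ByteBuffer heap = ByteBuffer.allocate(1024);
        System.out.println("on-heap: " + heap.isDirect());
    }
}
```

This is why an application can OOM on direct buffer memory while jstat shows the heap mostly empty.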
GC pauses causing timeouts, not OOM
If your application doesn’t crash but becomes unresponsive, GC pauses might be the issue. Enable GC logging:
# Java 9+
java -Xlog:gc*:file=gc.log:time,uptime:filecount=5,filesize=10m -jar myapp.jar
# Java 8
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log -jar myapp.jar
Analyze the logs with GCEasy or GCViewer to identify long pauses, frequent full GCs, or promotion failures.
IntelliJ IDEA / Eclipse IDE OOM
Your IDE runs on a JVM too. If your IDE itself is throwing OOM:
IntelliJ IDEA:
Go to Help > Change Memory Settings and increase the maximum heap (default is 2048 MB). Or edit the idea.vmoptions file:
-Xmx4g
Eclipse:
Edit eclipse.ini:
-Xms512m
-Xmx4g
Check for file watchers and inotify limits
On Linux, Java applications using WatchService for file monitoring can hit inotify limits, especially in development environments with large source trees. See Fix: ENOSPC: System Limit for Number of File Watchers Reached for how to increase these limits.
Related Articles
Fix: java.lang.ClassNotFoundException (Class Not Found at Runtime)
How to fix Java ClassNotFoundException at runtime by resolving missing dependencies, classpath issues, Maven/Gradle configuration, JDBC drivers, classloader problems, and Java module system errors.
Fix: Docker Volume Permission Denied – Cannot Write to Mounted Volume
How to fix Docker permission denied errors on mounted volumes caused by UID/GID mismatch, read-only mounts, or SELinux labels.
Fix: Docker Pull Error – Image Not Found or Manifest Unknown
How to fix Docker errors like 'manifest for image not found', 'repository does not exist', or 'pull access denied' when pulling or running images.