Java OutOfMemoryError (OOME) When Using the Solr Search Engine


Solr is an open-source search technology used by Xinet to enable advanced queries; it is built on Apache Lucene and runs on the Java platform.

A Java OutOfMemoryError (OOME) can occur in several scenarios while using Solr Search. In particular, working with a very large directory of assets (especially with an increased maxGramSize directive, for example) can cause the Java process to run out of memory.

The error may be seen during a search, an indexer job, garbage collection (GC), processing of separate facet fields per core, and so on. Symptoms include unexpectedly long Solr query execution times, slow startup, long GC pauses, a sudden spike in fieldCache memory, and similar behavior.

For Java heap size issues, look for OOME entries in the Solr log file located under $SOLR_HOME/logs, typically server/logs/solr.log. If logging is configured for a non-default location, check the files under your Solr root/resources directory and look up the solr.log property to find where the Solr log files are written.

$ tail -f server/logs/solr.log

The error is logged as follows:

Mar 7, 2018 10:36:47 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
        at org.apache.solr.core.SolrCore.getSearcher(...)
        at org.apache.solr.update.DirectUpdateHandler2.commit(...)
        ...
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(...)
        at$
        at Source)
Caused by: java.lang.OutOfMemoryError: Java heap space
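To spot these entries quickly, you can scan the log for the error string. A minimal sketch, assuming the default server/logs/solr.log location; adjust the path for your install:

```shell
# Count OutOfMemoryError occurrences, then show a little context around each.
LOG=server/logs/solr.log
grep -c "OutOfMemoryError" "$LOG"
grep -B 1 -A 2 "OutOfMemoryError" "$LOG"
```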

A good way to see which JVM settings your server is using, along with other useful information, is the admin RequestHandler, solr/admin/system. This request handler displays a wealth of server statistics and settings.
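As a sketch, the handler can be queried with curl. The host, port, and exact handler path here are assumptions for a default standalone install; newer Solr releases expose this endpoint as solr/admin/info/system:

```shell
# Host/port and handler path are assumptions for a default local install;
# older setups may use solr/admin/system, newer ones solr/admin/info/system.
curl -s "http://localhost:8983/solr/admin/info/system?wt=json"
```

The response includes a jvm section listing the memory settings and JVM arguments currently in effect.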

Optimizing the JVM is a key factor in getting the most out of the Solr Search Engine. When starting the engine, administrators can also pass additional parameters to Java. This article describes how to tune the initial and maximum Java heap sizes to prevent OOMEs.


Environment
  • Xinet v18.1.x, 19.0, 19.1.x, 19.2.x
  • RHEL 6 x64, RHEL 7 x64, OS X 10.10.5 (Yosemite) - macOS 10.12 (Sierra) x64, CentOS 6.5 - 7.2 x64
  • Java 1.7, 1.8
  • SOLR 6.2


Prerequisites
  • Remote access to the Xinet Server.
  • Administrative access to the Xinet Server (via nativeadmin).


Resolution
  1. To increase the Java memory size, open the Administration view in Xinet and navigate to Database > Admin > Searching.
    Note: The video attached in this article provides a demonstration of this step.
  2. Click Pause Solr Process.


  3. Open the solr.conf file:
    • Unix: /usr/etc/venture/var/solr.conf
    • Windows: C:\Program Files (x86)\Xinet\Venture\var\solr.conf
  4. Set the javaparams value. To set the initial Java heap size to 256 MB and the maximum heap size to 512 MB, add the following:
    javaparams=-Xms256m -Xmx512m
    1. You want a heap that is large enough so that you do not have OOME exceptions and problems with constant garbage collection, but small enough that you are not wasting memory or running into huge garbage collection pauses. Do not follow the advice that tells you to use a specific fraction (one quarter, one half, etc.) of your total memory size for the heap. You can quickly end up with a heap size that is too small or too large by following that advice.
    2. The Java Development Kit (JDK) includes two GUI tools, jconsole and jvisualvm, which can connect to a running Solr instance and show how much heap is used over time.

    3. In jconsole, the memory chart typically shows a sawtooth pattern: memory usage climbs to a peak, then garbage collection frees up some memory. How many collections is too many depends on your query/update volume.
    4. One possible rule of thumb: look at the number of queries per second Solr is handling. If the number of garbage collections per minute exceeds that value, the heap might be too small. It might also be perfectly fine, because well-tuned garbage collection can perform a large number of very quick collections.
    5. If you let your Solr server run with a high query and update load, the low points in the sawtooth pattern will represent the absolute minimum required memory. Try setting your max heap between 125% and 150% of this value, then repeat the monitoring to see if the low points in the sawtooth pattern are noticeably higher than they were before, or if the garbage collections are happening very frequently. If they are, repeat the test with a higher max heap.
    6. To increase memory performance for Xinet server sessions using Oracle Java 2 Standard Edition v1.8, add the following to the solr.conf
    7. The following logging levels are available for Java 1.8:
      • SEVERE (highest value, only most critical error messages are displayed)
      • WARNING
      • INFO
      • CONFIG
      • FINE
      • FINER
      • FINEST (lowest value, the most detailed output possible)
      • OFF (disables the logging)
  5. Save your updates.
  6. In the Administration view, click Database > Admin > Searching and then click Start Solr.
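The 125% to 150% sizing rule of thumb from step 4 comes down to simple arithmetic. A sketch, where the 300 MB sawtooth low point is a made-up example value, not a recommendation:

```shell
# Hypothetical observed minimum (the sawtooth low point) in MB.
low_mb=300
# Candidate -Xmx values at 125% and 150% of the observed minimum.
echo "-Xmx lower bound: $((low_mb * 125 / 100))m"
echo "-Xmx upper bound: $((low_mb * 150 / 100))m"
```

With a 300 MB low point this suggests trying a max heap between roughly 375m and 450m, then repeating the monitoring described above.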
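After restarting Solr, it is worth confirming that the new heap flags actually reached the Java process. A sketch, assuming a Unix host; the ps options may differ per OS:

```shell
# Show only the -Xms/-Xmx flags from the running Java command line.
# '--' keeps grep from treating the pattern's leading '-' as an option.
ps -ef | grep '[j]ava' | grep -o -- '-Xm[sx][0-9]*[mg]'
```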



