Troubleshooting “Too Many Open Files”

In Linux-based systems such as RELIANOID appliances, each network connection, socket, or open file consumes a file descriptor. The operating system enforces limits on the number of file descriptors that can be opened simultaneously to prevent resource exhaustion.

If a process reaches its file descriptor limit, it will no longer be able to open additional files or sockets, and the system may generate the error:

Too many open files

This condition can affect system functionality, causing services to stop responding or fail to establish new connections until the affected process or system is restarted.
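When the error appears, it helps to identify which processes are holding the most descriptors. A minimal, read-only sketch using the /proc filesystem (counting entries under each process's fd directory; reading other users' processes may require root):

```shell
# List the five processes with the most open file descriptors.
# Each /proc/<pid>/fd entry corresponds to one open descriptor.
for pid in /proc/[0-9]*; do
  n=$(ls "$pid/fd" 2>/dev/null | wc -l)
  echo "$n ${pid#/proc/}"
done | sort -rn | head -5
```

The first column is the descriptor count, the second the PID, so a process approaching its limit stands out at the top of the list.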

This article explains how file descriptor limits work in RELIANOID systems and how administrators can adjust them when necessary.

How File Descriptor Limits Work #

Linux manages file descriptors through two main types of limits:

  • System-wide limits
  • Per-process limits

Both must be configured appropriately to ensure stable operation.

System-Wide File Descriptor Limits #

The Linux kernel maintains a global limit on the number of file descriptors that can be allocated across the entire system. This value is controlled by the kernel parameter: fs.file-max

You can view the current value using:

sysctl fs.file-max

Example output:

fs.file-max = 1000000

This means the system can allocate up to one million file descriptors in total. If this limit is reached, new file handles cannot be created until existing ones are released.
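The kernel also reports how close the system is to this limit through /proc/sys/fs/file-nr, which shows three fields: allocated file handles, free allocated handles, and the system-wide maximum (the same value as fs.file-max):

```shell
# Fields: allocated handles, free allocated handles, system-wide maximum.
cat /proc/sys/fs/file-nr
```

If the first field approaches the third, the global limit is nearly exhausted and raising fs.file-max (as shown below) may be warranted.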

If required, administrators can temporarily increase the value using:

sysctl -w fs.file-max=2000000

To make the change persistent across reboots, add the parameter to:

/etc/sysctl.conf

Example:

fs.file-max = 2000000

Apply the configuration with:

sysctl -p

Per-Process File Descriptor Limits #

In addition to the global system limit, each process has its own limit on the number of files it can keep open. This limit is controlled using ulimit.

You can check the current limit with:

ulimit -n

Example output:

100000

This indicates that a single process can open up to 100,000 file descriptors.

If a process reaches this limit, it will generate the “Too many open files” error even if the system-wide limit has not been reached.
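For a specific process, both its per-process limit and its current descriptor usage can be read from /proc. A short sketch, using the current shell's PID ($$) as an illustrative target (substitute the PID of the affected service):

```shell
# Show a process's "Max open files" limit and how many descriptors
# it currently has open. PID=$$ is illustrative; replace as needed.
PID=$$
grep 'Max open files' "/proc/$PID/limits"
ls "/proc/$PID/fd" | wc -l    # descriptors currently in use
```

Comparing the two values shows how much headroom the process has before it starts failing to open files or sockets.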

File Descriptor Limits in RELIANOID #

RELIANOID services load their per-process file descriptor limits from the following configuration file:

/etc/profile/relianoid.sh

This ensures that RELIANOID components run with the required limits to handle large numbers of network connections.

However, third-party processes running on the same system may not automatically inherit these limits. Monitoring agents, external tools, or custom services may therefore encounter file descriptor exhaustion if their limits are lower than required.

Adjusting Limits for Third-Party Processes #

If a third-party application requires a higher file descriptor limit, it can be configured in:

/etc/security/limits.conf

For example:

* soft nofile 200000
* hard nofile 200000

This configuration increases the allowed number of open files for all users.

Alternatively, limits can be applied to a specific user:

ncpa soft nofile 200000
ncpa hard nofile 200000

After applying the changes, restart the affected service or process so that it loads the updated limits.
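Note that limits.conf is applied by PAM at session start, so the new values take effect only in sessions opened after the change. To verify the effective limits from a fresh shell (run as the user in question, e.g. the ncpa user from the example above):

```shell
# Show the effective per-process limits in the current session:
# -Sn prints the soft open-files limit, -Hn the hard limit.
ulimit -Sn
ulimit -Hn
```

If the service is started by systemd rather than through a login session, limits.conf is typically not applied to it; in that case the limit is usually set via the unit's LimitNOFILE directive instead.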

Conclusion #

The “Too many open files” error occurs when a process reaches its maximum allowed number of file descriptors and is unable to open additional files or network sockets. When this limit is exceeded, services may fail to accept new connections or operate normally.

To avoid this condition, administrators should ensure that both the system-wide file descriptor limit (fs.file-max) and the per-process limit defined by ulimit are properly configured.

In RELIANOID systems, service-related limits are loaded from /etc/profile/relianoid.sh. However, third-party applications running on the same appliance may require additional configuration through /etc/security/limits.conf so that they inherit appropriate file descriptor limits.

By maintaining adequate limits and reviewing system configuration when necessary, administrators can help ensure stable operation and prevent service interruptions in environments handling a high number of concurrent connections.

 
