NodeJS Error: ENOMEM, Not Enough Memory - Understanding, Fixing, and Preventing
Introduction
Node.js, known for its efficiency and scalability, occasionally runs into memory-related roadblocks, one of which is the "ENOMEM: Not enough memory" error. This error surfaces when a Node.js application demands more memory than is available, leading to performance degradation or outright crashes. In this guide, we'll explain the error, walk through common scenarios where it occurs, provide fixes, and discuss best practices to prevent it. As a quick reference, here are the most common fixes before we dig into the details:
const fs = require('fs');
const readline = require('readline');

// ✅ Use streams to process large files line by line
async function processLargeFile(filepath) {
  const stream = fs.createReadStream(filepath);
  const rl = readline.createInterface({ input: stream });
  let lineCount = 0;
  for await (const line of rl) {
    // Process one line at a time
    lineCount++;
  }
  console.log(`Processed ${lineCount} lines`);
}

// ✅ Increase Node.js memory limit if needed
// node --max-old-space-size=4096 app.js

// ✅ Use pagination for database queries
async function processInBatches(collection, batchSize = 1000) {
  let skip = 0;
  while (true) {
    const batch = await collection
      .find({})
      .skip(skip)
      .limit(batchSize)
      .toArray();
    if (batch.length === 0) break;
    // Process batch...
    skip += batchSize;
  }
}
Understanding the Error
Strictly speaking, "ENOMEM" is an operating-system error code: a system call (allocating a buffer, spawning a process, reading a file) asked for memory the OS could not provide. Closely related is V8's heap limit: Node.js, built on the V8 engine, caps the JavaScript heap by default (typically around 1.5 to 2 GB, depending on the Node.js version and platform), and exceeding it crashes the process with "JavaScript heap out of memory". Both failures share the same root cause, an application demanding more memory than is available, so this guide treats them together.
// Loading a huge file into memory
const fs = require('fs');
const data = fs.readFileSync('huge-file.csv'); // 10GB file
// Error: ENOMEM: not enough memory

// Processing large arrays in memory
const items = [];
for (let i = 0; i < 1e9; i++) {
  items.push({ id: i, data: 'x'.repeat(1000) });
}
// JavaScript heap out of memory
Diving Deeper
Memory issues in Node.js can be subtle and complex. They often stem from inefficient code, memory leaks, or handling large datasets. Understanding the underlying causes is key to addressing and preventing them.
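One practical way to spot a leak before it crashes the process is to sample the heap over time and watch for monotonic growth. A rough sketch (the three-sample threshold is illustrative, not a standard):

```javascript
// Sample process.memoryUsage().heapUsed periodically; a heap that only
// ever grows across samples is a crude but useful leak signal
function sampleHeap(samples) {
  samples.push(process.memoryUsage().heapUsed);
  return samples;
}

function looksLikeLeak(samples) {
  if (samples.length < 3) return false;
  return samples.every((v, i) => i === 0 || v > samples[i - 1]);
}

// In a real app: setInterval(() => sampleHeap(samples), 60000);
```

A healthy app's heap rises and falls with garbage collection, so expect sawtooth patterns; only sustained growth across many samples is worth investigating.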
Common Scenarios and Fixes with Example Code Snippets
Scenario 1: Memory Leak in Long-Running Processes
Problematic Code:
const cache = [];
setInterval(() => {
  // Continuously adding data without cleanup
  const data = Buffer.alloc(1024 * 1024); // 1MB per tick
  cache.push(data);
  console.log('Cache size:', cache.length);
}, 100); // ENOMEM after cache fills available memory
Explanation: Here, a 1MB buffer is pushed into the cache on every tick and never removed, so the array grows without bound until memory is exhausted.
Solution:
const MAX_CACHE_SIZE = 100; // Max 100MB
const cache = [];
setInterval(() => {
  if (cache.length >= MAX_CACHE_SIZE) {
    cache.shift(); // Remove oldest entry
  }
  const data = Buffer.alloc(1024 * 1024);
  cache.push(data);
  console.log('Cache size:', cache.length);
}, 100);

// Monitor memory usage
setInterval(() => {
  const usage = process.memoryUsage();
  console.log('Heap used:', Math.round(usage.heapUsed / 1024 / 1024), 'MB');
}, 5000);
Explanation: Capping the cache size and evicting the oldest entries keeps memory usage bounded, while the second interval logs heap usage so growth is visible before it becomes a crash.
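The shift-based cap above evicts the oldest insertion regardless of how often it is read. An LRU (least recently used) policy keeps hot entries alive instead. A minimal sketch exploiting a Map's insertion order (the class name and capacity are illustrative):

```javascript
// Minimal LRU cache: a Map iterates keys in insertion order, so the
// first key is always the least recently used entry
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // Re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry (first key in the Map)
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```

In production you would likely reach for a maintained package such as lru-cache rather than rolling your own, but the principle is the same: a hard capacity plus an eviction policy.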
Scenario 2: Loading an Entire Large File into Memory
Problematic Code:
const fs = require('fs');
// Reading entire huge file into memory at once
const data = fs.readFileSync('10gb-file.csv', 'utf8'); // ENOMEM
console.log(data.length);
Explanation: Loading a large file into memory all at once can exhaust the available memory.
Solution:
const fs = require('fs');
const readline = require('readline');

// Stream the file line by line instead
const stream = fs.createReadStream('10gb-file.csv');
const rl = readline.createInterface({ input: stream });
let lineCount = 0;
rl.on('line', (line) => {
  lineCount++;
  // Process each line without loading entire file
});
rl.on('close', () => {
  console.log('Total lines:', lineCount);
});
Explanation: Streaming the file allows you to process it in smaller chunks, managing memory usage effectively.
Scenario 3: Spawning Too Many Child Processes
Problematic Code:
// Creating too many child processes
const { fork } = require('child_process');
for (let i = 0; i < 10000; i++) {
  fork('./worker.js'); // Each fork uses ~30MB+ memory
}
Explanation: Each forked child runs its own Node.js instance and consumes tens of megabytes at minimum, so forking thousands of processes at once exhausts memory quickly.
Solution:
const { fork } = require('child_process');
const MAX_WORKERS = require('os').cpus().length;
const taskQueue = [];
let activeWorkers = 0;

function runTask(taskData) {
  if (activeWorkers >= MAX_WORKERS) {
    taskQueue.push(taskData);
    return;
  }
  activeWorkers++;
  const worker = fork('./worker.js');
  worker.send(taskData);
  worker.on('exit', () => {
    activeWorkers--;
    if (taskQueue.length > 0) {
      runTask(taskQueue.shift());
    }
  });
}
Explanation: A worker pool caps the number of concurrent child processes at the CPU count and queues the remaining tasks, keeping memory usage predictable regardless of how many tasks arrive.
Scenario 4: Accumulating Event Listeners Without Cleanup
Problematic Code:
// Accumulating event listeners without cleanup
const EventEmitter = require('events');
const emitter = new EventEmitter();
for (let i = 0; i < 100000; i++) {
  emitter.on('data', (data) => {
    // Each listener holds references in memory
    console.log(data);
  });
}
Explanation: Every listener closure is kept alive by the emitter, so registering listeners in a loop without ever removing them is a classic memory leak.
Solution:
const EventEmitter = require('events');
const emitter = new EventEmitter();

// Use a single listener or limit listener count
emitter.setMaxListeners(20); // Warns if exceeded
emitter.on('data', (data) => {
  console.log(data);
});

// Clean up listeners when done
function cleanup() {
  emitter.removeAllListeners('data');
}
Explanation: Using a single listener, setting a sane listener limit, and removing listeners once they are no longer needed keeps the emitter's memory footprint under control.
Scenario 5: Loading an Entire Database Result Set into Memory
Problematic Code:
// Storing all database results in memory
async function getAllUsers() {
  const users = await db.query('SELECT * FROM users'); // Millions of rows
  return users; // Entire result set in memory
}
Explanation: Selecting millions of rows in a single query materializes the entire result set in memory at once.
Solution:
// Use streaming/cursor-based queries for large datasets
async function processAllUsers() {
  const cursor = db.query('SELECT * FROM users').stream();
  for await (const row of cursor) {
    await processUser(row); // Process one at a time
  }
}

// Or use pagination
async function getUsersPageByPage(pageSize = 1000) {
  let offset = 0;
  while (true) {
    const users = await db.query(
      'SELECT * FROM users LIMIT ? OFFSET ?', [pageSize, offset]
    );
    if (users.length === 0) break;
    for (const user of users) await processUser(user);
    offset += pageSize;
  }
}
Explanation: A streaming cursor or pagination processes rows incrementally, so only a small window of the result set is in memory at any time.
Scenario 6: Building Huge Strings by Concatenation
Problematic Code:
// String concatenation in a loop creating huge strings
let result = '';
for (let i = 0; i < 10000000; i++) {
  result += 'x'.repeat(1000); // Each iteration creates a new string
}
Explanation: Each += in the loop allocates a brand-new string, so the final string and all of its intermediate copies compete for memory at once.
Solution:
// Use array join or Buffer for large string building
const parts = [];
for (let i = 0; i < 10000000; i++) {
  parts.push('x'.repeat(1000));
}
const result = parts.join('');

// Or use streams for output
const fs = require('fs');
const output = fs.createWriteStream('output.txt');
for (let i = 0; i < 10000000; i++) {
  output.write('x'.repeat(1000));
}
output.end();
Explanation: Joining an array of parts allocates the final string only once, and streaming the output to disk avoids holding the full result in memory at all.
Scenario 7: Exceeding the Default Heap Limit
Problematic Code:
// V8 caps the default heap at roughly 1.5-2 GB (version-dependent)
// Running a memory-intensive application without increasing limit
const bigArray = new Array(200000000).fill({ data: 'item' });
Explanation: Allocating a single enormous array pushes the heap past V8's default limit, crashing the process even though the machine may have memory to spare.
Solution:
// Increase Node.js memory limit when needed
// Run with: node --max-old-space-size=4096 app.js
// Or set in package.json:
// "scripts": { "start": "node --max-old-space-size=4096 app.js" }
// Better: process data in chunks
function processInChunks(totalSize, chunkSize = 10000) {
  for (let i = 0; i < totalSize; i += chunkSize) {
    const chunk = new Array(Math.min(chunkSize, totalSize - i))
      .fill({ data: 'item' });
    processChunk(chunk);
    // chunk is garbage collected after this scope
  }
}
Explanation: Raising the heap limit buys headroom, but processing data in chunks is the more robust fix because memory usage stays flat regardless of input size.
Scenario 8: Memory Limits in Docker Containers
Problematic Code:
// Docker container with low memory limit
// Dockerfile: no memory consideration
// docker run my-app (default memory limits)
const crypto = require('crypto');
// Generating large random data
const data = crypto.randomBytes(500 * 1024 * 1024); // 500MB
Explanation: Allocating a large buffer in one shot inside a memory-constrained container hits the container's limit, and the process fails with ENOMEM or is killed by the OOM killer.
Solution:
// Set appropriate Docker memory limits
// docker run -m 2g --memory-swap 2g my-app
// In application, check available memory
const os = require('os');
const freeMem = os.freemem();
const totalMem = os.totalmem();
console.log(`Memory: ${Math.round(freeMem/1024/1024)}MB free / ${Math.round(totalMem/1024/1024)}MB total`);
// Generate data in manageable chunks
const crypto = require('crypto');
const chunkSize = 10 * 1024 * 1024; // 10MB chunks
const totalSize = 500 * 1024 * 1024;
for (let i = 0; i < totalSize; i += chunkSize) {
  const chunk = crypto.randomBytes(chunkSize);
  processChunk(chunk); // Process and release each chunk
}
Explanation: Setting explicit container memory limits and generating data in small chunks keeps each allocation within the container's budget and lets the garbage collector reclaim chunks as you go.
Strategies to Prevent Errors
Profiling and Monitoring: Regularly profile your Node.js application to monitor memory usage.
Efficient Code Practices: Write memory-efficient code, avoiding unnecessary variables and data structures.
Memory Management Techniques: Release references you no longer need so the garbage collector can reclaim them, and reuse buffers (memory pooling) on hot paths instead of allocating fresh ones.
Best Practices
Use Streams for Large Data: When dealing with large files or data sets, use streaming to process data.
Avoid Global Variables: Minimize the use of global variables as they can lead to memory leaks.
Regular Code Reviews: Conduct code reviews focusing on memory usage and potential leaks.
Upgrade Node.js: Ensure you're using the latest version of Node.js, as it often includes memory optimization improvements.
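The advice to avoid global variables usually comes down to not holding strong references forever. Where you genuinely need to attach data to objects, a WeakMap lets entries be garbage-collected along with their keys. A small sketch (tagRequest and getTag are hypothetical helpers, not a standard API):

```javascript
// Unlike a module-level Map, a WeakMap does not keep its keys alive:
// once a request object becomes unreachable, its metadata goes with it
const metadata = new WeakMap();

function tagRequest(req, info) {
  metadata.set(req, info);
}

function getTag(req) {
  return metadata.get(req);
}
```

A module-level Map keyed by request objects would pin every request in memory for the life of the process; the WeakMap version cannot cause that leak by construction.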
Conclusion
The "ENOMEM: Not enough memory" error in Node.js can be daunting, but with a thorough understanding of memory management and efficient coding practices, it can be addressed and prevented. Regular monitoring, efficient coding, and staying updated with Node.js releases are key to managing memory effectively and keeping your Node.js applications running smoothly.
Written by
Divya Mahi
Building innovative digital solutions at Poulima InfoTech. We specialize in web & mobile app development using React, Next.js, Flutter, and AI technologies.