Excessive Platform Resource Consumption within a Loop
Description
Excessive Platform Resource Consumption within a Loop occurs when software contains a loop body or loop condition that directly or indirectly consumes platform resources such as messaging connections, sessions, locks, file descriptors, database connections, or network sockets. When these resource acquisitions occur inside loops without proper release, the resources accumulate with each iteration. If attackers can control or influence the number of loop iterations, they can exhaust system resources, leading to denial-of-service conditions.
Risk
This vulnerability can lead to severe availability issues. Resource exhaustion from loop-based accumulation can crash applications, degrade system performance, or make services unavailable. File descriptor exhaustion prevents the process from opening new files or connections. Database connection pool exhaustion blocks all database operations. Locks that are acquired but never released can deadlock every other thread that needs them. The risk is amplified when: (1) loop iterations are controlled by external input, (2) resources aren't properly released on loop exit, or (3) exception handling doesn't clean up acquired resources. Attackers can exploit these conditions for targeted denial-of-service attacks.
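On POSIX systems the file-descriptor consequence is easy to reproduce: lower the process's descriptor limit, then leak handles in a loop until the OS refuses to hand out more. This is a hypothetical, Unix-only demo (the function name and limit are illustrative, not from any library):

```python
import os
import resource

def demo_fd_exhaustion(limit=256):
    """Lower the soft RLIMIT_NOFILE, then open files in a loop without
    closing them until the OS refuses to hand out more descriptors."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    target = min(limit, hard)  # never exceed the hard limit
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
    held = []
    try:
        while True:
            held.append(open(os.devnull))  # leaked on purpose
    except OSError:
        leaked = len(held)  # EMFILE: "Too many open files"
    finally:
        for f in held:
            f.close()  # release everything before restoring the limit
        resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
    return leaked
```

A few hundred leaked descriptors is all it takes; a loop driven by attacker-supplied input reaches that count quickly.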
Solution
Release resources within the same loop iteration where they are acquired, using try-finally or try-with-resources patterns so cleanup happens even on exceptions. Where possible, avoid acquiring resources inside loops at all: acquire once before the loop and reuse. Additional mitigations:
- Implement resource pooling with size limits.
- Set timeouts on resource acquisition.
- Validate loop bounds against external input.
- Monitor resource consumption and implement circuit breakers.
- Use static analysis tools to detect resource leaks in loops.
- Rate-limit operations that trigger resource-intensive loops.
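Several of these mitigations combine naturally: validate the loop bound before entering the loop, draw resources from a fixed-size pool, and time out the acquisition so the loop fails fast instead of exhausting the platform. A minimal Python sketch (all names and limits here are hypothetical, using a plain `queue.Queue` as a stand-in pool):

```python
import queue

MAX_ITEMS = 1000          # cap on attacker-influenced loop bounds
POOL_SIZE = 10            # hard limit on concurrently held resources
ACQUIRE_TIMEOUT = 2.0     # seconds to wait before failing fast

# A bounded "pool": pre-created resource handles (plain dicts standing
# in for real connections) kept in a fixed-size FIFO queue.
pool = queue.Queue(maxsize=POOL_SIZE)
for i in range(POOL_SIZE):
    pool.put({"id": i})

def process_items(items):
    # Validate loop bounds against external input before looping.
    if len(items) > MAX_ITEMS:
        raise ValueError(f"too many items: {len(items)} > {MAX_ITEMS}")
    results = []
    for item in items:
        # Acquire with a timeout: fail fast instead of blocking forever
        # if the pool is exhausted.
        try:
            res = pool.get(timeout=ACQUIRE_TIMEOUT)
        except queue.Empty:
            raise RuntimeError("resource pool exhausted")
        try:
            results.append((res["id"], item))  # stand-in for real work
        finally:
            pool.put(res)  # release in the same iteration, even on error
    return results
```

Because every iteration returns its resource in a `finally` block, the pool never shrinks, and the bound check rejects oversized input before any resource is touched.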
Common Consequences
| Impact | Details |
|---|---|
| Availability | DoS: Resource Consumption - Loop-based resource accumulation can exhaust file descriptors, connections, memory, or other platform resources. |
| Availability | DoS: Crash, Exit, or Restart - Resource exhaustion may cause application crashes or force restarts. |
| Other | Reduce Performance - Even without complete exhaustion, excessive resource consumption degrades overall system performance. |
Example Code
Vulnerable Code
// Vulnerable: Opening file handles in loop without closing
public class VulnerableFileProcessor {
public void processFiles(List<String> filePaths) throws IOException {
for (String path : filePaths) {
// Vulnerable: FileInputStream opened but never closed
FileInputStream fis = new FileInputStream(path);
byte[] data = fis.readAllBytes();
processData(data);
// fis is never closed - file handles accumulate!
}
// After processing 1000+ files, system runs out of file descriptors
}
}
// Vulnerable: Database connections acquired in loop
public class VulnerableDatabaseProcessor {
private final DataSource dataSource; // assume injected elsewhere
public List<Result> processIds(List<Integer> ids) throws SQLException {
List<Result> results = new ArrayList<>();
for (Integer id : ids) {
// Vulnerable: New connection per iteration
Connection conn = dataSource.getConnection();
PreparedStatement stmt = conn.prepareStatement("SELECT * FROM data WHERE id = ?");
stmt.setInt(1, id);
ResultSet rs = stmt.executeQuery();
if (rs.next()) {
results.add(mapResult(rs));
}
// Connection never closed!
// Connection pool exhausted after ~100 iterations
}
return results;
}
}
# Vulnerable: Network connections in loop
import socket
def vulnerable_check_servers(hosts):
    results = []
    for host in hosts:
        # Vulnerable: Socket created but never closed
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(5)
        try:
            sock.connect((host, 80))
            results.append((host, "up"))
        except OSError:
            results.append((host, "down"))
        # Socket never closed - file descriptors leak
    return results
# After checking ~1000 hosts, process runs out of file descriptors
// Vulnerable: Acquiring locks in loop without release
public class VulnerableLockManager
{
private Dictionary<string, object> locks = new Dictionary<string, object>();
public void ProcessItems(List<Item> items)
{
foreach (var item in items)
{
// Vulnerable: Lock acquired but never released
var lockObj = GetOrCreateLock(item.Category);
Monitor.Enter(lockObj);
try
{
ProcessItem(item);
}
catch (Exception ex)
{
// Exception doesn't release lock!
Log(ex);
}
// Monitor.Exit never called - other threads block on this lock forever
}
}
}
// Vulnerable: Goroutine leak in loop
func vulnerableProcessBatch(items []Item) {
for _, item := range items {
// Vulnerable: Goroutines spawned without limit or completion tracking
go func(i Item) {
result := expensiveOperation(i)
// If channel is full or closed, goroutine leaks
resultChannel <- result
}(item)
// No wait, no limit - goroutine explosion possible
}
// Thousands of goroutines consuming memory
}
// Vulnerable: Event listeners accumulated in loop
class VulnerableEventHandler {
processItems(items) {
items.forEach(item => {
// Vulnerable: Adding listeners without removal
document.addEventListener('click', (e) => {
this.handleClick(item, e);
});
// Each iteration adds new listener
// After 1000 items, 1000 click handlers attached!
});
}
}
Fixed Code
// Fixed: Proper resource management in loop
public class FixedFileProcessor {
public void processFiles(List<String> filePaths) {
for (String path : filePaths) {
// Fixed: try-with-resources ensures closure
try (FileInputStream fis = new FileInputStream(path)) {
byte[] data = fis.readAllBytes();
processData(data);
} catch (IOException e) {
log.error("Failed to process: " + path, e);
// Resource still closed on exception
}
}
}
// Alternative: Process files in batches with explicit cleanup
public void processFilesInBatches(List<String> filePaths, int batchSize) {
for (int i = 0; i < filePaths.size(); i += batchSize) {
List<String> batch = filePaths.subList(i,
Math.min(i + batchSize, filePaths.size()));
processBatch(batch);
// Resources acquired by the batch are released before the next one starts
}
}
}
// Fixed: Single connection for loop, or proper connection management
public class FixedDatabaseProcessor {
private final DataSource dataSource; // assume injected elsewhere
// Option 1: Single connection for entire loop
public List<Result> processIdsEfficient(List<Integer> ids) throws SQLException {
List<Result> results = new ArrayList<>();
try (Connection conn = dataSource.getConnection();
PreparedStatement stmt = conn.prepareStatement(
"SELECT * FROM data WHERE id = ?")) {
for (Integer id : ids) {
stmt.setInt(1, id);
try (ResultSet rs = stmt.executeQuery()) {
if (rs.next()) {
results.add(mapResult(rs));
}
}
stmt.clearParameters();
}
}
return results;
}
// Option 2: Batch query to avoid loop entirely
public List<Result> processIdsBatch(List<Integer> ids) throws SQLException {
if (ids.isEmpty()) return Collections.emptyList();
String placeholders = String.join(",",
Collections.nCopies(ids.size(), "?"));
String sql = "SELECT * FROM data WHERE id IN (" + placeholders + ")";
try (Connection conn = dataSource.getConnection();
PreparedStatement stmt = conn.prepareStatement(sql)) {
for (int i = 0; i < ids.size(); i++) {
stmt.setInt(i + 1, ids.get(i));
}
try (ResultSet rs = stmt.executeQuery()) {
List<Result> results = new ArrayList<>();
while (rs.next()) {
results.add(mapResult(rs));
}
return results;
}
}
}
}
# Fixed: Proper socket management
import socket
from contextlib import contextmanager
@contextmanager
def managed_socket(timeout=5):
    """Context manager for socket cleanup"""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        yield sock
    finally:
        sock.close()

def fixed_check_servers(hosts):
    results = []
    for host in hosts:
        # Fixed: Socket properly closed via context manager
        with managed_socket() as sock:
            try:
                sock.connect((host, 80))
                results.append((host, "up"))
            except (socket.timeout, socket.error):
                results.append((host, "down"))
    return results

# Alternative: Connection pooling for frequent checks
from urllib3 import HTTPConnectionPool

def fixed_check_servers_pooled(hosts):
    results = []
    # Use one bounded connection pool per host, closed when done
    pools = {}
    try:
        for host in hosts:
            if host not in pools:
                pools[host] = HTTPConnectionPool(host, maxsize=1)
            try:
                response = pools[host].request('HEAD', '/', timeout=5)
                results.append((host, "up" if response.status < 500 else "down"))
            except Exception:
                results.append((host, "down"))
    finally:
        # Cleanup all pools
        for pool in pools.values():
            pool.close()
    return results
// Fixed: Proper lock management with try-finally
public class FixedLockManager
{
private readonly ConcurrentDictionary<string, object> locks =
new ConcurrentDictionary<string, object>();
public void ProcessItems(List<Item> items)
{
foreach (var item in items)
{
var lockObj = locks.GetOrAdd(item.Category, _ => new object());
bool lockTaken = false;
try
{
// Fixed: Track lock acquisition
Monitor.Enter(lockObj, ref lockTaken);
ProcessItem(item);
}
finally
{
// Fixed: Always release lock if acquired
if (lockTaken)
{
Monitor.Exit(lockObj);
}
}
}
}
// Better: Use lock statement
public void ProcessItemsSimple(List<Item> items)
{
foreach (var item in items)
{
var lockObj = locks.GetOrAdd(item.Category, _ => new object());
// Fixed: lock statement handles release automatically
lock (lockObj)
{
ProcessItem(item);
}
}
}
}
// Fixed: Controlled goroutine management
func fixedProcessBatch(items []Item) []Result {
// Fixed: Use worker pool pattern
const maxWorkers = 10
jobs := make(chan Item, len(items))
results := make(chan Result, len(items))
// Start fixed number of workers
var wg sync.WaitGroup
for w := 0; w < maxWorkers; w++ {
wg.Add(1)
go func() {
defer wg.Done()
for item := range jobs {
result := expensiveOperation(item)
results <- result
}
}()
}
// Send jobs
for _, item := range items {
jobs <- item
}
close(jobs)
// Wait for completion and close results
go func() {
wg.Wait()
close(results)
}()
// Collect results
var output []Result
for result := range results {
output = append(output, result)
}
return output
}
// Alternative: Use semaphore for concurrency limit
func fixedProcessBatchSemaphore(items []Item) []Result {
sem := make(chan struct{}, 10) // Max 10 concurrent
var wg sync.WaitGroup
results := make([]Result, len(items))
for i, item := range items {
wg.Add(1)
sem <- struct{}{} // Acquire
go func(idx int, it Item) {
defer func() {
<-sem // Release
wg.Done()
}()
results[idx] = expensiveOperation(it)
}(i, item)
}
wg.Wait()
return results
}
// Fixed: Event listener management
class FixedEventHandler {
constructor() {
this.boundHandlers = new Map();
}
processItems(items) {
// Fixed: Single handler with delegation
const handler = (e) => {
const itemId = e.target.dataset.itemId;
const item = items.find(i => String(i.id) === itemId); // dataset values are strings
if (item) {
this.handleClick(item, e);
}
};
// Store reference for cleanup
this.boundHandlers.set('click', handler);
document.addEventListener('click', handler);
}
cleanup() {
// Fixed: Remove listeners when done
for (const [event, handler] of this.boundHandlers) {
document.removeEventListener(event, handler);
}
this.boundHandlers.clear();
}
}
// Alternative: Use event delegation from the start
class DelegatedEventHandler {
constructor(container) {
this.container = container;
this.items = new Map();
}
addItems(items) {
items.forEach(item => {
this.items.set(String(item.id), item); // key by string to match dataset values
});
}
init() {
// Single listener handles all items
this.container.addEventListener('click', (e) => {
const target = e.target.closest('[data-item-id]');
if (target) {
const item = this.items.get(target.dataset.itemId);
if (item) {
this.handleClick(item, e);
}
}
});
}
}
CVE Examples
This CWE describes a pattern that can contribute to denial-of-service vulnerabilities. Resource exhaustion from improper loop handling has been a factor in various availability incidents.
Related CWEs
- CWE-405: Asymmetric Resource Consumption (Amplification) (parent)
- CWE-400: Uncontrolled Resource Consumption (related)
- CWE-404: Improper Resource Shutdown or Release (related)
- CWE-1006: Bad Coding Practices (category member)
References
- MITRE Corporation. "CWE-1050: Excessive Platform Resource Consumption within a Loop." https://cwe.mitre.org/data/definitions/1050.html
- CISQ. "Automated Source Code Quality Measures."
- Oracle. "Java try-with-resources Statement."