
Async Patterns & Control Flow

Promises and Microtask Scheduling

Ishtmeet Singh (@ishtms) · March 1, 2026 · 29 min read
#nodejs#promises#microtasks#async-patterns#v8

Promise.resolve().then() Runs Before setTimeout(..., 0)

That surprises people. You'd think a zero-millisecond timer fires as fast as anything can. But it doesn't. The .then() callback runs first, every time, because it goes into the microtask queue, and microtasks drain before the event loop advances to the timers phase. The ordering is deterministic. It's baked into V8 and Node's event loop interaction, and understanding why requires knowing what promises actually are underneath.
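The ordering takes two lines to demonstrate (assuming a Node environment where both APIs are available):

```javascript
setTimeout(() => console.log("timer"), 0);
Promise.resolve().then(() => console.log("microtask"));
// Prints "microtask" before "timer": the fulfilled promise's handler
// is queued as a microtask, and microtasks drain before the event
// loop reaches the timers phase.
```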

A promise is a state machine. That's the precise description - three states, two transitions, one invariant. The invariant: once a promise leaves the pending state, it stays where it landed forever. The three states: pending, fulfilled, rejected. The two transitions: pending to fulfilled (resolution) and pending to rejected (rejection). There's a collective term for the two non-pending states: settled. A settled promise has a final value (fulfillment) or reason (rejection), and nothing can change it.

Inside V8, a promise is a regular JavaScript object with internal slots. The [[PromiseState]] slot holds the current state as an integer: 0 for pending, 1 for fulfilled, 2 for rejected. The [[PromiseResult]] slot holds the settlement value (or undefined while pending). The [[PromiseFulfillReactions]] and [[PromiseRejectReactions]] slots hold lists of handlers waiting for the promise to settle. When you call .then(), the handler gets appended to the appropriate reaction list. When the promise settles, the reaction list is processed and then cleared - it's a one-shot dispatch.

const p = new Promise((resolve, reject) => {
  resolve(42);
  resolve(99);
  reject(new Error("too late"));
});
p.then(v => console.log(v)); // 42

The first resolve(42) transitions the promise to fulfilled with value 42. The second resolve(99) does nothing. The reject() does nothing. First call wins. The implementation checks [[PromiseState]] - if it's anything other than 0 (pending), the call returns silently. Subsequent calls to either resolve or reject are ignored. The promise settled on the first transition and locked in. This exactly-once guarantee is one of the properties that makes promises safer than raw callbacks - the callback inversion-of-control problem (covered in the previous subchapter) included the risk of double-invocation. Promises eliminate that by construction.

The value you pass to resolve() can be anything: a primitive, an object, undefined, null, another promise. The fulfillment value is stored in [[PromiseResult]] and handed to every .then() handler. Similarly, reject() accepts any value as the rejection reason, though by convention you should always pass an Error instance. Rejecting with a string or number works syntactically but loses the stack trace, which makes debugging harder in production.

The Executor Runs Synchronously

The function you pass to new Promise() has a name: the executor. And it runs immediately, synchronously, on the current call stack.

console.log("before");
const p = new Promise((resolve, reject) => {
  console.log("executor");
  resolve("done");
});
console.log("after");

Output: before, executor, after. The executor fires inline during new Promise(). By the time the constructor returns, the executor has already completed. If the promise was resolved synchronously inside the executor (which it was here), the promise is already settled by the time p is assigned. But - and this is the part that matters - the .then() callback for that settled promise still runs asynchronously. Always. Even when the promise is already fulfilled at the point you attach .then(), the handler gets queued as a microtask instead of running immediately.

This design pattern is called the revealing constructor pattern. The resolve and reject functions are capabilities that only exist inside the executor. They're created by the Promise constructor and passed exclusively to the executor function. No other code can access them. The constructor "reveals" these capabilities to the executor and to nobody else. Once the executor returns, the only way to interact with the promise is through .then(), .catch(), and .finally() - all of which are read-only observation methods. You can attach handlers. You can observe the settled value. You can't change it from outside.
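One practical consequence: if code outside the executor needs to settle the promise, you have to smuggle the capabilities out yourself. A minimal sketch of that "deferred" shape - `createDeferred` is a hypothetical helper name; newer runtimes expose the same shape natively as Promise.withResolvers():

```javascript
// Capture the capabilities the constructor reveals to the executor,
// so code outside the executor can settle the promise later.
function createDeferred() {
  let resolve, reject;
  const promise = new Promise((res, rej) => {
    // The executor runs synchronously, so these assignments are
    // guaranteed to have happened before createDeferred returns.
    resolve = res;
    reject = rej;
  });
  return { promise, resolve, reject };
}

const d = createDeferred();
d.promise.then(v => console.log(v)); // logs "settled from outside"
d.resolve("settled from outside");
```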

If the executor throws, the promise rejects with the thrown error:

const p = new Promise(() => {
  throw new Error("executor exploded");
});
p.catch(e => console.log(e.message)); // "executor exploded"

The constructor wraps the executor call in a try/catch internally. Any synchronous throw inside the executor becomes a rejection reason. This is one of the safety properties that makes promises an improvement over raw callbacks - exceptions inside the executor are caught and channeled into the rejection path rather than crashing the process.

Promise.resolve(value) and Promise.reject(reason) are shorthand for creating already-settled promises without writing the full executor ceremony. Promise.resolve(42) returns a promise fulfilled with 42. Promise.reject(new Error("no")) returns a promise rejected with that error. Both are used frequently in library code and tests where you need a promise but already have the value.

There's a wrinkle with Promise.resolve() that catches people. If you pass a native promise to Promise.resolve(), it returns the exact same promise object - no wrapping, no new allocation:

const original = Promise.resolve(1);
const wrapped = Promise.resolve(original);
console.log(original === wrapped); // true

But if you pass a thenable (an object with a .then() method that isn't a native Promise instance), Promise.resolve() creates a new promise that follows the thenable. The shortcut only applies to genuine Promise instances. This is an optimization in the spec - wrapping an already-native promise would waste an allocation and add a microtask hop for no reason.

Resolving With a Promise

There's a subtlety in resolve() that most explanations gloss over. When you resolve a promise with another promise, the outer promise "follows" the inner one:

const inner = new Promise(resolve => {
  setTimeout(() => resolve("delayed"), 100);
});
const outer = new Promise(resolve => {
  resolve(inner);
});
outer.then(v => console.log(v)); // "delayed" (after 100ms)

outer doesn't fulfill with the promise object inner. It adopts inner's state. While inner is pending, outer stays pending. When inner fulfills with "delayed", outer fulfills with "delayed". The states are linked. If inner rejected, outer would reject with the same reason.

This goes further. The spec defines "thenable assimilation": if you resolve a promise with any object that has a .then() method, the promise treats it as a promise-like thing and tries to adopt its state. The runtime calls value.then(resolve, reject) on the thenable, using the outer promise's own resolve and reject functions. This means you can resolve a promise with a jQuery Deferred, a Bluebird promise, or any custom object that implements .then(), and the native promise will interoperate with it.

const thenable = {
  then(onFulfill) {
    onFulfill("from thenable");
  }
};
Promise.resolve(thenable).then(v => console.log(v));

Output: "from thenable". The Promise.resolve() call detects the .then() method, calls it, and uses the result. This is how the JavaScript ecosystem transitioned from multiple competing promise libraries (Q, Bluebird, RSVP, when.js) to native promises. The thenable protocol is the interoperability bridge. Any object with a .then() method is a valid resolve target.

The thenable check adds two microtask hops. When you resolve with a non-thenable value, the promise fulfills immediately (though handlers still run asynchronously). When you resolve with a thenable, the runtime first schedules a PromiseResolveThenableJob microtask to call the thenable's .then(). When that job runs, the inner resolve triggers a second microtask - a PromiseReactionJob - to actually fulfill the outer promise. So the fulfillment lands two microtasks later than a plain-value resolution. These extra hops occasionally matter in precise ordering scenarios.
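The extra hops are observable. In this sketch, a promise resolved with a plain value settles ahead of a queueMicrotask marker, while one resolved with a thenable settles after it:

```javascript
const order = [];
new Promise(res => res("plain")).then(v => order.push(v));
new Promise(res => res({ then(f) { f("thenable"); } })).then(v => order.push(v));
queueMicrotask(() => order.push("marker"));
setTimeout(() => console.log(order), 0);
// ["plain", "marker", "thenable"] - the thenable resolution spends a
// PromiseResolveThenableJob and then a PromiseReactionJob before landing.
```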

The check is performed at resolve() time, and it's purely duck-typed. V8 looks at the resolved value: is it an object? Does it have a .then property? Is that property callable? If yes to all three, it's treated as a thenable. This means accidentally resolving with an object that happens to have a .then method - some database record with a then field, some API response with a then key - triggers thenable assimilation. The promise will try to call value.then() as if it were a promise. If that .then() method does something unexpected (or throws), you get surprising behavior from what should have been a simple resolution.

In practice this is rare. But it's happened in production code. A MongoDB document returned from a query had a then property (from the user's data), and the promise chain treated the entire document as a thenable instead of resolving with it. The fix was wrapping the value: resolve({ value: document }) or using Promise.resolve().then(() => document) to bypass the thenable check. The lesson: be aware that resolve() inspects the value you give it.
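A sketch of the trap, with a hypothetical record whose user-supplied data happens to include a callable then:

```javascript
// Hypothetical document from a query whose data contains "then":
const record = { name: "alice", then(onFulfill) { onFulfill("hijacked"); } };

// Duck typing kicks in: the record is treated as a thenable, so the
// promise fulfills with whatever record.then() produces - not the record.
Promise.resolve(record).then(v => console.log(v)); // "hijacked"

// Wrapping sidesteps the check - the outer object has no .then:
Promise.resolve({ value: record }).then(({ value }) => {
  console.log(value === record); // true - the record arrives intact
});
```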

Chaining With .then(), .catch(), and .finally()

.then() is the core of the promise API. It takes two optional arguments: onFulfilled and onRejected. And it returns a new promise. Always. This is the chaining mechanism.

const result = Promise.resolve(5)
  .then(v => v * 2)
  .then(v => v + 1)
  .then(v => console.log(v)); // 11

Three .then() calls, three new promise objects created on the heap. Each handler receives the fulfillment value of the previous promise in the chain. Whatever the handler returns becomes the fulfillment value of the next promise. Return a number, the next promise fulfills with that number. Return a string, the next promise fulfills with that string. Return a promise, the next promise adopts its state (the same state-adoption behavior as resolve()).

This chaining is the structural improvement over callbacks. With callbacks, you handed your continuation to someone else's code (inversion of control). With promise chains, you retain control. Each .then() returns a promise you own. You decide what to attach next. You decide where to handle errors. The flow reads top to bottom, left to right, at a consistent indentation level. And every handler runs exactly once because every promise settles exactly once.

If a .then() handler returns nothing (undefined), the next promise fulfills with undefined. If you forget to return inside a chain, the value silently becomes undefined and downstream handlers receive nothing useful. This is one of the most common promise bugs. Linters can flag it (eslint-plugin-promise's always-return rule), but it still happens constantly.
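The bug in miniature - lookupName is a hypothetical async step whose result the first handler forgets to return:

```javascript
function lookupName(user) {
  return Promise.resolve("user-" + user.id); // stand-in for real async work
}

Promise.resolve({ id: 7 })
  .then(user => {
    lookupName(user); // missing "return": the promise is discarded
  })
  .then(name => console.log(name)); // undefined - and nothing waited for lookupName
```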

If a handler throws, the returned promise rejects:

Promise.resolve("ok")
  .then(v => { throw new Error("oops"); })
  .then(v => console.log("skipped"))
  .catch(e => console.log(e.message)); // "oops"

The throw in the first handler causes its returned promise to reject. The second .then() is skipped because rejection propagates down the chain until a rejection handler catches it. .catch(fn) is syntactic sugar for .then(undefined, fn). It attaches a rejection handler that catches errors from everything above it in the chain.

Where you place .catch() matters. At the end of a chain, it catches any rejection from any step above. In the middle, it catches rejections from above and allows the chain to continue with a new fulfillment value:

Promise.reject(new Error("fail"))
  .catch(e => "recovered")
  .then(v => console.log(v)); // "recovered"

The .catch() handler returns "recovered", which becomes the fulfillment value for the next .then(). This is error recovery. A .catch() that returns a normal value switches the chain from the rejection path back to the fulfillment path. A .catch() that re-throws keeps it on the rejection path.

.finally(fn) runs regardless of whether the promise fulfilled or rejected. The handler receives no arguments - it doesn't know the outcome. And it doesn't alter the chain's value unless it throws or returns a rejected promise. The original fulfillment value or rejection reason passes through:

Promise.resolve(42)
  .finally(() => console.log("cleanup"))
  .then(v => console.log(v)); // "cleanup" then 42

The 42 passes through .finally() unchanged. This makes .finally() useful for cleanup operations - closing file handles, stopping spinners, releasing locks - where you need to run code regardless of outcome without affecting the result.

One thing that trips people up: .then() accepts two arguments, and the second is a rejection handler. So .then(onFulfilled, onRejected) and .then(onFulfilled).catch(onRejected) look the same but behave differently in one case. If onFulfilled throws an error, the two-argument .then() won't catch it - the rejection handler in the same .then() call only catches rejections from the previous promise, not from its sibling handler. But .catch() chained after .then() catches errors from the .then() handler itself, because it's attached to the promise that .then() returns. In practice, .catch() at the end of a chain is almost always what you want.
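The difference in one example: the sibling rejection handler never fires, because it only watches the promise that .then() was called on, not its sibling onFulfilled.

```javascript
Promise.resolve("ok")
  .then(
    v => { throw new Error("from sibling"); },
    e => console.log("two-arg handler: never runs")
  )
  .catch(e => console.log("caught:", e.message)); // "caught: from sibling"
```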

There's also the matter of .then() with no arguments. promise.then() (no handlers) returns a new promise that settles exactly as the original does - an extra allocation that adds nothing. Not useful, but also not an error. You'll sometimes see it in code that was refactored and left an empty .then() behind.

How V8 Processes Promise Handlers

Here's where the machinery matters. When you call .then() on a promise, and that promise is already fulfilled, the handler doesn't run immediately. It gets scheduled as a microtask. And when the promise is still pending and later gets fulfilled, the handler also gets scheduled as a microtask at that point. Either way, the handler execution is always deferred to the microtask queue.

V8 maintains a microtask queue internally. This queue is separate from libuv's I/O callback queues, from the timer heap, from the check-phase queue - it's V8's own data structure, a FIFO queue that lives inside the V8 isolate. Promise reaction callbacks (from .then(), .catch(), .finally()) and queueMicrotask() callbacks go into this queue. process.nextTick() callbacks go into a completely separate queue managed by Node's JavaScript layer - they never enter V8's microtask queue. Both queues drain at the same checkpoint, but nextTick always goes first.

The microtask queue drains completely before the event loop proceeds. "Completely" means recursively - if a microtask schedules another microtask, that one runs too, before anything else. The draining continues until the queue is empty. Only then does the event loop move forward to its next phase (timers, poll, check, etc.).

The specific V8 mechanism for promise handler execution is called PromiseReactionJob. When a promise settles (transitions from pending to fulfilled or rejected), V8 iterates through its list of dependent reactions - every .then(), .catch(), and .finally() that was attached to it - and enqueues a PromiseReactionJob for each one into the microtask queue. Each job is a small unit of work: call the appropriate handler (fulfillment or rejection) with the settled value, then resolve or reject the next promise in the chain based on the handler's return value (or thrown exception).

The name PromiseReactionJob comes directly from the ECMAScript specification (section 27.2.2.1). V8 implements it as a built-in that's not directly exposed to JavaScript. You can observe it indirectly in flame graphs and CPU profiles, where it shows up as PromiseReactionJob in the V8 internal frames. If you see that in a profile, you're looking at promise handler execution.

Internally, a PromiseReaction is a struct with three fields: the handler function (or undefined if no handler was provided for that settlement type), the "capability" (the resolve/reject pair for the next promise in the chain), and the reaction type (fulfill or reject). When the microtask queue processes a PromiseReactionJob, it calls the handler with the promise's settled value. If the handler returns normally, V8 calls the capability's resolve function with the return value. If the handler throws, V8 calls the capability's reject function with the thrown value. This is how value transformation and error propagation thread through a chain - each link in the chain has its own resolve/reject pair, and the previous link's handler output feeds into the next link's resolve or reject.

When a .then() has no handler for the current settlement type - for example, .then(onFulfilled) with no onRejected, and the promise rejects - V8 creates a "pass-through" reaction. The rejection value propagates unchanged to the next promise's reject function. The value skips the missing handler and continues down the chain. This is the mechanism behind rejection propagation: rejections flow through .then() calls that lack rejection handlers until they hit a .catch() or a .then() with a second argument.

Here's the execution order that falls out of this design:

console.log("1");
setTimeout(() => console.log("2"), 0);
Promise.resolve().then(() => console.log("3"));
process.nextTick(() => console.log("4"));
console.log("5");

Output: 1, 5, 4, 3, 2. Let me walk through each step.

The synchronous code runs first. console.log("1") prints 1. setTimeout registers a timer callback in libuv's timer heap. Promise.resolve().then() creates a fulfilled promise and enqueues a PromiseReactionJob (the () => console.log("3") handler) into V8's microtask queue. process.nextTick() adds its callback to Node's nextTick queue. console.log("5") prints 5.

The synchronous call stack is now empty. Node checks for microtasks. The nextTick queue drains first (Node's priority rule): console.log("4") prints 4. Then the V8 microtask queue drains: the PromiseReactionJob fires, console.log("3") prints 3. The microtask checkpoint is done.

The event loop proceeds to its next iteration. The timers phase checks the timer heap. The setTimeout callback fires. console.log("2") prints 2.

The ordering guarantee is: synchronous code, then nextTick callbacks, then promise microtasks, then event loop phases (timers, poll, check, etc.). This holds between every pair of "macro" callbacks too. After every I/O callback, timer callback, or check-phase callback that fires, Node runs a microtask checkpoint - drain nextTick, drain promise microtasks, then continue. MakeCallback (covered in the previous subchapter) triggers this checkpoint after each callback invocation.
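The per-callback checkpoint is easy to observe with two zero-delay timers (on Node 11+, where microtasks drain between individual timer callbacks rather than between phases):

```javascript
setTimeout(() => {
  console.log("timer 1");
  Promise.resolve().then(() => console.log("microtask after timer 1"));
}, 0);
setTimeout(() => console.log("timer 2"), 0);
// "timer 1", "microtask after timer 1", "timer 2": the microtask
// checkpoint runs after each timer callback, not after the whole
// timers phase finishes.
```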

How V8 and Node coordinate on this is worth examining. Node sets V8's MicrotasksPolicy to kExplicit. This tells V8: "don't drain microtasks automatically after every JavaScript function returns. I'll tell you when." Node then manually triggers microtask draining at the right moments - inside MakeCallback, after each event loop phase transition, and after the initial script evaluation. This explicit control is necessary because Node needs to interleave nextTick draining with V8's microtask draining, and V8's automatic draining mode doesn't support that.

The implementation bridges C++ and JavaScript. On the C++ side, InternalCallbackScope::Close() in src/api/callback.cc triggers the microtask checkpoint. On the JavaScript side, the processTicksAndRejections function in lib/internal/process/task_queues.js orchestrates the draining loop. It first exhausts the nextTick queue, then calls isolate->PerformMicrotaskCheckpoint() to drain V8's microtask queue. If either queue gained new entries during processing, the loop repeats until both are empty.

This design means nextTick callbacks have higher priority than promise microtasks - but only at checkpoint boundaries. When V8's PerformMicrotaskCheckpoint() starts draining the microtask queue, it drains it completely, including any new microtasks enqueued during that drain. A process.nextTick() scheduled from inside a .then() handler goes into Node's separate nextTick queue, and won't run until V8 finishes the current microtask drain. On the next iteration of processTicksAndRejections, nextTick goes first again. Both have higher priority than any macrotask (timers, I/O callbacks, setImmediate).
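The interleaving at the checkpoint boundary, sketched:

```javascript
Promise.resolve().then(() => {
  console.log("then 1");
  process.nextTick(() => console.log("nextTick from then 1"));
  Promise.resolve().then(() => console.log("then 2"));
});
// "then 1", "then 2", "nextTick from then 1": V8 finishes draining its
// own queue (including "then 2", enqueued mid-drain) before control
// returns to processTicksAndRejections and the nextTick queue runs.
```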

queueMicrotask(fn) is the standardized API for scheduling microtasks directly, without creating a promise. It's defined in the WHATWG HTML specification (not ECMAScript itself) and available in Node since v11. It puts fn into V8's microtask queue - the same queue that PromiseReactionJob entries go into. So queueMicrotask callbacks interleave with promise .then() callbacks in FIFO order, both after nextTick:

process.nextTick(() => console.log("nextTick"));
queueMicrotask(() => console.log("microtask"));
Promise.resolve().then(() => console.log("promise"));

Output: nextTick, microtask, promise. The ordering is deterministic. Both queueMicrotask and Promise.resolve().then() enqueue into V8's microtask queue, which is strictly FIFO. queueMicrotask was called first, so its entry is ahead in the queue. Both run after nextTick.

The Starvation Problem

The exhaustive draining of microtasks has a dark side. If a microtask schedules another microtask, which schedules another, indefinitely, the event loop never advances. Timers don't fire. I/O callbacks don't process. The process is alive - the main thread is running JavaScript continuously - but it's stuck in the microtask loop.

function flood() {
  Promise.resolve().then(flood);
}
flood();

This is infinite recursion through the microtask queue. Each .then() handler schedules another .then() handler. The microtask checkpoint never finishes. The event loop is starved. No timers fire, no I/O processes, no incoming connections are accepted. The process appears hung.

The same applies to process.nextTick(). A recursive nextTick chain starves the event loop identically. In practice, nextTick starvation is more commonly encountered because early Node patterns used recursive nextTick for "yield to the event loop" semantics, misunderstanding that it doesn't actually yield - it stays in the microtask checkpoint.

V8 doesn't impose a limit on microtask queue depth. Neither does Node. There's no built-in protection against microtask starvation. The only safeguard is the programmer's awareness: don't create unbounded recursive microtask chains. If you need to defer work and want the event loop to breathe, use setImmediate() instead - it schedules work in the check phase, after the microtask checkpoint, giving I/O callbacks a chance to run.

The difference between setImmediate and nextTick/microtasks is worth being precise about. Both defer execution. But nextTick and promise .then() run during the microtask checkpoint, which happens between event loop phases. setImmediate runs during the check phase, which is an event loop phase itself. The microtask checkpoint has to finish completely before any event loop phase runs. So nextTick and promises can block the event loop. setImmediate participates in normal phase rotation and lets other phases (like poll for I/O) get their turn.

A concrete scenario where this matters: you're processing a large array of items, and each item requires some light async work. If you chain everything with .then(), the microtask queue stays busy for the entire batch, and no I/O can interleave. If you use setImmediate() between batches, the event loop gets a chance to handle incoming requests, accept connections, and fire timer callbacks between your batches. The throughput might be slightly lower (setImmediate has more overhead than a microtask), but the latency for other work stays bounded.
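A sketch of the batching pattern - processAll and processItem are hypothetical names, with processItem standing in for the light async step:

```javascript
function processItem(item) {
  return Promise.resolve(item * 2); // stand-in for real async work
}

function processAll(items, batchSize, done) {
  const results = [];
  let i = 0;
  function runBatch() {
    if (i >= items.length) return done(results);
    Promise.all(items.slice(i, i + batchSize).map(processItem))
      .then(batchResults => {
        results.push(...batchResults);
        i += batchSize;
        // Yield via the check phase: poll (I/O) and timers get a turn
        // before the next batch, instead of the microtask queue staying hot.
        setImmediate(runBatch);
      });
  }
  runBatch();
}

processAll([1, 2, 3, 4, 5], 2, r => console.log(r)); // logs [2, 4, 6, 8, 10]
```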

Error Handling in Promise Chains

Rejections propagate down the chain until something catches them. A .then() without a rejection handler passes the rejection through to the next promise:

Promise.reject(new Error("bad"))
  .then(v => console.log("skipped"))
  .then(v => console.log("also skipped"))
  .catch(e => console.log(e.message)); // "bad"

Both .then() handlers are bypassed. The rejection travels through until .catch() intercepts it. Inside .catch(), you can return a value (switching back to the fulfillment path), re-throw (staying on the rejection path), or return a rejected promise (also staying on the rejection path).

Re-throwing in .catch() is how you add logging without swallowing errors:

someAsyncOp()
  .catch(e => { console.error("logged:", e); throw e; })
  .then(v => processResult(v))
  .catch(e => sendErrorResponse(e));

The first .catch() logs the error and re-throws. The re-thrown error rejects the promise returned by .catch(), which propagates down to the second .catch(). Without the re-throw, the first .catch() would return undefined (the return value of console.error), and the chain would continue on the fulfillment path with v being undefined.

throw inside a .then() handler and Promise.reject() produce the same result - rejection of the returned promise. The difference is context. throw works inside synchronous code blocks. Promise.reject() is useful when you need to reject from a place where throw doesn't make sense, like a conditional expression or a ternary.

One of the subtler rejection sources: returning Promise.reject() from a .then() handler. The returned promise rejects, which rejects the chained promise. This is functionally identical to throw, but some developers prefer it for rejecting with specific error types in conditional logic:

fetchUser(id).then(user => {
  if (!user.active) return Promise.reject(new Error("inactive"));
  return user;
});

The return Promise.reject(...) rejects the next promise in the chain, the same way throw new Error(...) would. The behavioral difference is zero. It's a style choice. But mixing throw and return Promise.reject() in the same codebase can be confusing - pick one and be consistent.

Unhandled Rejections

When a promise rejects and no rejection handler is attached, Node emits the unhandledRejection event on the process object (covered in Chapter 1). Starting from Node v15, the default behavior is to throw the rejection as an uncaught exception, crashing the process. Earlier versions logged a warning. You can control this with the --unhandled-rejections flag: throw (default in v15+), warn (log but don't crash), strict (raise the rejection as an uncaught exception immediately, without waiting to see whether a handler gets attached), or none (suppress entirely).

The detection mechanism is interesting. V8 tracks promise rejections internally. When a promise rejects and has no rejection handler at the time of rejection, V8 notifies Node through a hook (PromiseRejectCallback). Node defers the verdict - roughly one microtask checkpoint later - and checks whether a handler has been attached since. If a handler appears (the rejectionHandled event fires), the warning is suppressed. If no handler appears, unhandledRejection fires.

This means there's a brief window where a rejection without a handler doesn't immediately trigger the warning. You can reject a promise and attach .catch() on the next line, and that's fine - the check happens after the current synchronous code completes and microtasks drain. But if you store the rejected promise in a variable and attach .catch() in a setTimeout, you might see the unhandled rejection warning before your handler runs. The timing window is tight but it exists.

The practical rule: always handle rejections synchronously in the same promise chain. Don't create rejected promises and handle them later in a different execution context. Attach .catch() at the end of every chain, or use a try/catch with await (covered in the next subchapter).

Here's a pattern that demonstrates the timing issue clearly:

const p = Promise.reject(new Error("oops"));
setTimeout(() => {
  p.catch(e => console.log("caught:", e.message));
}, 0);

The unhandledRejection event fires before the setTimeout callback runs. By the time your .catch() attaches, Node has already flagged the rejection as unhandled. The rejectionHandled event will then fire when the catch is finally attached, but the warning has already been emitted. In Node v15+, the process might have already crashed by then. Attaching error handlers lazily across macrotask boundaries is a recipe for unreliable error handling.

There's also the question of what constitutes "handling" a rejection. Just having a .catch() in the chain is sufficient, even if the .catch() itself does nothing: promise.catch(() => {}). This silences the rejection. Whether silencing is the right thing depends on context - sometimes you genuinely want to ignore a rejection (a best-effort cache write, a non-critical analytics call). But as a habit, swallowing errors quietly makes debugging harder. At minimum, log something.

util.promisify and Callback-to-Promise Conversion

A large part of Node's core API was written in the callback era. util.promisify(fn) wraps a callback-based function (one that follows the error-first convention) into a function that returns a promise:

const { promisify } = require("util");
const readFile = promisify(require("fs").readFile);
readFile("/etc/hostname", "utf8").then(data => {
  console.log(data.trim());
});

promisify returns a new function. When called, it invokes the original function with all the same arguments plus a generated callback appended at the end. That callback follows the error-first convention: if err is truthy, it rejects the promise. Otherwise, it resolves with the result value.

Under the hood, the wrapper creates a new Promise() and returns it. The generated callback calls resolve(result) or reject(err) when the original function calls back. The implementation is in lib/internal/util.js in Node's source, and it's about 30 lines of code. Each call to the promisified function allocates one Promise object and one closure for the generated callback.
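A stripped-down sketch of the same shape - promisifySketch and delayedDouble are hypothetical names, and the real implementation additionally handles promisify.custom and named multi-argument callbacks:

```javascript
function promisifySketch(original) {
  return function promisified(...args) {
    return new Promise((resolve, reject) => {
      // Append a generated error-first callback to the caller's arguments.
      original.call(this, ...args, (err, result) => {
        if (err) reject(err);
        else resolve(result);
      });
    });
  };
}

// Usage with a hypothetical error-first function:
const delayedDouble = (n, cb) => setImmediate(() => cb(null, n * 2));
promisifySketch(delayedDouble)(21).then(v => console.log(v)); // 42
```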

Some Node core functions customize their promisified behavior. fs.read, for example, uses an internal customPromisifyArgs symbol to tell promisify how to name the multiple callback arguments - its callback receives (err, bytesRead, buffer), and the promisified version returns { bytesRead, buffer } instead of just the first non-error argument. For your own APIs, the public util.promisify.custom symbol lets you provide a completely custom promise-returning implementation:

function myFn(cb) { cb(null, "a", "b"); }
myFn[require("util").promisify.custom] = () => {
  return Promise.resolve({ first: "a", second: "b" });
};

The dedicated fs.promises namespace exists because util.promisify(fs.readFile) creates an extra wrapper layer on every call. fs.promises.readFile is a native implementation that creates a FSReqPromise object in the C++ layer instead of a FSReqCallback. The libuv request completion resolves or rejects the promise directly, without the intermediate JavaScript callback. It's marginally more efficient and has better stack traces.

For new code, always prefer fs.promises (or require('node:fs/promises')) over promisifying the callback API. The promise-native version is what the Node team maintains and optimizes. Promisify is for third-party callback APIs that don't offer a promise variant.

The cost of promisifying deserves a note. Each call to the promisified wrapper allocates a new Promise() (one V8 heap object, ~104 bytes), a closure for the generated callback, and the original callback's closure scope. For a single readFile call, this is irrelevant. For a tight loop making thousands of database queries per second, the extra allocations show up in heap snapshots and GC profiles. The native fs.promises path avoids the JavaScript-level closure because the promise resolution happens in C++ directly - FSReqPromise has the resolve and reject functions stored as persistent V8 handles, not as JavaScript closure variables. One fewer closure per operation, one fewer function object on the heap.

There's also util.callbackify, which goes the other direction: wraps a promise-returning (or async) function into one that accepts an error-first callback. It's used in Node internals and in libraries that need backward compatibility with callback-based consumers. The implementation is thin - call the function, attach .then() and .catch(), and route the result or error to the provided callback. But it adds a microtask hop (the promise settlement runs as a microtask before the callback fires), so callback-expecting code that was timing-sensitive might notice the delay.

Promise Performance Characteristics

Every .then() allocates a new Promise object on V8's heap. Every settled promise with a pending reaction schedules a PromiseReactionJob microtask. These are real costs. Small individually - a Promise object is around 104 bytes in V8 - but they add up in hot paths.

Promise resolution is always asynchronous. Even Promise.resolve(42).then(fn) - where the value is known immediately and no I/O is involved - schedules fn as a microtask rather than calling it synchronously. This is a deliberate design choice from the Promises/A+ specification. The guarantee is called "always async": handlers run on a clean call stack, after the current synchronous code completes.

The reason for always-async is consistency. Consider a function that sometimes returns a cached value and sometimes performs I/O. If the cached path resolved synchronously and the I/O path resolved asynchronously, the caller would see different execution orders depending on cache state. Code that works with a warm cache might break with a cold one. Event listeners set up after the .then() call might miss the synchronous resolution. State mutations expected to happen "before" the handler runs would race with the synchronous path.

This is Zalgo (mentioned in the previous subchapter). Promises prevent it by construction. Every handler runs as a microtask, regardless of whether the promise was already settled. The ordering is always: your current synchronous code completes, then microtasks fire. Your code can rely on this.
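A sketch with a hypothetical getUser cache illustrates the difference: the callback version fires synchronously on a warm cache (Zalgo), while the promise version defers the handler either way:

```javascript
const cache = new Map([["u1", { name: "Ada" }]]);

// Zalgo-prone: synchronous on a cache hit, asynchronous on a miss.
function getUser(id, cb) {
  if (cache.has(id)) return cb(null, cache.get(id)); // fires before the caller's next line!
  setTimeout(() => cb(null, { name: "unknown" }), 10);
}

// Promise version: the handler is a microtask even when the value is cached.
function getUserP(id) {
  if (cache.has(id)) return Promise.resolve(cache.get(id));
  return new Promise(res => setTimeout(() => res({ name: "unknown" }), 10));
}

getUser("u1", () => console.log("callback ran"));
console.log("after callback call"); // logs SECOND - Zalgo

getUserP("u1").then(() => console.log("handler ran"));
console.log("after .then() call"); // logs BEFORE the handler - always async
```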

Long chains create many intermediate Promise objects. A chain of ten .then() calls creates ten Promise objects and ten microtask jobs. In a hot loop processing thousands of items per second, this means thousands of short-lived Promise objects being allocated and collected. V8's generational garbage collector handles short-lived objects efficiently (the nursery is tuned for it), but GC pauses still accumulate. In benchmarks, switching from a promise chain to a flat callback can reduce GC time by 20-40% on high-throughput workloads.
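The per-link allocation is easy to observe, since every .then() call returns a distinct new promise:

```javascript
let p = Promise.resolve(0);
const links = [];
for (let i = 0; i < 10; i++) {
  p = p.then(v => v + 1); // each call allocates and returns a new Promise
  links.push(p);
}
console.log(new Set(links).size); // 10 - ten distinct Promise objects
p.then(v => console.log(v)); // 10 - after ten microtask-queued increments
```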

V8 has optimized promises steadily over the years. The PromiseThenFinally built-in, TurboFan's ability to inline simple .then() handlers, and internal fast-paths for already-settled promises all reduce the overhead. Async functions (covered in the next subchapter) benefit further from V8's await optimization (shipped around Node 12), which eliminates intermediate promise allocations and spare microtask ticks per await.

For most application code, promise overhead is noise. You'd need to be processing tens of thousands of promise chains per second before allocation pressure becomes measurable. But if you're building a database driver, an HTTP parser, or any library on a hot path, the overhead is worth profiling. The pattern you'll see in performance-sensitive code is: use promises at the API boundary (callers expect them), use callbacks internally (fewer allocations), and use util.promisify or a thin wrapper to bridge the gap.

The allocation story is predictable. One .then() call: one new Promise object (~104 bytes), one or two closures (the handler functions), one PromiseReactionJob entry in the microtask queue. Multiply by chain length, multiply by calls per second, and you have your memory throughput. If that number makes your GC profile noticeable, optimize. Otherwise, take the ergonomic and safety wins promises give you and move on.
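Plugging assumed example numbers into that multiplication (a five-link chain at 10,000 calls per second, counting only the ~104-byte Promise objects themselves, not the closures):

```javascript
const PROMISE_BYTES = 104;   // approximate V8 Promise object size (from the text)
const CHAIN_LENGTH = 5;      // assumed: five .then() links per request
const CALLS_PER_SEC = 10000; // assumed: request rate

const bytesPerSec = PROMISE_BYTES * CHAIN_LENGTH * CALLS_PER_SEC;
console.log(`${(bytesPerSec / 1e6).toFixed(1)} MB/s`); // 5.2 MB/s of promise allocations alone
```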

There's a subtlety about garbage collection timing here. A promise chain like a.then(f1).then(f2).then(f3) creates three intermediate promises. The first intermediate promise (returned by a.then(f1)) can be collected as soon as f2 runs and the second intermediate promise settles. V8's generational collector (covered in Chapter 1) puts these short-lived objects in the young generation nursery, where collection is fast - typically under a millisecond for the nursery sweep. The intermediate promises rarely survive to old generation unless your chain is very long or you store references to them.

But there's a catch. If you store a reference to a mid-chain promise - say, assigning it to a variable for later use - you pin that promise and everything it references in memory for longer. A .finally() handler with a closure over a large buffer will keep that buffer alive until the finally handler runs and the returned promise settles. In server code handling concurrent requests, this kind of unintentional retention can add up. The pattern to watch for: closures inside .then() handlers that capture variables from outer scopes. Each closure keeps its entire scope chain alive for the duration of the promise chain.
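A sketch of the retention pattern, with a hypothetical slowOperation standing in for real async work: the .finally() closure captures big, so the 10 MB buffer stays reachable until the chain settles:

```javascript
const slowOperation = () => new Promise(res => setTimeout(res, 50));

function handleRequest() {
  const big = Buffer.alloc(10 * 1024 * 1024); // 10 MB scratch buffer
  return slowOperation().finally(() => {
    // This closure references `big`, pinning it (and its scope) until settlement.
    console.log("buffer still alive:", big.length);
  });
}

handleRequest();
```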

Promises also have a debugging cost. When an error occurs deep in a chain, the stack trace shows where the rejection happened but often omits the chain of .then() calls that led there. V8's --async-stack-traces flag (enabled by default in modern Node) helps by capturing the continuation chain, but this adds memory overhead per promise - V8 stores extra frames to reconstruct the trace. The trade-off is: better debuggability versus slightly higher memory usage per promise. For development, the extra traces are worth it. In production, you might see higher baseline memory from promise-heavy code paths, and --async-stack-traces is part of that.