Async/Await Under the Hood
Every async Function Is a State Machine
Here's something that took me a while to fully appreciate: when you write async function, the engine doesn't just sprinkle some syntactic sugar over a promise chain. It builds a state machine. Your function body gets split at every await expression into segments, and V8 manages the transitions between those segments using an internal object that tracks where you left off, what your local variables are, and which promise is being awaited. The async keyword looks like it marks a function as "asynchronous." What it actually marks is: "please compile this into a suspendable, resumable execution context."
The mechanics start with something simple. An async function always returns a promise. Always. Even if your function body is return 42, the caller receives Promise<42>. Even if you return nothing, the caller receives a promise that fulfills with undefined. And if your function throws, the caller receives a rejected promise rather than an uncaught exception.
async function getNumber() {
return 42;
}
const result = getNumber();
console.log(result); // Promise { 42 }
console.log(result instanceof Promise); // true
The return value 42 gets wrapped through Promise.resolve(42). You never get a raw value back from an async function. This wrapping happens automatically - you don't write return Promise.resolve(42), you just write return 42 and the engine handles it. If you return a promise explicitly, say return Promise.resolve(42), the implicit promise adopts its state - no double-wrapping. The resulting promise fulfills with 42 either way. But returning an already-rejected promise causes the returned promise to reject with the same reason, even without any throw statement in your function.
The wrapping behavior has a subtle edge case worth understanding. If you return a thenable (covered in the previous subchapter) from an async function, the returned promise adopts the thenable's state through the assimilation protocol. The extra microtask hop from thenable resolution applies here too. And returning a native promise still goes through the standard promise resolution procedure - the implicit promise calls into the returned promise's .then() via a PromiseResolveThenableJob, which adds an extra microtask tick. This is different from the await path, where V8 optimizes away the extra hop for native promises. On the return path, return await somePromise actually resolves one tick faster than return somePromise, because await gets the optimized fast path while return goes through the full resolution machinery.
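A quick sketch of the adoption rules - passThrough and fromThenable are hypothetical names for illustration:

```javascript
// Returning an already-rejected promise: the async function's implicit
// promise adopts the rejection even though nothing throws in the body.
async function passThrough() {
  return Promise.reject(new Error("adopted"));
}

// Returning a plain thenable: assimilated via the same resolution protocol.
async function fromThenable() {
  return { then(resolve) { resolve(42); } };
}

const seen = {};
passThrough().catch(e => { seen.reason = e.message; });
fromThenable().then(v => { seen.value = v; });
// After the microtask queue drains: seen.reason === "adopted", seen.value === 42
```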
Throwing inside an async function maps directly to rejection:
async function boom() {
throw new Error("broken");
}
boom().catch(e => console.log(e.message)); // "broken"
No special error handling plumbing exists here. The async function's internal machinery wraps the entire function body in an implicit try/catch. The throw executes synchronously during the call - it genuinely runs - but the async wrapper intercepts it and rejects the implicit promise with the thrown error. By the time boom() returns to the caller, the promise is already in a rejected state. The .catch() handler picks it up. The caller never sees a traditional exception, because the throw is caught internally and converted into a rejection before the promise is returned.
There's a practical consequence of the "always returns a promise" rule. You can't use async functions where a synchronous return value is expected. An async callback passed to Array.map() returns promises, and map() collects those promises into an array - it doesn't await them. An async comparator passed to Array.sort() returns a promise, and sort() coerces it to NaN, producing garbage results. The async keyword changes the function's contract in ways that aren't visible at the call site unless you read the declaration.
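A minimal illustration of the map() case:

```javascript
// map() collects whatever the callback returns - with an async callback,
// that's an array of promises, not an array of values.
const doubled = [1, 2, 3].map(async n => n * 2);
console.log(doubled.every(p => p instanceof Promise)); // true

// To recover the values, the promises must be awaited explicitly.
const values = Promise.all(doubled); // fulfills with [2, 4, 6]
```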
await Is a Suspension Point
The await keyword is where the real action happens. When V8 encounters await expr, it evaluates expr, wraps the result in a promise if it isn't one already, then suspends the current async function and returns control to the caller. The rest of the function - everything after the await - becomes a continuation that runs when the awaited promise settles.
async function example() {
console.log("A");
const val = await Promise.resolve("B");
console.log(val);
console.log("C");
}
example();
console.log("D");
Output: A, D, B, C. The first console.log("A") runs synchronously - it's before the first await. Then await suspends the function. Control returns to the caller, and console.log("D") executes. Later, when the microtask queue drains, the awaited promise's settlement resumes the function. val receives "B", and the remaining two log statements execute.
The part that catches people: await only suspends the enclosing async function. The rest of your program keeps going. The call stack doesn't freeze - it unwinds. Whatever called example() proceeds to the next statement immediately. The async function's paused state exists on the heap, not on the stack.
And here's a subtlety. Code before the first await in an async function runs synchronously. Completely. On the current call stack. The async function doesn't become asynchronous until it hits its first await. If your async function has no await at all, it runs entirely synchronously (though it still returns a promise wrapping the return value).
async function noAwait() {
console.log("sync");
return "done";
}
noAwait();
console.log("after");
Output: sync, after. The function body executes synchronously. The returned promise (fulfilled with "done") settles immediately, but any .then() handlers attached to it would still run as microtasks after the current synchronous code completes. This is the same "always async" guarantee that applies to promise handlers generally (covered in the previous subchapter).
You can await any value, by the way. await 42 works. await "hello" works. await undefined works. V8 wraps non-promise values in Promise.resolve() before awaiting them. The result is that the function suspends, a microtask is scheduled to resume it with the value, and execution continues. It's a roundabout way of deferring to the next microtask - functionally equivalent to const x = await Promise.resolve(42). There's rarely a reason to do this deliberately, but it shows up in generic code that doesn't know whether its input is a promise or a raw value.
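A sketch showing that awaiting a plain value still defers to the microtask queue:

```javascript
const order = [];

async function demo() {
  order.push("before");
  const x = await 42; // non-promise: wrapped via Promise.resolve(42)
  order.push("after:" + x);
}

demo();
order.push("sync");
// order is ["before", "sync"] here; "after:42" lands on the next microtask tick
```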
What the Engine Actually Does
The word "desugaring" gets used loosely, so let me be precise. V8 doesn't literally transform your async function source code into a .then() chain. The internal mechanism is closer to generators and coroutines. But the behavior is equivalent to a promise chain, and thinking about it that way is accurate for predicting execution order.
Take this async function:
async function fetchData(url) {
const response = await fetch(url);
const json = await response.json();
return json;
}
The behavioral equivalent using raw promises:
function fetchData(url) {
return fetch(url)
.then(response => response.json())
.then(json => json);
}
Each await becomes a .then(). The code after the first await becomes the first .then() handler. The code after the second await becomes the second .then() handler. The final return becomes the fulfillment value of the last promise in the chain.
The desugaring gets more involved with error handling:
async function fetchSafe(url) {
try {
const resp = await fetch(url);
return await resp.json();
} catch (e) {
console.error("failed:", e.message);
return null;
}
}
The promise-chain equivalent:
function fetchSafe(url) {
return fetch(url)
.then(resp => resp.json())
.catch(e => {
console.error("failed:", e.message);
return null;
});
}
The try/catch around the awaits maps to a .catch() at the end of the chain. A rejection from either fetch() or response.json() propagates down to the catch handler. The async/await version reads linearly. The promise version reads as a method chain. Both produce the same execution behavior.
The desugaring analogy breaks down in one interesting way. In a promise chain, each .then() handler is a separate function with its own scope. Variables from an earlier .then() aren't accessible in a later one unless you pass them forward. In an async function, all the code shares one scope. response is available after the second await without threading it through. This shared scope is the real ergonomic win - you write sequential code with shared local variables, and the engine handles the suspension and resumption behind the scenes.
There's a nuance that matters for precise ordering. When you await a value that's already resolved - like await 42 or await Promise.resolve("hi") - the continuation still runs as a microtask. The function still suspends and resumes. The overhead is minimal, but the suspension is real. V8 doesn't optimize it away into synchronous execution (though it has optimized the number of microtask ticks required - more on that later).
Multiple sequential awaits create a chain of suspensions:
async function three() {
const a = await step1();
const b = await step2(a);
const c = await step3(b);
return c;
}
Three await points, three suspensions, three microtask resumptions. Each step waits for the previous one to complete before starting. This is sequential execution. The total time is step1 + step2 + step3. If the steps are independent, this is unnecessarily slow - but the pattern makes that obvious, which is one of async/await's advantages over nested callbacks where the sequential nature was harder to see.
The generator connection deserves a brief mention. Before async/await was standardized (ES2017), the community used generator functions with a runner library (like co) to achieve the same pattern. You'd write function* with yield instead of async function with await, and the runner would advance the generator by calling .next() with the resolved value. The TC39 committee formalized this pattern into a language feature. The V8 implementation shares some internal machinery with generators - both need to suspend and resume execution contexts. But async functions are a distinct bytecode construct: the SuspendGenerator and ResumeGenerator opcodes are shared with generators, while the promise wiring around them is async-function-specific.
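A toy version of that pre-ES2017 runner - run here is a simplified sketch of the pattern, not the actual API of co or any other library:

```javascript
// Drive a generator to completion, feeding each yielded promise's value
// back in through .next() - the pattern async/await turned into syntax.
function run(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    function step(advance) {
      let res;
      try { res = advance(); } catch (e) { return reject(e); }
      if (res.done) return resolve(res.value);
      Promise.resolve(res.value).then(
        v => step(() => gen.next(v)),   // resume with the fulfilled value
        e => step(() => gen.throw(e))   // or throw the rejection back in
      );
    }
    step(() => gen.next());
  });
}

// yield plays the role of await:
const total = run(function* () {
  const a = yield Promise.resolve(1);
  const b = yield a + 1; // non-promise values work too
  return a + b;
}); // fulfills with 3
```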
V8's Async Function Machinery
When V8 compiles an async function, it creates an internal object called a JSAsyncFunctionObject. This object is the state machine that tracks the function's suspended execution. It stores the function's local variables, the current execution position (which await it's paused at), the promise that was created for the function's return value, and the promise currently being awaited.
The lifecycle of an async function call works like this. When you call fetchData(url), V8 creates a new promise - call it outerPromise - that will be returned to the caller. It also creates the JSAsyncFunctionObject to track the function's state. The function body starts executing on the current call stack, synchronously, just like a normal function.
When execution reaches await expr, V8 does several things in sequence. First, it evaluates expr and wraps the result in a promise if it isn't already one. It calls an internal PromiseResolve operation for this - if the value is already a native promise, it uses it directly; otherwise, it wraps it. Second, it creates a PromiseReaction (covered in the previous subchapter) on the awaited promise to resume the function when the promise settles. Third, it saves the function's execution context - all local variables, the current bytecode offset, the stack frame - into the JSAsyncFunctionObject. This context moves from the stack to the heap. Fourth, it returns control to the caller, with outerPromise as the return value of the function call.
The caller's code continues normally. The async function is now paused, its state sitting on the heap inside the JSAsyncFunctionObject. Sometime later, the awaited promise settles. This triggers the PromiseReaction, which enqueues a microtask to resume the async function. When that microtask runs, V8 restores the execution context from the JSAsyncFunctionObject, puts it back on the stack, and continues executing from the bytecode offset after the await. To the function body, it feels like await simply returned a value. But underneath, the function was split, suspended, stored on the heap, and reconstituted later.
The outerPromise (the one returned to the caller) doesn't settle until the async function either returns a value or throws. If the function has multiple awaits, outerPromise stays pending through all of them. When the function finally reaches its return statement (or falls off the end), V8 resolves outerPromise with the return value. If the function throws at any point (including a rejection from an awaited promise that isn't caught), V8 rejects outerPromise with the thrown error.
The JSAsyncFunctionObject itself inherits from JSGeneratorObject in V8's internal class hierarchy. It adds three fields: the outer promise (promise), and two reusable closures for await resume handlers (await_resolve_closure and await_reject_closure). V8 reuses these closures across multiple await expressions in the same function rather than allocating fresh ones each time. Microtask scheduling happens through PromiseReaction chains on the promise objects themselves, not through anything stored on the async function object. The object is allocated in the V8 heap's young generation (nursery) and typically stays there for short-lived async functions. For long-running suspensions (say, awaiting a network request that takes seconds), the object gets promoted to old generation during a nursery scavenge. This promotion is fine - a single JSAsyncFunctionObject is a few hundred bytes. The concern is what it transitively retains: every local variable, every closure captured by the function, every object referenced through those variables. A suspended async function is a GC root for its entire local scope.
V8 used to have a performance problem with the await implementation. Before V8 7.2, awaiting an already-resolved promise required three microtask ticks. The engine would create an extra "throwaway" promise during the await desugaring, adding two unnecessary microtask hops. The spec required this behavior to ensure timing consistency with thenable resolution, but the V8 team worked with TC39 to amend the spec. The change, shipped in V8 7.2, reduced await overhead to one microtask tick for the common case of awaiting a native, already-resolved promise. The optimization is described in the V8 blog post "Faster async functions and promises." Node 12 (which shipped with V8 7.4, incorporating this optimization) is measurably faster than Node 10 for async-heavy code, specifically because each await does less internal promise bookkeeping.
The one-tick optimization works like this. When V8 encounters await p where p is a native promise (constructed by the engine, belonging to the current realm), V8 skips the PromiseResolve wrapping step entirely. It attaches the resume reaction directly to p instead of creating an intermediate promise. This saves one promise allocation and one microtask tick per await. If p is a thenable (a non-native promise-like object), V8 falls back to the full spec algorithm with the extra wrapping, because it needs to call the thenable's .then() method to integrate with non-native promise implementations.
Another optimization worth detailing: async stack traces. When an error occurs inside an async function, the stack trace needs to show the chain of await points that led to the current execution. But those await points happened across different microtask ticks, on different call stacks. V8 solves this by storing extra frame information in the JSAsyncFunctionObject. When you await, V8 records the call site. When the function resumes, it can reconstruct the full async call chain. This is the --async-stack-traces feature, enabled by default in modern Node. The cost is a few dozen extra bytes per await point per live async function. For most applications, this is negligible. For code with millions of concurrent async functions (unlikely but possible in benchmarks), it's measurable.
The stack trace reconstruction works bottom-up. When an error is thrown, V8 walks the current synchronous stack as usual. Then it checks if the current execution context belongs to a JSAsyncFunctionObject. If so, it reads the stored caller information and appends an "async" frame to the trace. If that caller was also an async function awaiting this one, V8 follows the chain of outerPromise references up through the PromiseReaction graph, collecting frames at each level. The result is a stack trace that reads as if the await points were synchronous calls:
Error: oops
at innerFn (file.js:12:11)
at async middleFn (file.js:8:20)
at async outerFn (file.js:3:18)
The async prefix on the frame tells you it was an await boundary. Without async stack traces, you'd only see innerFn's frame, making it nearly impossible to trace how you got there in a complex async call graph.
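This is observable directly in modern Node (12+), where async stack traces are on by default - a minimal sketch:

```javascript
// The error is thrown in innerFn after a resume, but the trace still
// records outerFn as an "async" frame (Node 12+ with default flags).
async function innerFn() {
  await Promise.resolve();
  throw new Error("oops");
}

async function outerFn() {
  await innerFn();
}

let hasAsyncFrame;
outerFn().catch(e => {
  hasAsyncFrame = e.stack.includes("at async");
});
```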
The heap retention matters for production workloads. A suspended async function holds its entire local scope alive. If your function captures a large buffer, an HTTP response body, or a database result set in a local variable, that memory stays pinned until the function resumes and the variable goes out of scope. With multiple await points, the function could be suspended for an extended period - waiting on a network request, a timer, a database query - while holding references to objects that can't be garbage collected.
Execution Ordering With await
Predicting the order of operations in async/await code requires internalizing one rule: code after await runs as a microtask. Everything else follows from this.
console.log("1");
async function run() {
console.log("2");
await Promise.resolve();
console.log("3");
}
run();
console.log("4");
Output: 1, 2, 4, 3. Step by step: console.log("1") runs. run() is called. Inside run, console.log("2") runs synchronously (before the first await). The await Promise.resolve() suspends the function and returns control. console.log("4") runs. The microtask queue drains, resuming run, and console.log("3") runs.
When multiple async functions interleave, the ordering depends on when each one suspends and resumes. Two async functions called without awaiting each other run their synchronous portions immediately, then interleave at the microtask level:
Two async functions, a and b, each logging before and after a single await Promise.resolve():
async function a() {
console.log("a1");
await Promise.resolve();
console.log("a2");
}
Function b follows the same structure, logging b1 and b2. Call both: a(); b();. Output: a1, b1, a2, b2. Both functions run their synchronous code (before the first await) immediately: a1, then b1. Then both suspend. The microtask queue has two entries: the continuation of a, then the continuation of b (in the order they were suspended). Microtasks drain FIFO: a2, then b2.
A more involved scenario. Two async functions with different numbers of awaits, called together. If x has two awaits (printing x1, x2, x3) and y has one await (printing y1, y2), calling x(); y() produces: x1, y1, x2, y2, x3. Both synchronous portions run first. Then the microtask queue has continuations for x and y. x resumes, prints x2, hits its second await, and suspends again - this enqueues another continuation for x. y resumes, prints y2, and completes (no more awaits). Then x's second continuation runs and prints x3. Each await creates a new microtask checkpoint, and the interleaving follows strict FIFO order.
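That two-await / one-await scenario, written out:

```javascript
const log = [];

async function x() {
  log.push("x1");
  await Promise.resolve();
  log.push("x2");
  await Promise.resolve();
  log.push("x3");
}

async function y() {
  log.push("y1");
  await Promise.resolve();
  log.push("y2");
}

x();
y();
// Once the microtask queue drains: x1, y1, x2, y2, x3
```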
A common misconception: people assume await means "wait here until the result is ready." It does - from the perspective of the async function. But from the perspective of the caller and the event loop, await means "suspend here and let everything else continue." The async function waits. The rest of the program doesn't.
This has practical implications. If you call an async function without await-ing the returned promise, the function starts executing but you've lost the handle to its completion:
async function save(data) {
await db.insert(data);
console.log("saved");
}
save(myData); // fire and forget - no await
console.log("continuing");
The save function starts, hits its await, suspends. The console.log("continuing") runs immediately. The database insert happens in the background. If it fails, the returned promise rejects, but nobody is watching it. This is a "fire and forget" pattern - sometimes intentional, often a bug.
The fix depends on context. If you need to wait for completion:
await save(myData);
console.log("continuing after save");
If you genuinely want fire-and-forget, catch errors explicitly:
save(myData).catch(e => console.error("save failed:", e));
There's also the ordering between await and other microtask sources. Since await resumes via a microtask (technically a PromiseReactionJob), it interleaves with process.nextTick() callbacks and queueMicrotask() calls. But nextTick always runs first within a given checkpoint, because Node drains the nextTick queue before V8's microtask queue (covered in the previous subchapter). An await continuation that's enqueued at the same time as a nextTick callback will run after the nextTick:
async function run() {
await Promise.resolve();
console.log("await");
}
run();
process.nextTick(() => console.log("nextTick"));
Output: nextTick, await. The async function suspends at await, queueing a microtask to resume. process.nextTick queues a callback. The microtask checkpoint drains nextTick first, then the V8 microtask queue. This ordering is deterministic.
Error Handling in async Functions
try/catch works naturally with await. When an awaited promise rejects, the rejection is thrown as an exception at the await expression, and catch receives it:
async function loadUser(id) {
try {
const user = await fetchUser(id);
return user;
} catch (e) {
console.error("fetch failed:", e.message);
return null;
}
}
The rejected promise from fetchUser(id) becomes a thrown error at the await site. The catch block handles it. This is one of async/await's biggest improvements over raw promise chains - error handling uses the same try/catch syntax that developers already know from synchronous code. Under the hood, V8 implements this by checking the awaited promise's settlement type when resuming the async function. If the promise rejected, V8 throws the rejection reason instead of returning it. The throw goes through the normal exception handling path, which finds the surrounding try/catch in the bytecode.
Without try/catch, the rejection propagates to the async function's returned promise:
async function loadUser(id) {
const user = await fetchUser(id);
return user;
}
loadUser(99).catch(e => console.error(e.message));
If fetchUser rejects, loadUser rejects, and the .catch() on the outside picks it up. The rejection travels up the async call chain - each async function's returned promise rejects in turn - until something catches it. If nothing catches it, you get an unhandled rejection (covered in the previous subchapter).
You can catch errors from multiple await expressions with a single try/catch block:
async function pipeline() {
try {
const raw = await fetchData();
const parsed = await parseData(raw);
return await saveData(parsed);
} catch (e) {
console.error("pipeline failed:", e);
}
}
Any one of the three awaited operations can reject, and the same catch handles all three. The downside: you don't know which step failed without inspecting the error. Sometimes that's fine. Sometimes you need step-specific error handling:
async function pipeline() {
let raw;
try { raw = await fetchData(); }
catch (e) { throw new Error("fetch: " + e.message); }
return await parseData(raw);
}
Only the fetch has specific error handling here. If parseData rejects, the error propagates as-is to the caller. You can mix granular and broad try/catch blocks in the same function - individual blocks around operations that need specific handling, and a broader block (or no block) for operations where generic propagation is fine.
There's a gotcha that bites people regularly. Forgetting to await a promise means errors from it are silently lost:
async function process() {
doSomethingAsync(); // no await!
console.log("done");
}
doSomethingAsync() returns a promise. If it rejects, nothing catches it - the promise is created and immediately discarded. The unhandled rejection handler might fire, but in busy code, these are easy to miss. Linters help (no-floating-promises in typescript-eslint), but it's a trap that even experienced developers fall into when refactoring. I've seen production incidents caused by exactly this - someone removes await during a refactor, the function appears to work (the happy path doesn't need the result), and the error path silently breaks.
Another subtle case: returning a promise without await inside a try/catch:
async function risky() {
try {
return doSomethingAsync(); // no await before return
} catch (e) {
console.error("caught:", e); // never fires!
}
}
The try/catch doesn't catch rejections from doSomethingAsync() here. The promise is returned directly without being awaited, so the rejection happens after the function has already completed its try block. The catch block would only fire if doSomethingAsync() threw synchronously during the initial function call (before returning a promise), which is unusual for well-behaved async APIs. To catch the error, you need return await doSomethingAsync(). This is one of the few cases where return await is meaningful, despite the general advice that the extra await is redundant.
The mechanics here are specific. When you write return expr in an async function, V8 evaluates expr, resolves outerPromise with the result, and exits the function. If expr is a rejected promise, outerPromise adopts the rejection - but the catch block in the function has already been exited. When you write return await expr, V8 evaluates expr, then awaits it. The await suspends the function inside the try block. If the awaited promise rejects, the throw happens while the try/catch is still active, and the catch block fires. The difference is whether the try block is active at the time the rejection is observed.
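A side-by-side sketch of the two shapes, with failing() standing in for any rejecting async call:

```javascript
async function failing() {
  return Promise.reject(new Error("nope"));
}

async function withoutAwait() {
  try {
    return failing();       // rejection observed after the try has exited
  } catch (e) {
    return "caught";        // never runs
  }
}

async function withAwait() {
  try {
    return await failing(); // rejection thrown while the try is active
  } catch (e) {
    return "caught";        // runs
  }
}

const outcome = {};
withoutAwait().catch(e => { outcome.without = "rejected:" + e.message; });
withAwait().then(v => { outcome.with = v; });
// outcome ends up as { without: "rejected:nope", with: "caught" }
```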
There's also an error handling pattern that comes up in cleanup scenarios. finally blocks work with async functions too:
async function withLock(resource, fn) {
await resource.lock();
try {
return await fn();
} finally {
await resource.unlock();
}
}
The finally block runs regardless of whether fn() succeeds or fails. The await inside finally works normally - the function suspends during cleanup. If both fn() and resource.unlock() reject, the rejection from finally wins (it replaces the original error). This is the same behavior as synchronous try/finally where a throw in finally replaces the original exception.
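A compact demonstration of that replacement behavior:

```javascript
// Both the try body and the awaited cleanup fail; the caller sees
// only the cleanup error, because the finally rejection replaces it.
async function demo() {
  try {
    throw new Error("original");
  } finally {
    await Promise.reject(new Error("cleanup failed"));
  }
}

let seen;
demo().catch(e => { seen = e.message; }); // seen becomes "cleanup failed"
```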
Common Patterns and Anti-Patterns
The "unnecessary async" pattern shows up in a lot of code:
async function getData() {
return await fetch("/api/data");
}
The await here is technically redundant. Without it, the function would return the promise from fetch() directly, and the caller would see the same result. The version without await:
async function getData() {
return fetch("/api/data");
}
Both work the same from the caller's perspective. But the return await version has two properties the plain return doesn't: the async function's try/catch can intercept rejections (as discussed above), and the async stack trace includes this function's frame. For those reasons, return await inside a try/catch is correct and intentional. Outside a try/catch, it's just a wasted microtask tick. Some linters (eslint's no-return-await rule) flag this, though the rule has been controversial precisely because of the try/catch exception.
Sequential await in loops is the most common performance trap:
async function fetchAll(urls) {
const results = [];
for (const url of urls) {
const res = await fetch(url);
results.push(await res.json());
}
return results;
}
Each iteration waits for the previous one to complete. If you have 10 URLs and each takes 100ms, total time is ~1000ms. The operations are independent - they could run concurrently. The concurrent version uses Promise.all() (detailed coverage in subchapter 06):
async function fetchAll(urls) {
const responses = await Promise.all(
urls.map(url => fetch(url))
);
return Promise.all(responses.map(r => r.json()));
}
All fetches start simultaneously. Total time is ~100ms (the slowest one). The sequential version is correct - it works - but it serializes inherently parallel work. The key insight: calling an async function starts the work immediately and returns a promise. The await is where you wait for the result. If you collect the promises first and await them together, the operations overlap.
Sometimes sequential execution is actually what you want. Processing database migrations in order, sending messages where order matters, rate-limited API calls - these require one-at-a-time execution. The for...of loop with await is the right pattern for those cases. The anti-pattern is when the operations are independent and you serialize them by accident.
The forEach trap is more insidious:
const urls = ["/api/a", "/api/b", "/api/c"];
urls.forEach(async (url) => {
const res = await fetch(url);
const data = await res.json();
console.log(data);
});
console.log("all done?"); // runs immediately!
forEach calls your async callback for each element but doesn't await the returned promises. It ignores them entirely. forEach's implementation is roughly for (let i = 0; i < this.length; i++) { callback(this[i], i, this) } - it calls the callback and discards the return value. When the callback is async, the return value is a promise, and that promise goes nowhere. The console.log("all done?") runs before any fetch completes. There's no way to know when the async operations finish. There's no way to catch their errors either, unless they individually have .catch() handlers inside the callback.
The fix is a for...of loop (for sequential) or Promise.all with .map() (for concurrent). for...of works with await because it's a regular loop statement, and await inside a regular for loop suspends the enclosing async function as expected. The same problem applies to .filter(), .reduce(), .some(), and .every() - none of these array methods understand promises. Passing async callbacks to them produces confusing results.
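The concurrent fix, sketched with fakeFetch as a stand-in for the real fetch:

```javascript
// Each url maps to a promise immediately; Promise.all awaits them together.
const fakeFetch = url =>
  new Promise(resolve => setTimeout(() => resolve(url.toUpperCase()), 10));

async function fetchAllConcurrent(urls) {
  return Promise.all(urls.map(url => fakeFetch(url)));
}

const all = fetchAllConcurrent(["/api/a", "/api/b", "/api/c"]);
// fulfills with ["/API/A", "/API/B", "/API/C"]; the three 10ms timers overlap
```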
The async IIFE pattern was common before top-level await (covered in Chapter 1) became available in ES modules:
(async () => {
const config = await loadConfig();
const server = await startServer(config);
console.log("listening on", server.address().port);
})();
An immediately-invoked async function expression. You create an async arrow function and call it on the same line. The pattern still appears in CommonJS code, which doesn't support top-level await. In ESM, you can just use await at the module scope directly.
There's one more pattern worth examining: choosing between returning a promise directly and wrapping it in async/await:
// Option A: direct return
function getUser(id) {
return db.query("SELECT * FROM users WHERE id = ?", [id]);
}
Compare with:
// Option B: async wrapper
async function getUser(id) {
return await db.query(
"SELECT * FROM users WHERE id = ?", [id]
);
}
Option A returns the promise from db.query directly. No async function overhead. No extra microtask tick. No JSAsyncFunctionObject allocation. Option B wraps it in an async function, adds one JSAsyncFunctionObject allocation, and adds one microtask hop for the await. Functionally identical from the caller's perspective.
Option A is leaner. But Option B gives you a place to add error handling, logging, or transformation later without changing the function signature. In most codebases, the overhead of Option B is irrelevant, and the maintainability argument wins. In a database driver processing 50,000 queries per second, Option A's fewer allocations matter.
One more anti-pattern I see regularly: creating a new Promise inside an async function:
async function wait(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}

This is actually fine - setTimeout is callback-based, so wrapping it in a Promise constructor is the right approach. But this version is problematic:
async function getData() {
return new Promise(async (resolve, reject) => {
const data = await fetch("/api");
resolve(data);
});
}

An async executor inside new Promise(). The outer function is already async - it already returns a promise. The new Promise wrapper is redundant, and the async executor adds a second layer of promise wrapping. Worse, if the await throws, the rejection lands on the async executor's own implicit promise, which nothing observes: resolve is never called, reject is never called, so the outer promise stays pending forever while the error surfaces as an unhandled rejection. The fix: drop the new Promise wrapper entirely and let the async function's own promise carry the result.
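The fixed version, sketched with a hypothetical fetchData standing in for fetch("/api"): the async function's implicit promise carries both the value and any rejection.

```javascript
// Hypothetical stand-in for fetch("/api").
const fetchData = async () => ({ ok: true, items: [] });

// No Promise constructor, no async executor: if fetchData rejects,
// getData's own promise rejects, and the caller can catch it.
async function getData() {
  return fetchData();
}
```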
Performance Considerations
Each await has a cost: one microtask tick, one PromiseReaction creation, one suspension and resumption of the JSAsyncFunctionObject. The absolute overhead is small - microseconds per await - but it accumulates in hot paths.
V8's optimizations over the years have reduced this cost substantially. The V8 7.2 change (shipped in Node 12 via V8 7.4) eliminated the extra "throwaway" promise per await, reducing the microtask ticks from three to one. The V8 team's benchmarks showed up to an 8x improvement for Promise.all() workloads from this single change. Later versions continued optimizing: better inline caching for PromiseResolve, faster JSAsyncFunctionObject allocation, and TurboFan's ability to partially inline simple async function bodies.
In practice, raw .then() chains can be marginally faster than async/await because they avoid the JSAsyncFunctionObject overhead. There's no suspension/resumption machinery - the .then() handler is just a function that V8 calls when the microtask fires. The difference is usually around 5-15% in microbenchmarks. In real applications where the awaited operation takes milliseconds (network I/O, database queries, file reads), the microseconds saved by raw .then() are noise.
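The two styles side by side, with a hypothetical stand-in db: both return the same promise-of-a-count, but only the second allocates a JSAsyncFunctionObject.

```javascript
// Hypothetical stand-in for an async data source.
const db = { query: async () => ({ rows: [10, 20, 30] }) };

// Raw .then(): the handler is just a function V8 calls when the
// microtask fires - no suspension/resumption machinery.
function countRows(sql) {
  return db.query(sql).then((result) => result.rows.length);
}

// async/await equivalent: same observable behavior, plus one
// JSAsyncFunctionObject allocation and one suspend/resume.
async function countRowsAwait(sql) {
  const result = await db.query(sql);
  return result.rows.length;
}
```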
The tradeoff is almost always worth taking: async/await gives you readable sequential flow, natural error handling with try/catch, and better stack traces. Raw .then() chains are harder to read, harder to debug, and harder to maintain. Use .then() in library internals where you've profiled and found the overhead matters. Use async/await everywhere else.
Memory is the less obvious cost. A suspended async function holds its JSAsyncFunctionObject on the heap, which includes all local variables. If you have 10,000 concurrent HTTP requests, each handled by an async function that's suspended at an await db.query(), that's 10,000 JSAsyncFunctionObject instances on the heap, each holding references to their local variables. The objects themselves are modest - probably a few hundred bytes each. But the local variables they reference might include request bodies, parsed JSON, Buffer instances, or other substantial data.
The practical advice: keep async function scopes lean. Don't declare variables you don't need. Don't hold large data in variables across await boundaries. If you've already processed a large buffer and only need a small result from it, let the buffer go before the next await:
async function handle(req) {
let body = await readBody(req);
const parsed = parseRequest(body);
body = null; // release the raw body before the DB await
return await db.save(parsed);
}

Nulling body before the database await lets the GC collect the raw body while the database operation is in flight. Without that line, body stays alive for the entire duration of the db.save, pinned by the JSAsyncFunctionObject's scope. In a server handling thousands of requests, the difference between retaining and releasing a 1MB request body across a 50ms database call adds up quickly.
Debugging async code has gotten much better. V8's async stack traces (enabled by default since Node 12) reconstruct the call chain across await boundaries. When an error occurs, the stack trace shows where the error was thrown and includes async frames showing which function was awaiting what. The --inspect flag opens the Chrome DevTools protocol, which can step through async functions, pause at await points, and inspect the JSAsyncFunctionObject's state. You can see local variables, the current bytecode position, and which promise is being awaited - all from the debugger's scope panel.
One last performance note: avoid creating async functions in hot loops. Each invocation allocates a JSAsyncFunctionObject. If you're mapping over an array with an async callback, each element gets its own async function instance:
const results = await Promise.all(
items.map(async item => {
const processed = await transform(item);
return processed;
})
);

For 10,000 items, that's 10,000 JSAsyncFunctionObject allocations. V8's nursery handles this fine for most cases, but if the items array is very large and the transform operations are fast, you might see GC pressure from the sheer number of short-lived objects. For extreme cases, batching (process 100 at a time) or using raw .then() on the inner loop can reduce allocation pressure. For normal application code, this optimization is premature.
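A sketch of the batching idea, with a hypothetical per-item transform: at most batchSize async callbacks (and their JSAsyncFunctionObjects) are live at any moment.

```javascript
// Hypothetical per-item async transform.
const transform = async (item) => item * 2;

async function mapInBatches(items, batchSize, fn) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // Full concurrency within the batch, sequential across batches.
    results.push(...(await Promise.all(batch.map(fn))));
  }
  return results;
}
```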