Environment
- @google-cloud/bigtable version: 6.5.0 (also reproduces on main, see below)
- Node.js: v24.14.1
- OS: Linux (Debian 12) — not OS-specific
Summary
When Table#mutate / Table#insert encounters an RPC-level failure with entries still pending and no per-entry mutation errors, mutateInternal sets err.errors to an array that includes err itself, producing a self-referential error object.
This makes the returned error object unsafe to serialize with any tool that follows the AggregateError convention of recursing into err.errors (e.g. pino-std-serializers, most structured loggers, some error-reporting SDKs) — it causes RangeError: Maximum call stack size exceeded.
Offending code
https://github.com/googleapis/google-cloud-node/blob/main/handwritten/bigtable/src/utils/mutateInternal.ts
(Same code also present in the now-archived googleapis/nodejs-bigtable repo at src/utils/mutateInternal.ts.)
if (err) {
/* If there's an RPC level failure and the mutation entries don't have
a status code, the RPC level failure error code will be used as the
entry failure code.
*/
(err as ServiceError & {errors?: ServiceError[]}).errors =
mutationErrors.concat(
[...pendingEntryIndices]
.filter(index => !mutationErrorsByEntryIndex.has(index))
.map(() => err), // ← pushes `err` into its own `err.errors`
);
collectMetricsCallback(err, err);
return;
}
For each pending entry with no per-row status, the outer err is pushed into err.errors. After this runs, err.errors[i] === err for every i, forming a cycle.
Minimal repro
const { pino } = require('pino');
const log = pino();
// Shape of what `mutateInternal` hands back on RPC-level failure with pending entries:
const err = new Error('RPC failure');
err.code = 14; // UNAVAILABLE
err.errors = [err, err, err]; // one self-reference per pending entry
log.error({ err }, 'mutateRows failed');
// RangeError: Maximum call stack size exceeded
// at errSerializer (pino-std-serializers/lib/err.js:25)
Observed impact
In our production service, a single Bigtable RPC failure has triggered this thousands of times — each self-referential error crashes the logging pipeline before the underlying Bigtable failure can be reported, so we lost all upstream diagnostic detail about the original RPC error.
Suggested fix
Don't reuse the outer err as a placeholder for pending entries. Options:
- Use a clone of err (a fresh Error with the same message / code / details) for each pending-entry placeholder, so the aggregate contains siblings, not self-references.
- Use a distinct sentinel error (e.g. new Error('pending entry — RPC failed with: ' + err.message)) for each pending entry.
- Or simply skip padding with the outer error for pending entries and keep err.errors = mutationErrors — the top-level err.message/err.code already communicate the RPC-level failure.
All three avoid the cycle and remain serializable by standard logging / telemetry libraries.
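A minimal sketch of the first option, as plain JavaScript. The function name and parameter shapes here are illustrative only (they mirror the variables in the offending snippet, not any actual library API):

```javascript
// Sketch: build the aggregate error list using fresh sibling errors for
// pending entries, instead of reusing the outer `err` itself.
// All names are hypothetical; adapt to mutateInternal's real locals.
function buildEntryErrors(err, mutationErrors, pendingEntryIndices, mutationErrorsByEntryIndex) {
  const placeholders = [...pendingEntryIndices]
    .filter(index => !mutationErrorsByEntryIndex.has(index))
    .map(() => {
      // Clone carries the same message/code but is a distinct object,
      // so err.errors never contains err itself — no cycle.
      const clone = new Error(err.message);
      clone.code = err.code;
      return clone;
    });
  return mutationErrors.concat(placeholders);
}
```

The aggregate then serializes cleanly: a logger recursing into err.errors hits leaf errors, not the parent again.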
Workaround for current users
Until this is fixed in a release, consumers have to add a cycle-breaking serializer in front of their logger (we did this via a custom pino err serializer that walks err.errors with a WeakSet and replaces cyclic references before delegating to stdSerializers.err).