Node.js v20.10.0 Manual & Documentation
PDF created by Elijah Echekwu
Content by Node.js and its creators
About this documentation
Welcome to the official API reference documentation for Node.js!
Node.js is a JavaScript runtime built on the V8 JavaScript engine.
Contributing
Report errors in this documentation in the issue tracker. See the
contributing guide for directions on how to submit pull requests.
Stability index
Throughout the documentation are indications of a section’s
stability. Some APIs are so proven and so relied upon that they are
unlikely to ever change at all. Others are brand new and
experimental, or known to be hazardous.
The stability indices are as follows:
Stability: 0 - Deprecated. The feature may emit warnings.
Backward compatibility is not guaranteed.
Stability: 1 - Experimental. The feature is not subject to semantic
versioning rules. Non-backward compatible changes or removal
may occur in any future release. Use of the feature is not
recommended in production environments.
Experimental features are subdivided into stages:
1.0 - Early development. Experimental features at this stage
are unfinished and subject to substantial change.
1.1 - Active development. Experimental features at this stage
are nearing minimum viability.
1.2 - Release candidate. Experimental features at this stage
are hopefully ready to become stable. No further breaking
changes are anticipated but may still occur in response to
user feedback. We encourage user testing and feedback so
that we can know that this feature is ready to be marked as
stable.
Stability: 2 - Stable. Compatibility with the npm ecosystem is a
high priority.
Stability: 3 - Legacy. Although this feature is unlikely to be
removed and is still covered by semantic versioning guarantees,
it is no longer actively maintained, and other alternatives are
available.
Features are marked as legacy rather than being deprecated if their
use does no harm, and they are widely relied upon within the npm
ecosystem. Bugs found in legacy features are unlikely to be fixed.
Use caution when making use of Experimental features, particularly
when authoring libraries. Users may not be aware that experimental
features are being used. Bugs or behavior changes may surprise users
when Experimental API modifications occur. To avoid surprises, use
of an Experimental feature may need a command-line flag.
Experimental features may also emit a warning.
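For instance, some experimental features must be enabled explicitly with a flag when launching Node.js. The flag below is one real example; each experimental feature documents its own flag, if any:
node --experimental-vm-modules app.js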
Stability overview
(An overview table listing the stability index of each module appears here in the original documentation.)
JSON output
Every .html document has a corresponding .json document. This is
for IDEs and other utilities that consume the documentation.
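As a hedged illustration of this layout: the assert documentation at https://nodejs.org/api/assert.html has a machine-readable twin at https://nodejs.org/api/assert.json, which a tool might load as follows (run as an ES module; global fetch is available in Node.js 18+):
// Fetch the machine-readable version of the assert documentation.
const res = await fetch('https://nodejs.org/api/assert.json');
const doc = await res.json();
// Inspect the top-level structure rather than assuming a specific schema.
console.log(Object.keys(doc));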
System calls and man pages
Node.js functions which wrap a system call will document that. The
docs link to the corresponding man pages which describe how the
system call works.
Most Unix system calls have Windows analogues. Still, behavior
differences may be unavoidable.
Usage and example
Usage
node [options] [V8 options] [script.js | -e "script" | - ] [arguments]
Please see the Command-line options document for more
information.
Example
An example of a web server written with Node.js which responds
with 'Hello, World!':
Commands in this document start with $ or > to replicate how they
would appear in a user’s terminal. Do not include the $ and >
characters. They are there to show the start of each command.
Lines that don’t start with $ or > character show the output of the
previous command.
First, make sure to have downloaded and installed Node.js. See
Installing Node.js via package manager for further install
information.
Now, create an empty project folder called projects, then navigate
into it.
Linux and Mac:
mkdir ~/projects
cd ~/projects
Windows CMD:
mkdir %USERPROFILE%\projects
cd %USERPROFILE%\projects
Windows PowerShell:
mkdir $env:USERPROFILE\projects
cd $env:USERPROFILE\projects
Next, create a new source file in the projects folder and call it hello-world.js.
Open hello-world.js in any preferred text editor and paste in the
following content:
const http = require('node:http');
const hostname = '127.0.0.1';
const port = 3000;
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello, World!\n');
});
server.listen(port, hostname, () => {
console.log(`Server running at http://${hostname}:${port}/`);
});
Save the file. Then, in the terminal window, to run the hello-world.js
file, enter:
node hello-world.js
Output like this should appear in the terminal:
Server running at http://127.0.0.1:3000/
Now, open any preferred web browser and visit
http://127.0.0.1:3000.
If the browser displays the string Hello, World!, that indicates the
server is working.
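The server can also be checked from a second terminal with a command-line HTTP client such as curl:
$ curl http://127.0.0.1:3000/
Hello, World!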
Assert
Stability: 2 - Stable
The node:assert module provides a set of assertion functions for
verifying invariants.
Strict assertion mode
In strict assertion mode, non-strict methods behave like their
corresponding strict methods. For example, assert.deepEqual() will
behave like assert.deepStrictEqual().
In strict assertion mode, error messages for objects display a diff. In
legacy assertion mode, error messages for objects display the objects,
often truncated.
To use strict assertion mode:
import { strict as assert } from 'node:assert';
const assert = require('node:assert').strict;
import assert from 'node:assert/strict';
const assert = require('node:assert/strict');
Example error diff:
import { strict as assert } from 'node:assert';
assert.deepEqual([[[1, 2, 3]], 4, 5], [[[1, 2, '3']], 4, 5]);
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected ... Lines skipped
//
// [
// [
// ...
// 2,
// + 3
// - '3'
// ],
// ...
// 5
// ]
const assert = require('node:assert/strict');
assert.deepEqual([[[1, 2, 3]], 4, 5], [[[1, 2, '3']], 4, 5]);
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected ... Lines skipped
//
// [
// [
// ...
// 2,
// + 3
// - '3'
// ],
// ...
// 5
// ]
To deactivate the colors, use the NO_COLOR or NODE_DISABLE_COLORS
environment variables. This will also deactivate the colors in the
REPL. For more on color support in terminal environments, read the
tty getColorDepth() documentation.
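For example, setting either variable when launching Node.js disables the colored diff for that run:
NO_COLOR=1 node test.js
NODE_DISABLE_COLORS=1 node test.js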
Legacy assertion mode
Legacy assertion mode uses the == operator in:
assert.deepEqual()
assert.equal()
assert.notDeepEqual()
assert.notEqual()
To use legacy assertion mode:
import assert from 'node:assert';
const assert = require('node:assert');
Legacy assertion mode may have surprising results, especially when
using assert.deepEqual():
// WARNING: This does not throw an AssertionError in legacy assertion mode!
assert.deepEqual(/a/gi, new Date());
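For contrast, a minimal sketch of the same comparison in strict assertion mode, where the differing type tags make it fail:
const { deepStrictEqual } = require('node:assert');
deepStrictEqual(/a/gi, new Date());
// AssertionError: Expected inputs to be strictly deep-equal: ...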
Class: assert.AssertionError
Extends: {errors.Error}
Indicates the failure of an assertion. All errors thrown by the
node:assert module will be instances of the AssertionError class.
new assert.AssertionError(options)
options {Object}
message {string} If provided, the error message is set to this
value.
actual {any} The actual property on the error instance.
expected {any} The expected property on the error instance.
operator {string} The operator property on the error instance.
stackStartFn {Function} If provided, the generated stack
trace omits frames before this function.
A subclass of Error that indicates the failure of an assertion.
All instances contain the built-in Error properties (message and name)
and:
actual {any} Set to the actual argument for methods such as
assert.strictEqual().
expected {any} Set to the expected value for methods such as
assert.strictEqual().
generatedMessage {boolean} Indicates if the message was auto-
generated (true) or not.
code {string} Value is always ERR_ASSERTION to show that the error
is an assertion error.
operator {string} Set to the passed in operator value.
import assert from 'node:assert';
// Generate an AssertionError to compare the error message later:
const { message } = new assert.AssertionError({
actual: 1,
expected: 2,
operator: 'strictEqual',
});
// Verify error output:
try {
assert.strictEqual(1, 2);
} catch (err) {
assert(err instanceof assert.AssertionError);
assert.strictEqual(err.message, message);
assert.strictEqual(err.name, 'AssertionError');
assert.strictEqual(err.actual, 1);
assert.strictEqual(err.expected, 2);
assert.strictEqual(err.code, 'ERR_ASSERTION');
assert.strictEqual(err.operator, 'strictEqual');
assert.strictEqual(err.generatedMessage, true);
}
const assert = require('node:assert');
// Generate an AssertionError to compare the error message later:
const { message } = new assert.AssertionError({
actual: 1,
expected: 2,
operator: 'strictEqual',
});
// Verify error output:
try {
assert.strictEqual(1, 2);
} catch (err) {
assert(err instanceof assert.AssertionError);
assert.strictEqual(err.message, message);
assert.strictEqual(err.name, 'AssertionError');
assert.strictEqual(err.actual, 1);
assert.strictEqual(err.expected, 2);
assert.strictEqual(err.code, 'ERR_ASSERTION');
assert.strictEqual(err.operator, 'strictEqual');
assert.strictEqual(err.generatedMessage, true);
}
Class: assert.CallTracker
Stability: 0 - Deprecated
This feature is deprecated and will be removed in a future version.
Please consider using alternatives such as the mock helper function.
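As a brief, hedged sketch of the suggested alternative, the mock helper from the node:test module (available in Node.js 20) can track calls without a separate verification step:
const { mock } = require('node:test');
const assert = require('node:assert');
const fn = mock.fn();
fn(1, 2);
// The mock records every invocation, including its arguments.
assert.strictEqual(fn.mock.callCount(), 1);
assert.deepStrictEqual(fn.mock.calls[0].arguments, [1, 2]);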
new assert.CallTracker()
Creates a new CallTracker object which can be used to track if
functions were called a specific number of times. tracker.verify()
must be called for the verification to take place. The usual pattern
would be to call it in a process.on('exit') handler.
import assert from 'node:assert';
import process from 'node:process';
const tracker = new assert.CallTracker();
function func() {}
// callsfunc() must be called exactly 1 time before tracker.verify()
const callsfunc = tracker.calls(func, 1);
callsfunc();
// Calls tracker.verify() and verifies if all tracker.calls() functions have
// been called exact times.
process.on('exit', () => {
tracker.verify();
});
const assert = require('node:assert');
const tracker = new assert.CallTracker();
function func() {}
// callsfunc() must be called exactly 1 time before tracker.verify()
const callsfunc = tracker.calls(func, 1);
callsfunc();
// Calls tracker.verify() and verifies if all tracker.calls() functions have
// been called exact times.
process.on('exit', () => {
tracker.verify();
});
tracker.calls([fn][, exact])
fn {Function} Default: A no-op function.
exact {number} Default: 1.
Returns: {Function} that wraps fn.
The wrapper function is expected to be called exactly exact times. If
the function has not been called exactly exact times when
tracker.verify() is called, then tracker.verify() will throw an error.
import assert from 'node:assert';
// Creates call tracker.
const tracker = new assert.CallTracker();
function func() {}
// Returns a function that wraps func() that must be called exact times
// before tracker.verify().
const callsfunc = tracker.calls(func);
const assert = require('node:assert');
// Creates call tracker.
const tracker = new assert.CallTracker();
function func() {}
// Returns a function that wraps func() that must be called exact times
// before tracker.verify().
const callsfunc = tracker.calls(func);
tracker.getCalls(fn)
fn {Function}.
Returns: {Array} with all the calls to a tracked function.
Object {Object}
thisArg {Object}
arguments {Array} the arguments passed to the tracked
function
import assert from 'node:assert';
const tracker = new assert.CallTracker();
function func() {}
const callsfunc = tracker.calls(func);
callsfunc(1, 2, 3);
assert.deepStrictEqual(tracker.getCalls(callsfunc),
                       [{ thisArg: undefined, arguments: [1, 2, 3] }]);
const assert = require('node:assert');
// Creates call tracker.
const tracker = new assert.CallTracker();
function func() {}
const callsfunc = tracker.calls(func);
callsfunc(1, 2, 3);
assert.deepStrictEqual(tracker.getCalls(callsfunc),
                       [{ thisArg: undefined, arguments: [1, 2, 3] }]);
tracker.report()
Returns: {Array} of objects containing information about the
wrapper functions returned by tracker.calls().
Object {Object}
message {string}
actual {number} The actual number of times the function
was called.
expected {number} The number of times the function was
expected to be called.
operator {string} The name of the function that is wrapped.
stack {Object} A stack trace of the function.
The array contains information about the expected and actual
number of calls of the functions that have not been called the
expected number of times.
import assert from 'node:assert';
// Creates call tracker.
const tracker = new assert.CallTracker();
function func() {}
// Returns a function that wraps func() that must be called exact times
// before tracker.verify().
const callsfunc = tracker.calls(func, 2);
// Returns an array containing information on callsfunc()
console.log(tracker.report());
// [
// {
// message: 'Expected the func function to be executed 2 time(s) but was
// executed 0 time(s).',
// actual: 0,
// expected: 2,
// operator: 'func',
// stack: stack trace
// }
// ]
const assert = require('node:assert');
// Creates call tracker.
const tracker = new assert.CallTracker();
function func() {}
// Returns a function that wraps func() that must be called exact times
// before tracker.verify().
const callsfunc = tracker.calls(func, 2);
// Returns an array containing information on callsfunc()
console.log(tracker.report());
// [
// {
// message: 'Expected the func function to be executed 2 time(s) but was
// executed 0 time(s).',
// actual: 0,
// expected: 2,
// operator: 'func',
// stack: stack trace
// }
// ]
tracker.reset([fn])
fn {Function} a tracked function to reset.
Resets calls of the call tracker. If a tracked function is passed as an
argument, the calls will be reset for it. If no arguments are passed, all
tracked functions will be reset.
import assert from 'node:assert';
const tracker = new assert.CallTracker();
function func() {}
const callsfunc = tracker.calls(func);
callsfunc();
// Tracker was called once
assert.strictEqual(tracker.getCalls(callsfunc).length, 1);
tracker.reset(callsfunc);
assert.strictEqual(tracker.getCalls(callsfunc).length, 0);
const assert = require('node:assert');
const tracker = new assert.CallTracker();
function func() {}
const callsfunc = tracker.calls(func);
callsfunc();
// Tracker was called once
assert.strictEqual(tracker.getCalls(callsfunc).length, 1);
tracker.reset(callsfunc);
assert.strictEqual(tracker.getCalls(callsfunc).length, 0);
tracker.verify()
Iterates through the list of functions passed to tracker.calls() and
will throw an error for functions that have not been called the
expected number of times.
import assert from 'node:assert';
// Creates call tracker.
const tracker = new assert.CallTracker();
function func() {}
// Returns a function that wraps func() that must be called exact times
// before tracker.verify().
const callsfunc = tracker.calls(func, 2);
callsfunc();
// Will throw an error since callsfunc() was only called once.
tracker.verify();
const assert = require('node:assert');
// Creates call tracker.
const tracker = new assert.CallTracker();
function func() {}
// Returns a function that wraps func() that must be called exact times
// before tracker.verify().
const callsfunc = tracker.calls(func, 2);
callsfunc();
// Will throw an error since callsfunc() was only called once.
tracker.verify();
assert(value[, message])
value {any} The input that is checked for being truthy.
message {string|Error}
An alias of assert.ok().
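A minimal sketch showing that the two forms are interchangeable:
const assert = require('node:assert');
assert(1 === 1);    // OK
assert.ok(1 === 1); // OK, identical behavior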
assert.deepEqual(actual, expected[, message])
actual {any}
expected {any}
message {string|Error}
Strict assertion mode
An alias of assert.deepStrictEqual().
Legacy assertion mode
Stability: 3 - Legacy: Use assert.deepStrictEqual() instead.
Tests for deep equality between the actual and expected parameters.
Consider using assert.deepStrictEqual() instead. assert.deepEqual()
can have surprising results.
Deep equality means that the enumerable “own” properties of child
objects are also recursively evaluated by the following rules.
Comparison details
Primitive values are compared with the == operator, with the
exception of NaN. It is treated as being identical in case both sides
are NaN.
Type tags of objects should be the same.
Only enumerable “own” properties are considered.
Error names and messages are always compared, even if these are
not enumerable properties.
Object wrappers are compared both as objects and unwrapped
values.
Object properties are compared unordered.
Map keys and Set items are compared unordered.
Recursion stops when both sides differ or both sides encounter a
circular reference.
Implementation does not test the [[Prototype]] of objects.
Symbol properties are not compared.
WeakMap and WeakSet comparison does not rely on their values.
RegExp lastIndex, flags, and source are always compared, even if
these are not enumerable properties.
The following example does not throw an AssertionError because the
primitives are compared using the == operator.
import assert from 'node:assert';
// WARNING: This does not throw an AssertionError!
assert.deepEqual('+00000000', false);
const assert = require('node:assert');
// WARNING: This does not throw an AssertionError!
assert.deepEqual('+00000000', false);
“Deep” equality means that the enumerable “own” properties of child
objects are evaluated also:
import assert from 'node:assert';
const obj1 = {
a: {
b: 1,
},
};
const obj2 = {
a: {
b: 2,
},
};
const obj3 = {
a: {
b: 1,
},
};
const obj4 = { __proto__: obj1 };
assert.deepEqual(obj1, obj1);
// OK
// Values of b are different:
assert.deepEqual(obj1, obj2);
// AssertionError: { a: { b: 1 } } deepEqual { a: { b: 2 } }
assert.deepEqual(obj1, obj3);
// OK
// Prototypes are ignored:
assert.deepEqual(obj1, obj4);
// AssertionError: { a: { b: 1 } } deepEqual {}
const assert = require('node:assert');
const obj1 = {
a: {
b: 1,
},
};
const obj2 = {
a: {
b: 2,
},
};
const obj3 = {
a: {
b: 1,
},
};
const obj4 = { __proto__: obj1 };
assert.deepEqual(obj1, obj1);
// OK
// Values of b are different:
assert.deepEqual(obj1, obj2);
// AssertionError: { a: { b: 1 } } deepEqual { a: { b: 2 } }
assert.deepEqual(obj1, obj3);
// OK
// Prototypes are ignored:
assert.deepEqual(obj1, obj4);
// AssertionError: { a: { b: 1 } } deepEqual {}
If the values are not equal, an AssertionError is thrown with a message
property set equal to the value of the message parameter. If the
message parameter is undefined, a default error message is assigned.
If the message parameter is an instance of an Error then it will be
thrown instead of the AssertionError.
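A minimal sketch of the message parameter behavior described above (each call throws):
const assert = require('node:assert');
assert.deepEqual(1, 2, 'values differ');
// AssertionError: values differ
assert.deepEqual(1, 2, new RangeError('values differ'));
// RangeError: values differ (thrown instead of the AssertionError)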
assert.deepStrictEqual(actual, expected[, message])
actual {any}
expected {any}
message {string|Error}
Tests for deep equality between the actual and expected parameters.
“Deep” equality means that the enumerable “own” properties of child
objects are recursively evaluated also by the following rules.
Comparison details
Primitive values are compared using Object.is().
Type tags of objects should be the same.
[[Prototype]] of objects are compared using the === operator.
Only enumerable “own” properties are considered.
Error names and messages are always compared, even if these are
not enumerable properties.
Enumerable own Symbol properties are compared as well.
Object wrappers are compared both as objects and unwrapped
values.
Object properties are compared unordered.
Map keys and Set items are compared unordered.
Recursion stops when both sides differ or both sides encounter a
circular reference.
WeakMap and WeakSet comparison does not rely on their values. See
below for further details.
RegExp lastIndex, flags, and source are always compared, even if
these are not enumerable properties.
import assert from 'node:assert/strict';
// This fails because 1 !== '1'.
assert.deepStrictEqual({ a: 1 }, { a: '1' });
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// {
// + a: 1
// - a: '1'
// }
// The following objects don't have own properties
const date = new Date();
const object = {};
const fakeDate = {};
Object.setPrototypeOf(fakeDate, Date.prototype);
// Different [[Prototype]]:
assert.deepStrictEqual(object, fakeDate);
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// + {}
// - Date {}
// Different type tags:
assert.deepStrictEqual(date, fakeDate);
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// + 2018-04-26T00:49:08.604Z
// - Date {}
assert.deepStrictEqual(NaN, NaN);
// OK because Object.is(NaN, NaN) is true.
// Different unwrapped numbers:
assert.deepStrictEqual(new Number(1), new Number(2));
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// + [Number: 1]
// - [Number: 2]
assert.deepStrictEqual(new String('foo'), Object('foo'));
// OK because the object and the string are identical when unwrapped
assert.deepStrictEqual(-0, -0);
// OK
// Different zeros:
assert.deepStrictEqual(0, -0);
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// + 0
// - -0
const symbol1 = Symbol();
const symbol2 = Symbol();
assert.deepStrictEqual({ [symbol1]: 1 }, { [symbol1]: 1 });
// OK, because it is the same symbol on both objects.
assert.deepStrictEqual({ [symbol1]: 1 }, { [symbol2]: 1 });
// AssertionError [ERR_ASSERTION]: Inputs identical but not reference equal:
//
// {
// [Symbol()]: 1
// }
const weakMap1 = new WeakMap();
const weakMap2 = new WeakMap([[{}, {}]]);
const weakMap3 = new WeakMap();
weakMap3.unequal = true;
assert.deepStrictEqual(weakMap1, weakMap2);
// OK, because it is impossible to compare the entries
// Fails because weakMap3 has a property that weakMap1 does not contain:
assert.deepStrictEqual(weakMap1, weakMap3);
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// WeakMap {
// + [items unknown]
// - [items unknown],
// - unequal: true
// }
const assert = require('node:assert/strict');
// This fails because 1 !== '1'.
assert.deepStrictEqual({ a: 1 }, { a: '1' });
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// {
// + a: 1
// - a: '1'
// }
// The following objects don't have own properties
const date = new Date();
const object = {};
const fakeDate = {};
Object.setPrototypeOf(fakeDate, Date.prototype);
// Different [[Prototype]]:
assert.deepStrictEqual(object, fakeDate);
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// + {}
// - Date {}
// Different type tags:
assert.deepStrictEqual(date, fakeDate);
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// + 2018-04-26T00:49:08.604Z
// - Date {}
assert.deepStrictEqual(NaN, NaN);
// OK because Object.is(NaN, NaN) is true.
// Different unwrapped numbers:
assert.deepStrictEqual(new Number(1), new Number(2));
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// + [Number: 1]
// - [Number: 2]
assert.deepStrictEqual(new String('foo'), Object('foo'));
// OK because the object and the string are identical when unwrapped
assert.deepStrictEqual(-0, -0);
// OK
// Different zeros:
assert.deepStrictEqual(0, -0);
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// + 0
// - -0
const symbol1 = Symbol();
const symbol2 = Symbol();
assert.deepStrictEqual({ [symbol1]: 1 }, { [symbol1]: 1 });
// OK, because it is the same symbol on both objects.
assert.deepStrictEqual({ [symbol1]: 1 }, { [symbol2]: 1 });
// AssertionError [ERR_ASSERTION]: Inputs identical but not reference equal:
//
// {
// [Symbol()]: 1
// }
const weakMap1 = new WeakMap();
const weakMap2 = new WeakMap([[{}, {}]]);
const weakMap3 = new WeakMap();
weakMap3.unequal = true;
assert.deepStrictEqual(weakMap1, weakMap2);
// OK, because it is impossible to compare the entries
// Fails because weakMap3 has a property that weakMap1 does not contain:
assert.deepStrictEqual(weakMap1, weakMap3);
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// WeakMap {
// + [items unknown]
// - [items unknown],
// - unequal: true
// }
If the values are not equal, an AssertionError is thrown with a message
property set equal to the value of the message parameter. If the
message parameter is undefined, a default error message is assigned.
If the message parameter is an instance of an Error then it will be
thrown instead of the AssertionError.
assert.doesNotMatch(string, regexp[, message])
string {string}
regexp {RegExp}
message {string|Error}
Expects the string input not to match the regular expression.
import assert from 'node:assert/strict';
assert.doesNotMatch('I will fail', /fail/);
// AssertionError [ERR_ASSERTION]: The input was expected to not match the ...
assert.doesNotMatch(123, /pass/);
// AssertionError [ERR_ASSERTION]: The "string" argument must be of type string.
assert.doesNotMatch('I will pass', /different/);
// OK
const assert = require('node:assert/strict');
assert.doesNotMatch('I will fail', /fail/);
// AssertionError [ERR_ASSERTION]: The input was expected to not match the ...
assert.doesNotMatch(123, /pass/);
// AssertionError [ERR_ASSERTION]: The "string" argument must be of type string.
assert.doesNotMatch('I will pass', /different/);
// OK
If the values do match, or if the string argument is of another type
than string, an AssertionError is thrown with a message property set
equal to the value of the message parameter. If the message parameter
is undefined, a default error message is assigned. If the message
parameter is an instance of an Error then it will be thrown instead of
the AssertionError.
assert.doesNotReject(asyncFn[, error][, message])
asyncFn {Function|Promise}
error {RegExp|Function}
message {string}
Awaits the asyncFn promise or, if asyncFn is a function, immediately
calls the function and awaits the returned promise to complete. It
will then check that the promise is not rejected.
If asyncFn is a function and it throws an error synchronously,
assert.doesNotReject() will return a rejected Promise with that error.
If the function does not return a promise, assert.doesNotReject() will
return a rejected Promise with an ERR_INVALID_RETURN_VALUE error. In
both cases the error handler is skipped.
Using assert.doesNotReject() is actually not useful because there is
little benefit in catching a rejection and then rejecting it again.
Instead, consider adding a comment next to the specific code path
that should not reject and keep error messages as expressive as
possible.
If specified, error can be a Class, RegExp, or a validation function. See
assert.throws() for more details.
Apart from the async nature of awaiting the completion, this behaves
identically to assert.doesNotThrow().
import assert from 'node:assert/strict';
await assert.doesNotReject(
async () => {
throw new TypeError('Wrong value');
},
SyntaxError,
);
const assert = require('node:assert/strict');
(async () => {
await assert.doesNotReject(
async () => {
throw new TypeError('Wrong value');
},
SyntaxError,
);
})();
import assert from 'node:assert/strict';
assert.doesNotReject(Promise.reject(new TypeError('Wrong value')))
.then(() => {
// ...
});
const assert = require('node:assert/strict');
assert.doesNotReject(Promise.reject(new TypeError('Wrong value')))
.then(() => {
// ...
});
assert.doesNotThrow(fn[, error][, message])
fn {Function}
error {RegExp|Function}
message {string}
Asserts that the function fn does not throw an error.
Using assert.doesNotThrow() is actually not useful because there is no
benefit in catching an error and then rethrowing it. Instead, consider
adding a comment next to the specific code path that should not
throw and keep error messages as expressive as possible.
When assert.doesNotThrow() is called, it will immediately call the fn
function.
If an error is thrown and it is the same type as that specified by the
error parameter, then an AssertionError is thrown. If the error is of a
different type, or if the error parameter is undefined, the error is
propagated back to the caller.
If specified, error can be a Class, RegExp, or a validation function. See
assert.throws() for more details.
The following, for instance, will throw the TypeError because there is
no matching error type in the assertion:
import assert from 'node:assert/strict';
assert.doesNotThrow(
() => {
throw new TypeError('Wrong value');
},
SyntaxError,
);
const assert = require('node:assert/strict');
assert.doesNotThrow(
() => {
throw new TypeError('Wrong value');
},
SyntaxError,
);
However, the following will result in an AssertionError with the
message ‘Got unwanted exception…’:
import assert from 'node:assert/strict';
assert.doesNotThrow(
() => {
throw new TypeError('Wrong value');
},
TypeError,
);
const assert = require('node:assert/strict');
assert.doesNotThrow(
() => {
throw new TypeError('Wrong value');
},
TypeError,
);
If an AssertionError is thrown and a value is provided for the message
parameter, the value of message will be appended to the
AssertionError message:
import assert from 'node:assert/strict';
assert.doesNotThrow(
() => {
throw new TypeError('Wrong value');
},
/Wrong value/,
'Whoops',
);
// Throws: AssertionError: Got unwanted exception: Whoops
const assert = require('node:assert/strict');
assert.doesNotThrow(
() => {
throw new TypeError('Wrong value');
},
/Wrong value/,
'Whoops',
);
// Throws: AssertionError: Got unwanted exception: Whoops
assert.equal(actual, expected[, message])
actual {any}
expected {any}
message {string|Error}
Strict assertion mode
An alias of assert.strictEqual().
Legacy assertion mode
Stability: 3 - Legacy: Use assert.strictEqual() instead.
Tests shallow, coercive equality between the actual and expected
parameters using the == operator. NaN is specially handled and treated
as being identical if both sides are NaN.
import assert from 'node:assert';
assert.equal(1, 1);
// OK, 1 == 1
assert.equal(1, '1');
// OK, 1 == '1'
assert.equal(NaN, NaN);
// OK
assert.equal(1, 2);
// AssertionError: 1 == 2
assert.equal({ a: { b: 1 } }, { a: { b: 1 } });
// AssertionError: { a: { b: 1 } } == { a: { b: 1 } }
const assert = require('node:assert');
assert.equal(1, 1);
// OK, 1 == 1
assert.equal(1, '1');
// OK, 1 == '1'
assert.equal(NaN, NaN);
// OK
assert.equal(1, 2);
// AssertionError: 1 == 2
assert.equal({ a: { b: 1 } }, { a: { b: 1 } });
// AssertionError: { a: { b: 1 } } == { a: { b: 1 } }
If the values are not equal, an AssertionError is thrown with a message
property set equal to the value of the message parameter. If the
message parameter is undefined, a default error message is assigned.
If the message parameter is an instance of an Error then it will be
thrown instead of the AssertionError.
assert.fail([message])
message {string|Error} Default: 'Failed'
Throws an AssertionError with the provided error message or a
default error message. If the message parameter is an instance of an
Error then it will be thrown instead of the AssertionError.
import assert from 'node:assert/strict';
assert.fail();
// AssertionError [ERR_ASSERTION]: Failed
assert.fail('boom');
// AssertionError [ERR_ASSERTION]: boom
assert.fail(new TypeError('need array'));
// TypeError: need array
const assert = require('node:assert/strict');
assert.fail();
// AssertionError [ERR_ASSERTION]: Failed
assert.fail('boom');
// AssertionError [ERR_ASSERTION]: boom
assert.fail(new TypeError('need array'));
// TypeError: need array
Using assert.fail() with more than two arguments is possible but
deprecated. See below for further details.
assert.fail(actual, expected[, message[, operator[, stackStartFn]]])
Stability: 0 - Deprecated: Use assert.fail([message]) or other
assert functions instead.
actual {any}
expected {any}
message {string|Error}
operator {string} Default: '!='
stackStartFn {Function} Default: assert.fail
If message is falsy, the error message is set as the values of actual and
expected separated by the provided operator. If just the two actual
and expected arguments are provided, operator will default to '!='. If
message is provided as third argument it will be used as the error
message and the other arguments will be stored as properties on the
thrown object. If stackStartFn is provided, all stack frames above that
function will be removed from stacktrace (see
Error.captureStackTrace). If no arguments are given, the default
message Failed will be used.
import assert from 'node:assert/strict';
assert.fail('a', 'b');
// AssertionError [ERR_ASSERTION]: 'a' != 'b'
assert.fail(1, 2, undefined, '>');
// AssertionError [ERR_ASSERTION]: 1 > 2
assert.fail(1, 2, 'fail');
// AssertionError [ERR_ASSERTION]: fail
assert.fail(1, 2, 'whoops', '>');
// AssertionError [ERR_ASSERTION]: whoops
assert.fail(1, 2, new TypeError('need array'));
// TypeError: need array
const assert = require('node:assert/strict');
assert.fail('a', 'b');
// AssertionError [ERR_ASSERTION]: 'a' != 'b'
assert.fail(1, 2, undefined, '>');
// AssertionError [ERR_ASSERTION]: 1 > 2
assert.fail(1, 2, 'fail');
// AssertionError [ERR_ASSERTION]: fail
assert.fail(1, 2, 'whoops', '>');
// AssertionError [ERR_ASSERTION]: whoops
assert.fail(1, 2, new TypeError('need array'));
// TypeError: need array
In the last three cases actual, expected, and operator have no
influence on the error message.
Example use of stackStartFn for truncating the exception’s
stacktrace:
import assert from 'node:assert/strict';
function suppressFrame() {
assert.fail('a', 'b', undefined, '!==', suppressFrame);
}
suppressFrame();
// AssertionError [ERR_ASSERTION]: 'a' !== 'b'
// at repl:1:1
// at ContextifyScript.Script.runInThisContext (vm.js:44:33)
// ...
const assert = require('node:assert/strict');
function suppressFrame() {
assert.fail('a', 'b', undefined, '!==', suppressFrame);
}
suppressFrame();
// AssertionError [ERR_ASSERTION]: 'a' !== 'b'
// at repl:1:1
// at ContextifyScript.Script.runInThisContext (vm.js:44:33)
// ...
assert.ifError(value)
value {any}
Throws value if value is not undefined or null. This is useful when
testing the error argument in callbacks. The stack trace contains all
frames from the error passed to ifError() including the potential
new frames for ifError() itself.
import assert from 'node:assert/strict';
assert.ifError(null);
// OK
assert.ifError(0);
// AssertionError [ERR_ASSERTION]: ifError got unwanted exception: 0
assert.ifError('error');
// AssertionError [ERR_ASSERTION]: ifError got unwanted exception: 'error'
assert.ifError(new Error());
// AssertionError [ERR_ASSERTION]: ifError got unwanted exception: Error
// Create some random error frames.
let err;
(function errorFrame() {
err = new Error('test error');
})();
(function ifErrorFrame() {
assert.ifError(err);
})();
// AssertionError [ERR_ASSERTION]: ifError got unwanted exception: test error
// at ifErrorFrame
// at errorFrame
const assert = require('node:assert/strict');
assert.ifError(null);
// OK
assert.ifError(0);
// AssertionError [ERR_ASSERTION]: ifError got unwanted exception: 0
assert.ifError('error');
// AssertionError [ERR_ASSERTION]: ifError got unwanted exception: 'error'
assert.ifError(new Error());
// AssertionError [ERR_ASSERTION]: ifError got unwanted exception: Error
// Create some random error frames.
let err;
(function errorFrame() {
err = new Error('test error');
})();
(function ifErrorFrame() {
assert.ifError(err);
})();
// AssertionError [ERR_ASSERTION]: ifError got unwanted exception: test error
// at ifErrorFrame
// at errorFrame
assert.match(string, regexp[, message])
string {string}
regexp {RegExp}
message {string|Error}
Expects the string input to match the regular expression.
import assert from 'node:assert/strict';
assert.match('I will fail', /pass/);
// AssertionError [ERR_ASSERTION]: The input did not match the regular ...
assert.match(123, /pass/);
// AssertionError [ERR_ASSERTION]: The "string" argument must be of type string.
assert.match('I will pass', /pass/);
// OK
const assert = require('node:assert/strict');
assert.match('I will fail', /pass/);
// AssertionError [ERR_ASSERTION]: The input did not match the regular ...
assert.match(123, /pass/);
// AssertionError [ERR_ASSERTION]: The "string" argument must be of type string.
assert.match('I will pass', /pass/);
// OK
If the values do not match, or if the string argument is of another
type than string, an AssertionError is thrown with a message property
set equal to the value of the message parameter. If the message
parameter is undefined, a default error message is assigned. If the
message parameter is an instance of an Error then it will be thrown
instead of the AssertionError.
assert.notDeepEqual(actual, expected[, message])
actual {any}
expected {any}
message {string|Error}
Strict assertion mode
An alias of assert.notDeepStrictEqual().
Legacy assertion mode
Stability: 3 - Legacy: Use assert.notDeepStrictEqual() instead.
Tests for any deep inequality. Opposite of assert.deepEqual().
import assert from 'node:assert';
const obj1 = {
a: {
b: 1,
},
};
const obj2 = {
a: {
b: 2,
},
};
const obj3 = {
a: {
b: 1,
},
};
const obj4 = { __proto__: obj1 };
assert.notDeepEqual(obj1, obj1);
// AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } }
assert.notDeepEqual(obj1, obj2);
// OK
assert.notDeepEqual(obj1, obj3);
// AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } }
assert.notDeepEqual(obj1, obj4);
// OK
const assert = require('node:assert');
const obj1 = {
a: {
b: 1,
},
};
const obj2 = {
a: {
b: 2,
},
};
const obj3 = {
a: {
b: 1,
},
};
const obj4 = { __proto__: obj1 };
assert.notDeepEqual(obj1, obj1);
// AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } }
assert.notDeepEqual(obj1, obj2);
// OK
assert.notDeepEqual(obj1, obj3);
// AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } }
assert.notDeepEqual(obj1, obj4);
// OK
If the values are deeply equal, an AssertionError is thrown with a
message property set equal to the value of the message parameter. If
the message parameter is undefined, a default error message is
assigned. If the message parameter is an instance of an Error then it
will be thrown instead of the AssertionError.
assert.notDeepStrictEqual(actual, expected[, message])
actual {any}
expected {any}
message {string|Error}
Tests for deep strict inequality. Opposite of assert.deepStrictEqual().
import assert from 'node:assert/strict';
assert.notDeepStrictEqual({ a: 1 }, { a: '1' });
// OK
const assert = require('node:assert/strict');
assert.notDeepStrictEqual({ a: 1 }, { a: '1' });
// OK
If the values are deeply and strictly equal, an AssertionError is
thrown with a message property set equal to the value of the message
parameter. If the message parameter is undefined, a default error
message is assigned. If the message parameter is an instance of an
Error then it will be thrown instead of the AssertionError.
assert.notEqual(actual, expected[, message])
actual {any}
expected {any}
message {string|Error}
Strict assertion mode
An alias of assert.notStrictEqual().
Legacy assertion mode
Stability: 3 - Legacy: Use assert.notStrictEqual() instead.
Tests shallow, coercive inequality with the != operator. NaN is
specially handled and treated as being identical if both sides are NaN.
import assert from 'node:assert';
assert.notEqual(1, 2);
// OK
assert.notEqual(1, 1);
// AssertionError: 1 != 1
assert.notEqual(1, '1');
// AssertionError: 1 != '1'
const assert = require('node:assert');
assert.notEqual(1, 2);
// OK
assert.notEqual(1, 1);
// AssertionError: 1 != 1
assert.notEqual(1, '1');
// AssertionError: 1 != '1'
If the values are equal, an AssertionError is thrown with a message
property set equal to the value of the message parameter. If the
message parameter is undefined, a default error message is assigned.
If the message parameter is an instance of an Error then it will be
thrown instead of the AssertionError.
assert.notStrictEqual(actual, expected[, message])
actual {any}
expected {any}
message {string|Error}
Tests strict inequality between the actual and expected parameters as
determined by Object.is().
import assert from 'node:assert/strict';
assert.notStrictEqual(1, 2);
// OK
assert.notStrictEqual(1, 1);
// AssertionError [ERR_ASSERTION]: Expected "actual" to be strictly unequal to:
//
// 1
assert.notStrictEqual(1, '1');
// OK
const assert = require('node:assert/strict');
assert.notStrictEqual(1, 2);
// OK
assert.notStrictEqual(1, 1);
// AssertionError [ERR_ASSERTION]: Expected "actual" to be strictly unequal to:
//
// 1
//
assert.notStrictEqual(1, '1');
// OK
If the values are strictly equal, an AssertionError is thrown with a
message property set equal to the value of the message parameter. If
the message parameter is undefined, a default error message is
assigned. If the message parameter is an instance of an Error then it
will be thrown instead of the AssertionError.
assert.ok(value[, message])
value {any}
message {string|Error}
Tests if value is truthy. It is equivalent to assert.equal(!!value, true,
message).
If value is not truthy, an AssertionError is thrown with a message
property set equal to the value of the message parameter. If the
message parameter is undefined, a default error message is assigned. If
the message parameter is an instance of an Error then it will be thrown
instead of the AssertionError. If no arguments are passed in at all
message will be set to the string: 'No value argument passed to
`assert.ok()`'.
Be aware that in the repl the error message will be different to the
one thrown in a file! See below for further details.
import assert from 'node:assert/strict';
assert.ok(true);
// OK
assert.ok(1);
// OK
assert.ok();
// AssertionError: No value argument passed to `assert.ok()`
assert.ok(false, 'it\'s false');
// AssertionError: it's false
// In the repl:
assert.ok(typeof 123 === 'string');
// AssertionError: false == true
// In a file (e.g. test.js):
assert.ok(typeof 123 === 'string');
// AssertionError: The expression evaluated to a falsy value:
//
// assert.ok(typeof 123 === 'string')
assert.ok(false);
// AssertionError: The expression evaluated to a falsy value:
//
// assert.ok(false)
assert.ok(0);
// AssertionError: The expression evaluated to a falsy value:
//
// assert.ok(0)
const assert = require('node:assert/strict');
assert.ok(true);
// OK
assert.ok(1);
// OK
assert.ok();
// AssertionError: No value argument passed to `assert.ok()`
assert.ok(false, 'it\'s false');
// AssertionError: it's false
// In the repl:
assert.ok(typeof 123 === 'string');
// AssertionError: false == true
// In a file (e.g. test.js):
assert.ok(typeof 123 === 'string');
// AssertionError: The expression evaluated to a falsy value:
//
// assert.ok(typeof 123 === 'string')
assert.ok(false);
// AssertionError: The expression evaluated to a falsy value:
//
// assert.ok(false)
assert.ok(0);
// AssertionError: The expression evaluated to a falsy value:
//
// assert.ok(0)
import assert from 'node:assert/strict';
// Using `assert()` works the same:
assert(0);
// AssertionError: The expression evaluated to a falsy value:
//
// assert(0)
const assert = require('node:assert');
// Using `assert()` works the same:
assert(0);
// AssertionError: The expression evaluated to a falsy value:
//
// assert(0)
assert.rejects(asyncFn[, error][, message])
asyncFn {Function|Promise}
error {RegExp|Function|Object|Error}
message {string}
Awaits the asyncFn promise or, if asyncFn is a function, immediately
calls the function and awaits the returned promise to complete. It
will then check that the promise is rejected.
If asyncFn is a function and it throws an error synchronously,
assert.rejects() will return a rejected Promise with that error. If the
function does not return a promise, assert.rejects() will return a
rejected Promise with an ERR_INVALID_RETURN_VALUE error. In both cases
the error handler is skipped.
Apart from the async nature of awaiting the completion, this behaves
identically to assert.throws().
If specified, error can be a Class, RegExp, a validation function, an
object where each property will be tested for, or an instance of error
where each property will be tested for including the non-enumerable
message and name properties.
If specified, message will be the message provided by the
AssertionError if the asyncFn fails to reject.
import assert from 'node:assert/strict';
await assert.rejects(
async () => {
throw new TypeError('Wrong value');
},
{
name: 'TypeError',
message: 'Wrong value',
},
);
const assert = require('node:assert/strict');
(async () => {
await assert.rejects(
async () => {
throw new TypeError('Wrong value');
},
{
name: 'TypeError',
message: 'Wrong value',
},
);
})();
import assert from 'node:assert/strict';
await assert.rejects(
async () => {
throw new TypeError('Wrong value');
},
(err) => {
assert.strictEqual(err.name, 'TypeError');
assert.strictEqual(err.message, 'Wrong value');
return true;
},
);
const assert = require('node:assert/strict');
(async () => {
await assert.rejects(
async () => {
throw new TypeError('Wrong value');
},
(err) => {
assert.strictEqual(err.name, 'TypeError');
assert.strictEqual(err.message, 'Wrong value');
return true;
},
);
})();
import assert from 'node:assert/strict';
assert.rejects(
Promise.reject(new Error('Wrong value')),
Error,
).then(() => {
// ...
});
const assert = require('node:assert/strict');
assert.rejects(
Promise.reject(new Error('Wrong value')),
Error,
).then(() => {
// ...
});
error cannot be a string. If a string is provided as the second
argument, then error is assumed to be omitted and the string will be
used for message instead. This can lead to easy-to-miss mistakes.
Please read the example in assert.throws() carefully if you are
considering using a string as the second argument.
assert.strictEqual(actual, expected[, message])
actual {any}
expected {any}
message {string|Error}
Tests strict equality between the actual and expected parameters as
determined by Object.is().
import assert from 'node:assert/strict';
assert.strictEqual(1, 2);
// AssertionError [ERR_ASSERTION]: Expected inputs to be strictly equal:
//
// 1 !== 2
assert.strictEqual(1, 1);
// OK
assert.strictEqual('Hello foobar', 'Hello World!');
// AssertionError [ERR_ASSERTION]: Expected inputs to be strictly equal:
// + actual - expected
//
// + 'Hello foobar'
// - 'Hello World!'
// ^
const apples = 1;
const oranges = 2;
assert.strictEqual(apples, oranges, `apples ${apples} !== oranges ${oranges}`);
// AssertionError [ERR_ASSERTION]: apples 1 !== oranges 2
assert.strictEqual(1, '1', new TypeError('Inputs are not identical'));
// TypeError: Inputs are not identical
const assert = require('node:assert/strict');
assert.strictEqual(1, 2);
// AssertionError [ERR_ASSERTION]: Expected inputs to be strictly equal:
//
// 1 !== 2
assert.strictEqual(1, 1);
// OK
assert.strictEqual('Hello foobar', 'Hello World!');
// AssertionError [ERR_ASSERTION]: Expected inputs to be strictly equal:
// + actual - expected
//
// + 'Hello foobar'
// - 'Hello World!'
// ^
const apples = 1;
const oranges = 2;
assert.strictEqual(apples, oranges, `apples ${apples} !== oranges ${oranges}`);
// AssertionError [ERR_ASSERTION]: apples 1 !== oranges 2
assert.strictEqual(1, '1', new TypeError('Inputs are not identical'));
// TypeError: Inputs are not identical
If the values are not strictly equal, an AssertionError is thrown with a
message property set equal to the value of the message parameter. If
the message parameter is undefined, a default error message is
assigned. If the message parameter is an instance of an Error then it
will be thrown instead of the AssertionError.
assert.throws(fn[, error][, message])
fn {Function}
error {RegExp|Function|Object|Error}
message {string}
Expects the function fn to throw an error.
If specified, error can be a Class, RegExp, a validation function, a
validation object where each property will be tested for strict deep
equality, or an instance of error where each property will be tested
for strict deep equality including the non-enumerable message and
name properties. When using an object, it is also possible to use a
regular expression, when validating against a string property. See
below for examples.
If specified, message will be appended to the message provided by the
AssertionError if the fn call fails to throw or in case the error
validation fails.
Custom validation object/error instance:
import assert from 'node:assert/strict';
const err = new TypeError('Wrong value');
err.code = 404;
err.foo = 'bar';
err.info = {
nested: true,
baz: 'text',
};
err.reg = /abc/i;
assert.throws(
() => {
throw err;
},
{
name: 'TypeError',
message: 'Wrong value',
info: {
nested: true,
baz: 'text',
},
// Only properties on the validation object will be tested for.
// Using nested objects requires all properties to be present. Otherwise
// the validation is going to fail.
},
);
// Using regular expressions to validate error properties:
assert.throws(
() => {
throw err;
},
{
// The `name` and `message` properties are strings and using regular
// expressions on those will match against the string. If they fail, an
// error is thrown.
name: /^TypeError$/,
message: /Wrong/,
foo: 'bar',
info: {
nested: true,
// It is not possible to use regular expressions for nested properties!
baz: 'text',
},
// The `reg` property contains a regular expression and only if the
// validation object contains an identical regular expression, it is going
// to pass.
reg: /abc/i,
},
);
// Fails due to the different `message` and `name` properties:
assert.throws(
() => {
const otherErr = new Error('Not found');
// Copy all enumerable properties from `err` to `otherErr`.
for (const [key, value] of Object.entries(err)) {
otherErr[key] = value;
}
throw otherErr;
},
// The error's `message` and `name` properties will also be checked when using
// an error as validation object.
err,
);
const assert = require('node:assert/strict');
const err = new TypeError('Wrong value');
err.code = 404;
err.foo = 'bar';
err.info = {
nested: true,
baz: 'text',
};
err.reg = /abc/i;
assert.throws(
() => {
throw err;
},
{
name: 'TypeError',
message: 'Wrong value',
info: {
nested: true,
baz: 'text',
},
// Only properties on the validation object will be tested for.
// Using nested objects requires all properties to be present. Otherwise
// the validation is going to fail.
},
);
// Using regular expressions to validate error properties:
assert.throws(
() => {
throw err;
},
{
// The `name` and `message` properties are strings and using regular
// expressions on those will match against the string. If they fail, an
// error is thrown.
name: /^TypeError$/,
message: /Wrong/,
foo: 'bar',
info: {
nested: true,
// It is not possible to use regular expressions for nested properties!
baz: 'text',
},
// The `reg` property contains a regular expression and only if the
// validation object contains an identical regular expression, it is going
// to pass.
reg: /abc/i,
},
);
// Fails due to the different `message` and `name` properties:
assert.throws(
() => {
const otherErr = new Error('Not found');
// Copy all enumerable properties from `err` to `otherErr`.
for (const [key, value] of Object.entries(err)) {
otherErr[key] = value;
}
throw otherErr;
},
// The error's `message` and `name` properties will also be checked when using
// an error as validation object.
err,
);
Validate instanceof using constructor:
import assert from 'node:assert/strict';
assert.throws(
() => {
throw new Error('Wrong value');
},
Error,
);
const assert = require('node:assert/strict');
assert.throws(
() => {
throw new Error('Wrong value');
},
Error,
);
Validate error message using RegExp:
Using a regular expression runs .toString on the error object, and
will therefore also include the error name.
import assert from 'node:assert/strict';
assert.throws(
() => {
throw new Error('Wrong value');
},
/^Error: Wrong value$/,
);
const assert = require('node:assert/strict');
assert.throws(
() => {
throw new Error('Wrong value');
},
/^Error: Wrong value$/,
);
Custom error validation:
The function must return true to indicate all internal validations
passed. It will otherwise fail with an AssertionError.
import assert from 'node:assert/strict';
assert.throws(
() => {
throw new Error('Wrong value');
},
(err) => {
assert(err instanceof Error);
assert(/value/.test(err));
// Avoid returning anything from validation functions besides `true`.
// Otherwise, it's not clear what part of the validation failed. Instead,
// throw an error about the specific validation that failed (as done in this
// example) and add as much helpful debugging information to that error as
// possible.
return true;
},
'unexpected error',
);
const assert = require('node:assert/strict');
assert.throws(
() => {
throw new Error('Wrong value');
},
(err) => {
assert(err instanceof Error);
assert(/value/.test(err));
// Avoid returning anything from validation functions besides `true`.
// Otherwise, it's not clear what part of the validation failed. Instead,
// throw an error about the specific validation that failed (as done in this
// example) and add as much helpful debugging information to that error as
// possible.
return true;
},
'unexpected error',
);
error cannot be a string. If a string is provided as the second
argument, then error is assumed to be omitted and the string will be
used for message instead. This can lead to easy-to-miss mistakes.
Using the same message as the thrown error message is going to
result in an ERR_AMBIGUOUS_ARGUMENT error. Please read the example
below carefully if you are considering using a string as the second
argument:
import assert from 'node:assert/strict';
function throwingFirst() {
throw new Error('First');
}
function throwingSecond() {
throw new Error('Second');
}
function notThrowing() {}
// The second argument is a string and the input function threw an Error.
// The first case will not throw as it does not match for the error message
// thrown by the input function!
assert.throws(throwingFirst, 'Second');
// In the next example the message has no benefit over the message from the
// error and since it is not clear if the user intended to actually match
// against the error message, Node.js throws an `ERR_AMBIGUOUS_ARGUMENT` error.
assert.throws(throwingSecond, 'Second');
// TypeError [ERR_AMBIGUOUS_ARGUMENT]
// The string is only used (as message) in case the function does not throw:
assert.throws(notThrowing, 'Second');
// AssertionError [ERR_ASSERTION]: Missing expected exception: Second
// If it was intended to match for the error message do this instead:
// It does not throw because the error messages match.
assert.throws(throwingSecond, /Second$/);
// If the error message does not match, an AssertionError is thrown.
assert.throws(throwingFirst, /Second$/);
// AssertionError [ERR_ASSERTION]
const assert = require('node:assert/strict');
function throwingFirst() {
throw new Error('First');
}
function throwingSecond() {
throw new Error('Second');
}
function notThrowing() {}
// The second argument is a string and the input function threw an Error.
// The first case will not throw as it does not match for the error message
// thrown by the input function!
assert.throws(throwingFirst, 'Second');
// In the next example the message has no benefit over the message from the
// error and since it is not clear if the user intended to actually match
// against the error message, Node.js throws an `ERR_AMBIGUOUS_ARGUMENT` error.
assert.throws(throwingSecond, 'Second');
// TypeError [ERR_AMBIGUOUS_ARGUMENT]
// The string is only used (as message) in case the function does not throw:
assert.throws(notThrowing, 'Second');
// AssertionError [ERR_ASSERTION]: Missing expected exception: Second
// If it was intended to match for the error message do this instead:
// It does not throw because the error messages match.
assert.throws(throwingSecond, /Second$/);
// If the error message does not match, an AssertionError is thrown.
assert.throws(throwingFirst, /Second$/);
// AssertionError [ERR_ASSERTION]
Due to the confusing, error-prone notation, avoid a string as the
second argument.
Buffer
Stability: 2 - Stable
Buffer objects are used to represent a fixed-length sequence of bytes.
Many Node.js APIs support Buffers.
The Buffer class is a subclass of JavaScript’s Uint8Array class and
extends it with methods that cover additional use cases. Node.js APIs
accept plain Uint8Arrays wherever Buffers are supported as well.
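As a brief sketch of this interchangeability, a plain Uint8Array can be passed to an API documented as accepting a Buffer:
const { writeFileSync } = require('node:fs');
// The fs APIs accept any TypedArray wherever a Buffer is accepted.
writeFileSync('bytes.bin', new Uint8Array([0x68, 0x69])); // writes 'hi'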
While the Buffer class is available within the global scope, it is still
recommended to explicitly reference it via an import or require
statement.
import { Buffer } from 'node:buffer';
// Creates a zero-filled Buffer of length 10.
const buf1 = Buffer.alloc(10);
// Creates a Buffer of length 10,
// filled with bytes which all have the value `1`.
const buf2 = Buffer.alloc(10, 1);
// Creates an uninitialized buffer of length 10.
// This is faster than calling Buffer.alloc() but the returned
// Buffer instance might contain old data that needs to be
// overwritten using fill(), write(), or other functions that fill the Buffer's
// contents.
const buf3 = Buffer.allocUnsafe(10);
// Creates a Buffer containing the bytes [1, 2, 3].
const buf4 = Buffer.from([1, 2, 3]);
// Creates a Buffer containing the bytes [1, 1, 1, 1] – the entries
// are all truncated using `(value & 255)` to fit into the range 0–255.
const buf5 = Buffer.from([257, 257.5, -255, '1']);
// Creates a Buffer containing the UTF-8-encoded bytes for the string 'tést':
// [0x74, 0xc3, 0xa9, 0x73, 0x74] (in hexadecimal notation)
// [116, 195, 169, 115, 116] (in decimal notation)
const buf6 = Buffer.from('tést');
// Creates a Buffer containing the Latin-1 bytes [0x74, 0xe9, 0x73, 0x74].
const buf7 = Buffer.from('tést', 'latin1');
const { Buffer } = require('node:buffer');
// Creates a zero-filled Buffer of length 10.
const buf1 = Buffer.alloc(10);
// Creates a Buffer of length 10,
// filled with bytes which all have the value `1`.
const buf2 = Buffer.alloc(10, 1);
// Creates an uninitialized buffer of length 10.
// This is faster than calling Buffer.alloc() but the returned
// Buffer instance might contain old data that needs to be
// overwritten using fill(), write(), or other functions that fill the Buffer's
// contents.
const buf3 = Buffer.allocUnsafe(10);
// Creates a Buffer containing the bytes [1, 2, 3].
const buf4 = Buffer.from([1, 2, 3]);
// Creates a Buffer containing the bytes [1, 1, 1, 1] – the entries
// are all truncated using `(value & 255)` to fit into the range 0–255.
const buf5 = Buffer.from([257, 257.5, -255, '1']);
// Creates a Buffer containing the UTF-8-encoded bytes for the string 'tést':
// [0x74, 0xc3, 0xa9, 0x73, 0x74] (in hexadecimal notation)
// [116, 195, 169, 115, 116] (in decimal notation)
const buf6 = Buffer.from('tést');
// Creates a Buffer containing the Latin-1 bytes [0x74, 0xe9, 0x73, 0x74].
const buf7 = Buffer.from('tést', 'latin1');
Buffers and character encodings
When converting between Buffers and strings, a character encoding
may be specified. If no character encoding is specified, UTF-8 will be
used as the default.
import { Buffer } from 'node:buffer';
const buf = Buffer.from('hello world', 'utf8');
console.log(buf.toString('hex'));
// Prints: 68656c6c6f20776f726c64
console.log(buf.toString('base64'));
// Prints: aGVsbG8gd29ybGQ=
console.log(Buffer.from('fhqwhgads', 'utf8'));
// Prints: <Buffer 66 68 71 77 68 67 61 64 73>
console.log(Buffer.from('fhqwhgads', 'utf16le'));
// Prints: <Buffer 66 00 68 00 71 00 77 00 68 00 67 00 61 00 64 00 73 00>
const { Buffer } = require('node:buffer');
const buf = Buffer.from('hello world', 'utf8');
console.log(buf.toString('hex'));
// Prints: 68656c6c6f20776f726c64
console.log(buf.toString('base64'));
// Prints: aGVsbG8gd29ybGQ=
console.log(Buffer.from('fhqwhgads', 'utf8'));
// Prints: <Buffer 66 68 71 77 68 67 61 64 73>
console.log(Buffer.from('fhqwhgads', 'utf16le'));
// Prints: <Buffer 66 00 68 00 71 00 77 00 68 00 67 00 61 00 64 00 73 00>
Node.js buffers accept all case variations of encoding strings that
they receive. For example, UTF-8 can be specified as 'utf8', 'UTF8',
or 'uTf8'.
The character encodings currently supported by Node.js are the
following:
'utf8' (alias: 'utf-8'): Multi-byte encoded Unicode characters.
Many web pages and other document formats use UTF-8. This is
the default character encoding. When decoding a Buffer into a
string that does not exclusively contain valid UTF-8 data, the
Unicode replacement character U+FFFD � will be used to
represent those errors.
'utf16le' (alias: 'utf-16le'): Multi-byte encoded Unicode
characters. Unlike 'utf8', each character in the string will be
encoded using either 2 or 4 bytes. Node.js only supports the
little-endian variant of UTF-16.
'latin1': Latin-1 stands for ISO-8859-1. This character encoding
only supports the Unicode characters from U+0000 to U+00FF. Each
character is encoded using a single byte. Characters that do not
fit into that range are truncated and will be mapped to characters
in that range.
Converting a Buffer into a string using one of the above is referred to
as decoding, and converting a string into a Buffer is referred to as
encoding.
Node.js also supports the following binary-to-text encodings. For
binary-to-text encodings, the naming convention is reversed:
Converting a Buffer into a string is typically referred to as encoding,
and converting a string into a Buffer as decoding.
'base64': Base64 encoding. When creating a Buffer from a string,
this encoding will also correctly accept “URL and Filename Safe
Alphabet” as specified in RFC 4648, Section 5. Whitespace
characters such as spaces, tabs, and new lines contained within
the base64-encoded string are ignored.
'base64url': base64url encoding as specified in RFC 4648,
Section 5. When creating a Buffer from a string, this encoding
will also correctly accept regular base64-encoded strings. When
encoding a Buffer to a string, this encoding will omit padding, as
shown in the sketch following the hex example below.
'hex': Encode each byte as two hexadecimal characters. Data
truncation may occur when decoding strings that do not
exclusively consist of an even number of hexadecimal characters.
See below for an example.
The following legacy character encodings are also supported:
'ascii': For 7-bit ASCII data only. When encoding a string into a
Buffer, this is equivalent to using 'latin1'. When decoding a
Buffer into a string, using this encoding will additionally unset
the highest bit of each byte before decoding as 'latin1'.
Generally, there should be no reason to use this encoding, as
'utf8' (or, if the data is known to always be ASCII-only, 'latin1')
will be a better choice when encoding or decoding ASCII-only
text. It is only provided for legacy compatibility.
'binary': Alias for 'latin1'. The name of this encoding can be
very misleading, as all of the encodings listed here convert
between strings and binary data. For converting between strings
and Buffers, typically 'utf8' is the right choice.
'ucs2', 'ucs-2': Aliases of 'utf16le'. UCS-2 used to refer to a
variant of UTF-16 that did not support characters that had code
points larger than U+FFFF. In Node.js, these code points are
always supported.
import { Buffer } from 'node:buffer';
Buffer.from('1ag123', 'hex');
// Prints <Buffer 1a>, data truncated when first non-hexadecimal value
// ('g') encountered.
Buffer.from('1a7', 'hex');
// Prints <Buffer 1a>, data truncated when data ends in single digit ('7').
Buffer.from('1634', 'hex');
// Prints <Buffer 16 34>, all data represented.
const { Buffer } = require('node:buffer');
Buffer.from('1ag123', 'hex');
// Prints <Buffer 1a>, data truncated when first non-hexadecimal value
// ('g') encountered.
Buffer.from('1a7', 'hex');
// Prints <Buffer 1a>, data truncated when data ends in single digit ('7').
Buffer.from('1634', 'hex');
// Prints <Buffer 16 34>, all data represented.
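A short sketch of the 'base64' and 'base64url' padding behavior
described above:
import { Buffer } from 'node:buffer';
console.log(Buffer.from('hello').toString('base64'));
// Prints: aGVsbG8= (standard base64 keeps the '=' padding)
console.log(Buffer.from('hello').toString('base64url'));
// Prints: aGVsbG8 (base64url omits the padding)
console.log(Buffer.from('aGVsbG8=', 'base64url').toString());
// Prints: hello ('base64url' also accepts regular base64 input)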
Modern Web browsers follow the WHATWG Encoding Standard
which aliases both 'latin1' and 'ISO-8859-1' to 'win-1252'. This
means that while doing something like http.get(), if the returned
charset is one of those listed in the WHATWG specification it is
possible that the server actually returned 'win-1252'-encoded data,
and using 'latin1' encoding may incorrectly decode the characters.
Buffers and TypedArrays
Buffer instances are also JavaScript Uint8Array and TypedArray
instances. All TypedArray methods are available on Buffers. There are,
however, subtle incompatibilities between the Buffer API and the
TypedArray API.
In particular:
While TypedArray.prototype.slice() creates a copy of part of the
TypedArray, Buffer.prototype.slice() creates a view over the
existing Buffer without copying. This behavior can be surprising,
and only exists for legacy compatibility.
TypedArray.prototype.subarray() can be used to achieve the
behavior of Buffer.prototype.slice() on both Buffers and other
TypedArrays and should be preferred.
buf.toString() is incompatible with its TypedArray equivalent.
A number of methods, e.g. buf.indexOf(), support additional
arguments.
There are two ways to create new TypedArray instances from a Buffer:
Passing a Buffer to a TypedArray constructor will copy the Buffer's
contents, interpreted as an array of integers, and not as a byte
sequence of the target type.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([1, 2, 3, 4]);
const uint32array = new Uint32Array(buf);
console.log(uint32array);
// Prints: Uint32Array(4) [ 1, 2, 3, 4 ]
const { Buffer } = require('node:buffer');
const buf = Buffer.from([1, 2, 3, 4]);
const uint32array = new Uint32Array(buf);
console.log(uint32array);
// Prints: Uint32Array(4) [ 1, 2, 3, 4 ]
Passing the Buffer's underlying ArrayBuffer will create a
TypedArray that shares its memory with the Buffer.
import { Buffer } from 'node:buffer';
const buf = Buffer.from('hello', 'utf16le');
const uint16array = new Uint16Array(
buf.buffer,
buf.byteOffset,
buf.length / Uint16Array.BYTES_PER_ELEMENT);
console.log(uint16array);
// Prints: Uint16Array(5) [ 104, 101, 108, 108, 111 ]
const { Buffer } = require('node:buffer');
const buf = Buffer.from('hello', 'utf16le');
const uint16array = new Uint16Array(
buf.buffer,
buf.byteOffset,
buf.length / Uint16Array.BYTES_PER_ELEMENT);
console.log(uint16array);
// Prints: Uint16Array(5) [ 104, 101, 108, 108, 111 ]
It is possible to create a new Buffer that shares the same allocated
memory as a TypedArray instance by using the TypedArray object’s
.buffer property in the same way. Buffer.from() behaves like new
Uint8Array() in this context.
import { Buffer } from 'node:buffer';
const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;
// Copies the contents of `arr`.
const buf1 = Buffer.from(arr);
// Shares memory with `arr`.
const buf2 = Buffer.from(arr.buffer);
console.log(buf1);
// Prints: <Buffer 88 a0>
console.log(buf2);
// Prints: <Buffer 88 13 a0 0f>
arr[1] = 6000;
console.log(buf1);
// Prints: <Buffer 88 a0>
console.log(buf2);
// Prints: <Buffer 88 13 70 17>
const { Buffer } = require('node:buffer');
const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;
// Copies the contents of `arr`.
const buf1 = Buffer.from(arr);
// Shares memory with `arr`.
const buf2 = Buffer.from(arr.buffer);
console.log(buf1);
// Prints: <Buffer 88 a0>
console.log(buf2);
// Prints: <Buffer 88 13 a0 0f>
arr[1] = 6000;
console.log(buf1);
// Prints: <Buffer 88 a0>
console.log(buf2);
// Prints: <Buffer 88 13 70 17>
When creating a Buffer using a TypedArray’s .buffer, it is possible to
use only a portion of the underlying ArrayBuffer by passing in
byteOffset and length parameters.
import { Buffer } from 'node:buffer';
const arr = new Uint16Array(20);
const buf = Buffer.from(arr.buffer, 0, 16);
console.log(buf.length);
// Prints: 16
const { Buffer } = require('node:buffer');
const arr = new Uint16Array(20);
const buf = Buffer.from(arr.buffer, 0, 16);
console.log(buf.length);
// Prints: 16
Buffer.from() and TypedArray.from() have different signatures
and implementations. Specifically, the TypedArray variants accept a
second argument that is a mapping function that is invoked on every
element of the typed array:
TypedArray.from(source[, mapFn[, thisArg]])
The Buffer.from() method, however, does not support the use of a
mapping function:
Buffer.from(array)
Buffer.from(buffer)
Buffer.from(arrayBuffer[, byteOffset[, length]])
Buffer.from(string[, encoding])
Buffers and iteration
Buffer instances can be iterated over using for..of syntax:
import { Buffer } from 'node:buffer';
const buf = Buffer.from([1, 2, 3]);
for (const b of buf) {
console.log(b);
}
// Prints:
// 1
// 2
// 3
const { Buffer } = require('node:buffer');
const buf = Buffer.from([1, 2, 3]);
for (const b of buf) {
console.log(b);
}
// Prints:
// 1
// 2
// 3
Additionally, the buf.values(), buf.keys(), and buf.entries() methods
can be used to create iterators.
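For example, a short sketch using buf.values():
import { Buffer } from 'node:buffer';
const buf = Buffer.from('abc');
for (const value of buf.values()) {
console.log(value);
}
// Prints:
// 97
// 98
// 99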
Class: Blob
A Blob encapsulates immutable, raw data that can be safely shared
across multiple worker threads.
new buffer.Blob([sources[, options]])
sources {string[]|ArrayBuffer[]|TypedArray[]|DataView[]|Blob[]}
An array of string, {ArrayBuffer}, {TypedArray}, {DataView}, or
{Blob} objects, or any mix of such objects, that will be stored
within the Blob.
options {Object}
endings {string} One of either 'transparent' or 'native'.
When set to 'native', line endings in string source parts will
be converted to the platform native line-ending as specified
by require('node:os').EOL.
type {string} The Blob content-type. The intent is for type to
convey the MIME media type of the data, however no
validation of the type format is performed.
Creates a new Blob object containing a concatenation of the given
sources.
{ArrayBuffer}, {TypedArray}, {DataView}, and {Buffer} sources are
copied into the Blob and can therefore be safely modified after the
Blob is created.
String sources are encoded as UTF-8 byte sequences and copied into
the Blob. Unmatched surrogate pairs within each string part will be
replaced by Unicode U+FFFD replacement characters.
blob.arrayBuffer()
Returns: {Promise}
Returns a promise that fulfills with an {ArrayBuffer} containing a
copy of the Blob data.
blob.size
The total size of the Blob in bytes.
blob.slice([start[, end[, type]]])
start {number} The starting index.
end {number} The ending index.
type {string} The content-type for the new Blob
Creates and returns a new Blob containing a subset of this Blob
object's data. The original Blob is not altered.
blob.stream()
Returns: {ReadableStream}
Returns a new ReadableStream that allows the content of the Blob to be
read.
blob.text()
Returns: {Promise}
Returns a promise that fulfills with the contents of the Blob decoded
as a UTF-8 string.
blob.type
Type: {string}
The content-type of the Blob.
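A brief sketch tying these members together:
import { Blob } from 'node:buffer';
const blob = new Blob(['hello'], { type: 'text/plain' });
console.log(blob.size);
// Prints: 5
console.log(blob.type);
// Prints: text/plain
blob.text().then(console.log);
// Prints: hello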
Blob objects and MessageChannel
Once a {Blob} object is created, it can be sent via MessagePort to
multiple destinations without transferring or immediately copying
the data. The data contained by the Blob is copied only when the
arrayBuffer() or text() methods are called.
import { Blob } from 'node:buffer';
import { setTimeout as delay } from 'node:timers/promises';
const blob = new Blob(['hello there']);
const mc1 = new MessageChannel();
const mc2 = new MessageChannel();
mc1.port1.onmessage = async ({ data }) => {
console.log(await data.arrayBuffer());
mc1.port1.close();
};
mc2.port1.onmessage = async ({ data }) => {
await delay(1000);
console.log(await data.arrayBuffer());
mc2.port1.close();
};
mc1.port2.postMessage(blob);
mc2.port2.postMessage(blob);
// The Blob is still usable after posting.
blob.text().then(console.log);
const { Blob } = require('node:buffer');
const { setTimeout: delay } = require('node:timers/promises');
const blob = new Blob(['hello there']);
const mc1 = new MessageChannel();
const mc2 = new MessageChannel();
mc1.port1.onmessage = async ({ data }) => {
console.log(await data.arrayBuffer());
mc1.port1.close();
};
mc2.port1.onmessage = async ({ data }) => {
await delay(1000);
console.log(await data.arrayBuffer());
mc2.port1.close();
};
mc1.port2.postMessage(blob);
mc2.port2.postMessage(blob);
// The Blob is still usable after posting.
blob.text().then(console.log);
Class: Buffer
The Buffer class is a global type for dealing with binary data directly.
It can be constructed in a variety of ways.
Static method: Buffer.alloc(size[, fill[,
encoding]])
size {integer} The desired length of the new Buffer.
fill {string|Buffer|Uint8Array|integer} A value to pre-fill the
new Buffer with. Default: 0.
encoding {string} If fill is a string, this is its encoding. Default:
'utf8'.
Allocates a new Buffer of size bytes. If fill is undefined, the Buffer
will be zero-filled.
import { Buffer } from 'node:buffer';
const buf = Buffer.alloc(5);
console.log(buf);
// Prints: <Buffer 00 00 00 00 00>
const { Buffer } = require('node:buffer');
const buf = Buffer.alloc(5);
console.log(buf);
// Prints: <Buffer 00 00 00 00 00>
If size is larger than buffer.constants.MAX_LENGTH or smaller than 0,
ERR_OUT_OF_RANGE is thrown.
If fill is specified, the allocated Buffer will be initialized by calling
buf.fill(fill).
import { Buffer } from 'node:buffer';
const buf = Buffer.alloc(5, 'a');
console.log(buf);
// Prints: <Buffer 61 61 61 61 61>
const { Buffer } = require('node:buffer');
const buf = Buffer.alloc(5, 'a');
console.log(buf);
// Prints: <Buffer 61 61 61 61 61>
If both fill and encoding are specified, the allocated Buffer will be
initialized by calling buf.fill(fill, encoding).
import { Buffer } from 'node:buffer';
const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');
console.log(buf);
// Prints: <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>
const { Buffer } = require('node:buffer');
const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');
console.log(buf);
// Prints: <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>
Calling Buffer.alloc() can be measurably slower than the alternative
Buffer.allocUnsafe() but ensures that the newly created Buffer
instance contents will never contain sensitive data from previous
allocations, including data that might not have been allocated for
Buffers.
A TypeError will be thrown if size is not a number.
Static method: Buffer.allocUnsafe(size)
size {integer} The desired length of the new Buffer.
Allocates a new Buffer of size bytes. If size is larger than
buffer.constants.MAX_LENGTH or smaller than 0, ERR_OUT_OF_RANGE is
thrown.
The underlying memory for Buffer instances created in this way is
not initialized. The contents of the newly created Buffer are unknown
and may contain sensitive data. Use Buffer.alloc() instead to
initialize Buffer instances with zeroes.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(10);
console.log(buf);
// Prints (contents may vary): <Buffer a0 8b 28 3f 01 00 00 00 50 32>
buf.fill(0);
console.log(buf);
// Prints: <Buffer 00 00 00 00 00 00 00 00 00 00>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(10);
console.log(buf);
// Prints (contents may vary): <Buffer a0 8b 28 3f 01 00 00 00 50 32>
buf.fill(0);
console.log(buf);
// Prints: <Buffer 00 00 00 00 00 00 00 00 00 00>
A TypeError will be thrown if size is not a number.
The Buffer module pre-allocates an internal Buffer instance of size
Buffer.poolSize that is used as a pool for the fast allocation of new
Buffer instances created using Buffer.allocUnsafe(),
Buffer.from(array), and Buffer.concat() only when size is less than
Buffer.poolSize >>> 1 (floor of Buffer.poolSize divided by two).
Use of this pre-allocated internal memory pool is a key difference
between calling Buffer.alloc(size, fill)
vs. Buffer.allocUnsafe(size).fill(fill). Specifically,
Buffer.alloc(size, fill) will never use the internal Buffer pool,
while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer
pool if size is less than or equal to half Buffer.poolSize. The
difference is subtle but can be important when an application
requires the additional performance that Buffer.allocUnsafe()
provides.
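As a rough illustration (the pool is an implementation detail, so the
exact values may vary between Node.js versions), the backing
ArrayBuffer of a small Buffer.allocUnsafe() allocation is typically
the shared pool, while Buffer.alloc() always receives its own
allocation:
import { Buffer } from 'node:buffer';
const pooled = Buffer.allocUnsafe(10);
const unpooled = Buffer.alloc(10);
console.log(pooled.buffer.byteLength);
// Prints (typically): 8192, i.e. `Buffer.poolSize` – the shared pool.
console.log(unpooled.buffer.byteLength);
// Prints: 10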
Static method:
Buffer.allocUnsafeSlow(size)
size {integer} The desired length of the new Buffer.
Allocates a new Buffer of size bytes. If size is larger than
buffer.constants.MAX_LENGTH or smaller than 0, ERR_OUT_OF_RANGE is
thrown. A zero-length Buffer is created if size is 0.
The underlying memory for Buffer instances created in this way is
not initialized. The contents of the newly created Buffer are unknown
and may contain sensitive data. Use buf.fill(0) to initialize such
Buffer instances with zeroes.
When using Buffer.allocUnsafe() to allocate new Buffer instances,
allocations under 4 KiB are sliced from a single pre-allocated Buffer.
This allows applications to avoid the garbage collection overhead of
creating many individually allocated Buffer instances. This approach
improves both performance and memory usage by eliminating the
need to track and clean up as many individual ArrayBuffer objects.
However, in the case where a developer may need to retain a small
chunk of memory from a pool for an indeterminate amount of time,
it may be appropriate to create an un-pooled Buffer instance using
Buffer.allocUnsafeSlow() and then copying out the relevant bits.
import { Buffer } from 'node:buffer';
// Need to keep around a few small chunks of memory.
const store = [];
socket.on('readable', () => {
let data;
while (null !== (data = socket.read())) {
// Allocate for retained data.
const sb = Buffer.allocUnsafeSlow(10);
// Copy the data into the new allocation.
data.copy(sb, 0, 0, 10);
store.push(sb);
}
});
const { Buffer } = require('node:buffer');
// Need to keep around a few small chunks of memory.
const store = [];
socket.on('readable', () => {
let data;
while (null !== (data = socket.read())) {
// Allocate for retained data.
const sb = Buffer.allocUnsafeSlow(10);
// Copy the data into the new allocation.
data.copy(sb, 0, 0, 10);
store.push(sb);
}
});
A TypeError will be thrown if size is not a number.
Static method: Buffer.byteLength(string[,
encoding])
string {string|Buffer|TypedArray|DataView|ArrayBuffer|SharedArrayBuffer}
A value to calculate the length of.
encoding {string} If string is a string, this is its encoding.
Default: 'utf8'.
Returns: {integer} The number of bytes contained within string.
Returns the byte length of a string when encoded using encoding. This
is not the same as String.prototype.length, which does not account
for the encoding that is used to convert the string into bytes.
For 'base64', 'base64url', and 'hex', this function assumes valid
input. For strings that contain non-base64/hex-encoded data
(e.g. whitespace), the return value might be greater than the length
of a Buffer created from the string.
import { Buffer } from 'node:buffer';
const str = '\u00bd + \u00bc = \u00be';
console.log(`${str}: ${str.length} characters, ` +
`${Buffer.byteLength(str, 'utf8')} bytes`);
// Prints: ½ + ¼ = ¾: 9 characters, 12 bytes
const { Buffer } = require('node:buffer');
const str = '\u00bd + \u00bc = \u00be';
console.log(`${str}: ${str.length} characters, ` +
`${Buffer.byteLength(str, 'utf8')} bytes`);
// Prints: ½ + ¼ = ¾: 9 characters, 12 bytes
When string is a Buffer/DataView/TypedArray/ArrayBuffer/
SharedArrayBuffer, the byte length as reported by .byteLength is
returned.
Static method: Buffer.compare(buf1, buf2)
buf1 {Buffer|Uint8Array}
buf2 {Buffer|Uint8Array}
Returns: {integer} Either -1, 0, or 1, depending on the result of
the comparison. See buf.compare() for details.
Compares buf1 to buf2, typically for the purpose of sorting arrays of
Buffer instances. This is equivalent to calling buf1.compare(buf2).
import { Buffer } from 'node:buffer';
const buf1 = Buffer.from('1234');
const buf2 = Buffer.from('0123');
const arr = [buf1, buf2];
console.log(arr.sort(Buffer.compare));
// Prints: [ <Buffer 30 31 32 33>, <Buffer 31 32 33 34> ]
// (This result is equal to: [buf2, buf1].)
const { Buffer } = require('node:buffer');
const buf1 = Buffer.from('1234');
const buf2 = Buffer.from('0123');
const arr = [buf1, buf2];
console.log(arr.sort(Buffer.compare));
// Prints: [ <Buffer 30 31 32 33>, <Buffer 31 32 33 34> ]
// (This result is equal to: [buf2, buf1].)
Static method: Buffer.concat(list[,
totalLength])
list {Buffer[] | Uint8Array[]} List of Buffer or Uint8Array
instances to concatenate.
totalLength {integer} Total length of the Buffer instances in list
when concatenated.
Returns: {Buffer}
Returns a new Buffer which is the result of concatenating all the
Buffer instances in the list together.
If the list has no items, or if the totalLength is 0, then a new zero-
length Buffer is returned.
If totalLength is not provided, it is calculated from the Buffer
instances in list by adding their lengths.
If totalLength is provided, it is coerced to an unsigned integer. If the
combined length of the Buffers in list exceeds totalLength, the result
is truncated to totalLength.
import { Buffer } from 'node:buffer';
// Create a single `Buffer` from a list of three `Buffer` instances.
const buf1 = Buffer.alloc(10);
const buf2 = Buffer.alloc(14);
const buf3 = Buffer.alloc(18);
const totalLength = buf1.length + buf2.length + buf3.length;
console.log(totalLength);
// Prints: 42
const bufA = Buffer.concat([buf1, buf2, buf3], totalLength);
console.log(bufA);
// Prints: <Buffer 00 00 00 00 ...>
console.log(bufA.length);
// Prints: 42
const { Buffer } = require('node:buffer');
// Create a single `Buffer` from a list of three `Buffer` instances.
const buf1 = Buffer.alloc(10);
const buf2 = Buffer.alloc(14);
const buf3 = Buffer.alloc(18);
const totalLength = buf1.length + buf2.length + buf3.length;
console.log(totalLength);
// Prints: 42
const bufA = Buffer.concat([buf1, buf2, buf3], totalLength);
console.log(bufA);
// Prints: <Buffer 00 00 00 00 ...>
console.log(bufA.length);
// Prints: 42
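For example, a short sketch of the truncation behavior when
totalLength is smaller than the combined length of the list:
import { Buffer } from 'node:buffer';
const truncated = Buffer.concat([Buffer.from('abc'), Buffer.from('def')], 4);
console.log(truncated.toString());
// Prints: abcd
console.log(truncated.length);
// Prints: 4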
Buffer.concat() may also use the internal Buffer pool like
Buffer.allocUnsafe() does.
Static method: Buffer.copyBytesFrom(view[,
offset[, length]])
view {TypedArray} The {TypedArray} to copy.
offset {integer} The starting offset within view. Default: 0.
length {integer} The number of elements from view to copy.
Default: view.length - offset.
Copies the underlying memory of view into a new Buffer.
const u16 = new Uint16Array([0, 0xffff]);
const buf = Buffer.copyBytesFrom(u16, 1, 1);
u16[1] = 0;
console.log(buf.length); // 2
console.log(buf[0]); // 255
console.log(buf[1]); // 255
Static method: Buffer.from(array)
array {integer[]}
Allocates a new Buffer using an array of bytes in the range 0 – 255.
Array entries outside that range will be truncated to fit into it.
import { Buffer } from 'node:buffer';
// Creates a new Buffer containing the UTF-8 bytes of the string 'buffer'.
const buf = Buffer.from([0x62, 0x75, 0x66, 0x66, 0x65, 0x72]);
const { Buffer } = require('node:buffer');
// Creates a new Buffer containing the UTF-8 bytes of the string 'buffer'.
const buf = Buffer.from([0x62, 0x75, 0x66, 0x66, 0x65, 0x72]);
If array is an Array-like object (that is, one with a length property of
type number), it is treated as if it is an array, unless it is a Buffer or a
Uint8Array. This means all other TypedArray variants get treated as an
Array. To create a Buffer from the bytes backing a TypedArray, use
Buffer.copyBytesFrom().
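For example, a short sketch contrasting the two (the
Buffer.copyBytesFrom() output assumes a little-endian platform):
import { Buffer } from 'node:buffer';
const u16 = new Uint16Array([0x0102, 0x0304]);
// Treated as an array of integers: each entry is truncated with `& 255`.
console.log(Buffer.from(u16));
// Prints: <Buffer 02 04>
// Copies the underlying bytes instead.
console.log(Buffer.copyBytesFrom(u16));
// Prints: <Buffer 02 01 04 03>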
A TypeError will be thrown if array is not an Array or another type
appropriate for Buffer.from() variants.
Buffer.from(array) and Buffer.from(string) may also use the internal
Buffer pool like Buffer.allocUnsafe() does.
Static method: Buffer.from(arrayBuffer[,
byteOffset[, length]])
arrayBuffer {ArrayBuffer|SharedArrayBuffer} An ArrayBuffer,
SharedArrayBuffer, for example the .buffer property of a
TypedArray.
byteOffset {integer} Index of first byte to expose. Default: 0.
length {integer} Number of bytes to expose. Default:
arrayBuffer.byteLength - byteOffset.
This creates a view of the ArrayBuffer without copying the underlying
memory. For example, when passed a reference to the .buffer
property of a TypedArray instance, the newly created Buffer will share
the same allocated memory as the TypedArray’s underlying
ArrayBuffer.
import { Buffer } from 'node:buffer';
const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;
// Shares memory with `arr`.
const buf = Buffer.from(arr.buffer);
console.log(buf);
// Prints: <Buffer 88 13 a0 0f>
// Changing the original Uint16Array changes the Buffer also.
arr[1] = 6000;
console.log(buf);
// Prints: <Buffer 88 13 70 17>
const { Buffer } = require('node:buffer');
const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;
// Shares memory with `arr`.
const buf = Buffer.from(arr.buffer);
console.log(buf);
// Prints: <Buffer 88 13 a0 0f>
// Changing the original Uint16Array changes the Buffer also.
arr[1] = 6000;
console.log(buf);
// Prints: <Buffer 88 13 70 17>
The optional byteOffset and length arguments specify a memory
range within the arrayBuffer that will be shared by the Buffer.
import { Buffer } from 'node:buffer';
const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab, 0, 2);
console.log(buf.length);
// Prints: 2
const { Buffer } = require('node:buffer');
const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab, 0, 2);
console.log(buf.length);
// Prints: 2
A TypeError will be thrown if arrayBuffer is not an ArrayBuffer or a
SharedArrayBuffer or another type appropriate for Buffer.from()
variants.
It is important to remember that a backing ArrayBuffer can cover a
range of memory that extends beyond the bounds of a TypedArray
view. A new Buffer created using the buffer property of a TypedArray
may extend beyond the range of the TypedArray:
import { Buffer } from 'node:buffer';
const arrA = Uint8Array.from([0x63, 0x64, 0x65, 0x66]); // 4 elements
const arrB = new Uint8Array(arrA.buffer, 1, 2); // 2 elements
console.log(arrA.buffer === arrB.buffer); // true
const buf = Buffer.from(arrB.buffer);
console.log(buf);
// Prints: <Buffer 63 64 65 66>
const { Buffer } = require('node:buffer');
const arrA = Uint8Array.from([0x63, 0x64, 0x65, 0x66]); // 4 elements
const arrB = new Uint8Array(arrA.buffer, 1, 2); // 2 elements
console.log(arrA.buffer === arrB.buffer); // true
const buf = Buffer.from(arrB.buffer);
console.log(buf);
// Prints: <Buffer 63 64 65 66>
Static method: Buffer.from(buffer)
buffer {Buffer|Uint8Array} An existing Buffer or Uint8Array from
which to copy data.
Copies the passed buffer data onto a new Buffer instance.
import { Buffer } from 'node:buffer';
const buf1 = Buffer.from('buffer');
const buf2 = Buffer.from(buf1);
buf1[0] = 0x61;
console.log(buf1.toString());
// Prints: auffer
console.log(buf2.toString());
// Prints: buffer
const { Buffer } = require('node:buffer');
const buf1 = Buffer.from('buffer');
const buf2 = Buffer.from(buf1);
buf1[0] = 0x61;
console.log(buf1.toString());
// Prints: auffer
console.log(buf2.toString());
// Prints: buffer
A TypeError will be thrown if buffer is not a Buffer or another type
appropriate for Buffer.from() variants.
Static method: Buffer.from(object[,
offsetOrEncoding[, length]])
object {Object} An object supporting Symbol.toPrimitive or
valueOf().
offsetOrEncoding {integer|string} A byte-offset or encoding.
length {integer} A length.
For objects whose valueOf() function returns a value not strictly
equal to object, returns Buffer.from(object.valueOf(),
offsetOrEncoding, length).
import { Buffer } from 'node:buffer';
const buf = Buffer.from(new String('this is a test'));
// Prints: <Buffer 74 68 69 73 20 69 73 20 61 20 74 65 73 74>
const { Buffer } = require('node:buffer');
const buf = Buffer.from(new String('this is a test'));
// Prints: <Buffer 74 68 69 73 20 69 73 20 61 20 74 65 73 74>
For objects that support Symbol.toPrimitive, returns
Buffer.from(object[Symbol.toPrimitive]('string'),
offsetOrEncoding).
import { Buffer } from 'node:buffer';
class Foo {
[Symbol.toPrimitive]() {
return 'this is a test';
}
}
const buf = Buffer.from(new Foo(), 'utf8');
// Prints: <Buffer 74 68 69 73 20 69 73 20 61 20 74 65 73 74>
const { Buffer } = require('node:buffer');
class Foo {
[Symbol.toPrimitive]() {
return 'this is a test';
}
}
const buf = Buffer.from(new Foo(), 'utf8');
// Prints: <Buffer 74 68 69 73 20 69 73 20 61 20 74 65 73 74>
A TypeError will be thrown if object does not have the mentioned
methods or is not of another type appropriate for Buffer.from()
variants.
Static method: Buffer.from(string[,
encoding])
string {string} A string to encode.
encoding {string} The encoding of string. Default: 'utf8'.
Creates a new Buffer containing string. The encoding parameter
identifies the character encoding to be used when converting string
into bytes.
import { Buffer } from 'node:buffer';
const buf1 = Buffer.from('this is a tést');
const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');
console.log(buf1.toString());
// Prints: this is a tést
console.log(buf2.toString());
// Prints: this is a tést
console.log(buf1.toString('latin1'));
// Prints: this is a tést
const { Buffer } = require('node:buffer');
const buf1 = Buffer.from('this is a tést');
const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');
console.log(buf1.toString());
// Prints: this is a tést
console.log(buf2.toString());
// Prints: this is a tést
console.log(buf1.toString('latin1'));
// Prints: this is a tést
A TypeError will be thrown if string is not a string or another type
appropriate for Buffer.from() variants.
Static method: Buffer.isBuffer(obj)
obj {Object}
Returns: {boolean}
Returns true if obj is a Buffer, false otherwise.
import { Buffer } from 'node:buffer';
Buffer.isBuffer(Buffer.alloc(10)); // true
Buffer.isBuffer(Buffer.from('foo')); // true
Buffer.isBuffer('a string'); // false
Buffer.isBuffer([]); // false
Buffer.isBuffer(new Uint8Array(1024)); // false
const { Buffer } = require('node:buffer');
Buffer.isBuffer(Buffer.alloc(10)); // true
Buffer.isBuffer(Buffer.from('foo')); // true
Buffer.isBuffer('a string'); // false
Buffer.isBuffer([]); // false
Buffer.isBuffer(new Uint8Array(1024)); // false
Static method: Buffer.isEncoding(encoding)
encoding {string} A character encoding name to check.
Returns: {boolean}
Returns true if encoding is the name of a supported character
encoding, or false otherwise.
import { Buffer } from 'node:buffer';
console.log(Buffer.isEncoding('utf8'));
// Prints: true
console.log(Buffer.isEncoding('hex'));
// Prints: true
console.log(Buffer.isEncoding('utf/8'));
// Prints: false
console.log(Buffer.isEncoding(''));
// Prints: false
const { Buffer } = require('node:buffer');
console.log(Buffer.isEncoding('utf8'));
// Prints: true
console.log(Buffer.isEncoding('hex'));
// Prints: true
console.log(Buffer.isEncoding('utf/8'));
// Prints: false
console.log(Buffer.isEncoding(''));
// Prints: false
Class property: Buffer.poolSize
{integer} Default: 8192
This is the size (in bytes) of pre-allocated internal Buffer instances
used for pooling. This value may be modified.
buf[index]
index {integer}
The index operator [index] can be used to get and set the octet at
position index in buf. The values refer to individual bytes, so the legal
value range is between 0x00 and 0xFF (hex) or 0 and 255 (decimal).
This operator is inherited from Uint8Array, so its behavior on out-of-
bounds access is the same as Uint8Array. In other words, buf[index]
returns undefined when index is negative or greater or equal to
buf.length, and buf[index] = value does not modify the buffer if index
is negative or >= buf.length.
import { Buffer } from 'node:buffer';
// Copy an ASCII string into a `Buffer` one byte at a time.
// (This only works for ASCII-only strings. In general, one should use
// `Buffer.from()` to perform this conversion.)
const str = 'Node.js';
const buf = Buffer.allocUnsafe(str.length);
for (let i = 0; i < str.length; i++) {
buf[i] = str.charCodeAt(i);
}
console.log(buf.toString('utf8'));
// Prints: Node.js
const { Buffer } = require('node:buffer');
// Copy an ASCII string into a `Buffer` one byte at a time.
// (This only works for ASCII-only strings. In general, one should use
// `Buffer.from()` to perform this conversion.)
const str = 'Node.js';
const buf = Buffer.allocUnsafe(str.length);
for (let i = 0; i < str.length; i++) {
buf[i] = str.charCodeAt(i);
}
console.log(buf.toString('utf8'));
// Prints: Node.js
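A short sketch of the out-of-bounds behavior described above:
import { Buffer } from 'node:buffer';
const buf = Buffer.from([1, 2, 3]);
console.log(buf[5]);
// Prints: undefined
buf[5] = 42; // Silently ignored; the buffer is not modified.
console.log(buf);
// Prints: <Buffer 01 02 03>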
buf.buffer
{ArrayBuffer} The underlying ArrayBuffer object based on which
this Buffer object is created.
This ArrayBuffer is not guaranteed to correspond exactly to the
original Buffer. See the notes on buf.byteOffset for details.
import { Buffer } from 'node:buffer';
const arrayBuffer = new ArrayBuffer(16);
const buffer = Buffer.from(arrayBuffer);
console.log(buffer.buffer === arrayBuffer);
// Prints: true
const { Buffer } = require('node:buffer');
const arrayBuffer = new ArrayBuffer(16);
const buffer = Buffer.from(arrayBuffer);
console.log(buffer.buffer === arrayBuffer);
// Prints: true
buf.byteOffset
{integer} The byteOffset of the Buffer's underlying ArrayBuffer
object.
When setting byteOffset in Buffer.from(ArrayBuffer, byteOffset,
length), or sometimes when allocating a Buffer smaller than
Buffer.poolSize, the buffer does not start from a zero offset on the
underlying ArrayBuffer.
This can cause problems when accessing the underlying ArrayBuffer
directly using buf.buffer, as other parts of the ArrayBuffer may be
unrelated to the Buffer object itself.
A common issue when creating a TypedArray object that shares its
memory with a Buffer is that in this case one needs to specify the
byteOffset correctly:
import { Buffer } from 'node:buffer';
// Create a buffer smaller than `Buffer.poolSize`.
const nodeBuffer = Buffer.from([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]);
// When casting the Node.js Buffer to an Int8Array, use the byteOffset
// to refer only to the part of `nodeBuffer.buffer` that contains the memory
// for `nodeBuffer`.
new Int8Array(nodeBuffer.buffer, nodeBuffer.byteOffset, nodeBuffer.length);
const { Buffer } = require('node:buffer');
// Create a buffer smaller than `Buffer.poolSize`.
const nodeBuffer = Buffer.from([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]);
// When casting the Node.js Buffer to an Int8Array, use the byteOffset
// to refer only to the part of `nodeBuffer.buffer` that contains the memory
// for `nodeBuffer`.
new Int8Array(nodeBuffer.buffer, nodeBuffer.byteOffset, nodeBuffer.length);
buf.compare(target[, targetStart[,
targetEnd[, sourceStart[, sourceEnd]]]])
target {Buffer|Uint8Array} A Buffer or Uint8Array with which to
compare buf.
targetStart {integer} The offset within target at which to begin
comparison. Default: 0.
targetEnd {integer} The offset within target at which to end
comparison (not inclusive). Default: target.length.
sourceStart {integer} The offset within buf at which to begin
comparison. Default: 0.
sourceEnd {integer} The offset within buf at which to end
comparison (not inclusive). Default: buf.length.
Returns: {integer}
Compares buf with target and returns a number indicating whether
buf comes before, after, or is the same as target in sort order.
Comparison is based on the actual sequence of bytes in each Buffer.
0 is returned if target is the same as buf.
1 is returned if target should come before buf when sorted.
-1 is returned if target should come after buf when sorted.
import { Buffer } from 'node:buffer';
const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('BCD');
const buf3 = Buffer.from('ABCD');
console.log(buf1.compare(buf1));
// Prints: 0
console.log(buf1.compare(buf2));
// Prints: -1
console.log(buf1.compare(buf3));
// Prints: -1
console.log(buf2.compare(buf1));
// Prints: 1
console.log(buf2.compare(buf3));
// Prints: 1
console.log([buf1, buf2, buf3].sort(Buffer.compare));
// Prints: [ <Buffer 41 42 43>, <Buffer 41 42 43 44>, <Buffer 42 43 44> ]
// (This result is equal to: [buf1, buf3, buf2].)
const { Buffer } = require('node:buffer');
const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('BCD');
const buf3 = Buffer.from('ABCD');
console.log(buf1.compare(buf1));
// Prints: 0
console.log(buf1.compare(buf2));
// Prints: -1
console.log(buf1.compare(buf3));
// Prints: -1
console.log(buf2.compare(buf1));
// Prints: 1
console.log(buf2.compare(buf3));
// Prints: 1
console.log([buf1, buf2, buf3].sort(Buffer.compare));
// Prints: [ <Buffer 41 42 43>, <Buffer 41 42 43 44>, <Buffer 42 43 44> ]
// (This result is equal to: [buf1, buf3, buf2].)
The optional targetStart, targetEnd, sourceStart, and sourceEnd
arguments can be used to limit the comparison to specific ranges
within target and buf respectively.
import { Buffer } from 'node:buffer';
const buf1 = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8, 9]);
const buf2 = Buffer.from([5, 6, 7, 8, 9, 1, 2, 3, 4]);
console.log(buf1.compare(buf2, 5, 9, 0, 4));
// Prints: 0
console.log(buf1.compare(buf2, 0, 6, 4));
// Prints: -1
console.log(buf1.compare(buf2, 5, 6, 5));
// Prints: 1
const { Buffer } = require('node:buffer');
const buf1 = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8, 9]);
const buf2 = Buffer.from([5, 6, 7, 8, 9, 1, 2, 3, 4]);
console.log(buf1.compare(buf2, 5, 9, 0, 4));
// Prints: 0
console.log(buf1.compare(buf2, 0, 6, 4));
// Prints: -1
console.log(buf1.compare(buf2, 5, 6, 5));
// Prints: 1
ERR_OUT_OF_RANGE is thrown if targetStart < 0, sourceStart < 0,
targetEnd > target.byteLength, or sourceEnd > source.byteLength.
buf.copy(target[, targetStart[,
sourceStart[, sourceEnd]]])
target {Buffer|Uint8Array} A Buffer or Uint8Array to copy into.
targetStart {integer} The offset within target at which to begin
writing. Default: 0.
sourceStart {integer} The offset within buf from which to begin
copying. Default: 0.
sourceEnd {integer} The offset within buf at which to stop copying
(not inclusive). Default: buf.length.
Returns: {integer} The number of bytes copied.
Copies data from a region of buf to a region in target, even if the
target memory region overlaps with buf.
TypedArray.prototype.set() performs the same operation, and is
available for all TypedArrays, including Node.js Buffers, although it
takes different function arguments.
import { Buffer } from 'node:buffer';
// Create two `Buffer` instances.
const buf1 = Buffer.allocUnsafe(26);
const buf2 = Buffer.allocUnsafe(26).fill('!');
for (let i = 0; i < 26; i++) {
// 97 is the decimal ASCII value for 'a'.
buf1[i] = i + 97;
}
// Copy `buf1` bytes 16 through 19 into `buf2` starting at byte 8 of `buf2`.
buf1.copy(buf2, 8, 16, 20);
// This is equivalent to:
// buf2.set(buf1.subarray(16, 20), 8);
console.log(buf2.toString('ascii', 0, 25));
// Prints: !!!!!!!!qrst!!!!!!!!!!!!!
const { Buffer } = require('node:buffer');
// Create two `Buffer` instances.
const buf1 = Buffer.allocUnsafe(26);
const buf2 = Buffer.allocUnsafe(26).fill('!');
for (let i = 0; i < 26; i++) {
// 97 is the decimal ASCII value for 'a'.
buf1[i] = i + 97;
}
// Copy `buf1` bytes 16 through 19 into `buf2` starting at byte 8 of `buf2`.
buf1.copy(buf2, 8, 16, 20);
// This is equivalent to:
// buf2.set(buf1.subarray(16, 20), 8);
console.log(buf2.toString('ascii', 0, 25));
// Prints: !!!!!!!!qrst!!!!!!!!!!!!!
import { Buffer } from 'node:buffer';
// Create a `Buffer` and copy data from one region to an overlapping region
// within the same `Buffer`.
const buf = Buffer.allocUnsafe(26);
for (let i = 0; i < 26; i++) {
// 97 is the decimal ASCII value for 'a'.
buf[i] = i + 97;
}
buf.copy(buf, 0, 4, 10);
console.log(buf.toString());
// Prints: efghijghijklmnopqrstuvwxyz
const { Buffer } = require('node:buffer');
// Create a `Buffer` and copy data from one region to an overlapping region
// within the same `Buffer`.
const buf = Buffer.allocUnsafe(26);
for (let i = 0; i < 26; i++) {
// 97 is the decimal ASCII value for 'a'.
buf[i] = i + 97;
}
buf.copy(buf, 0, 4, 10);
console.log(buf.toString());
// Prints: efghijghijklmnopqrstuvwxyz
buf.entries()
Returns: {Iterator}
Creates and returns an iterator of [index, byte] pairs from the
contents of buf.
import { Buffer } from 'node:buffer';
// Log the entire contents of a `Buffer`.
const buf = Buffer.from('buffer');
for (const pair of buf.entries()) {
console.log(pair);
}
// Prints:
// [0, 98]
// [1, 117]
// [2, 102]
// [3, 102]
// [4, 101]
// [5, 114]
const { Buffer } = require('node:buffer');
// Log the entire contents of a `Buffer`.
const buf = Buffer.from('buffer');
for (const pair of buf.entries()) {
console.log(pair);
}
// Prints:
// [0, 98]
// [1, 117]
// [2, 102]
// [3, 102]
// [4, 101]
// [5, 114]
buf.equals(otherBuffer)
otherBuffer {Buffer|Uint8Array} A Buffer or Uint8Array with
which to compare buf.
Returns: {boolean}
Returns true if both buf and otherBuffer have exactly the same bytes,
false otherwise. Equivalent to buf.compare(otherBuffer) === 0.
import { Buffer } from 'node:buffer';
const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('414243', 'hex');
const buf3 = Buffer.from('ABCD');
console.log(buf1.equals(buf2));
// Prints: true
console.log(buf1.equals(buf3));
// Prints: false
const { Buffer } = require('node:buffer');
const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('414243', 'hex');
const buf3 = Buffer.from('ABCD');
console.log(buf1.equals(buf2));
// Prints: true
console.log(buf1.equals(buf3));
// Prints: false
buf.fill(value[, offset[, end]][,
encoding])
value {string|Buffer|Uint8Array|integer} The value with which to
fill buf. Empty value (string, Uint8Array, Buffer) is coerced to 0.
offset {integer} Number of bytes to skip before starting to fill
buf. Default: 0.
end {integer} Where to stop filling buf (not inclusive). Default:
buf.length.
encoding {string} The encoding for value if value is a string.
Default: 'utf8'.
Returns: {Buffer} A reference to buf.
Fills buf with the specified value. If the offset and end are not given,
the entire buf will be filled:
import { Buffer } from 'node:buffer';
// Fill a `Buffer` with the ASCII character 'h'.
const b = Buffer.allocUnsafe(50).fill('h');
console.log(b.toString());
// Prints: hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
// Fill a buffer with empty string
const c = Buffer.allocUnsafe(5).fill('');
console.log(c.fill(''));
// Prints: <Buffer 00 00 00 00 00>
const { Buffer } = require('node:buffer');
// Fill a `Buffer` with the ASCII character 'h'.
const b = Buffer.allocUnsafe(50).fill('h');
console.log(b.toString());
// Prints: hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
// Fill a buffer with empty string
const c = Buffer.allocUnsafe(5).fill('');
console.log(c.fill(''));
// Prints: <Buffer 00 00 00 00 00>
value is coerced to a uint32 value if it is not a string, Buffer, or
integer. If the resulting integer is greater than 255 (decimal), buf will
be filled with value & 255.
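For example:
import { Buffer } from 'node:buffer';
// 300 & 255 === 44 (0x2c), so the buffer is filled with 0x2c.
console.log(Buffer.alloc(3).fill(300));
// Prints: <Buffer 2c 2c 2c>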
If the final write of a fill() operation falls on a multi-byte character,
then only the bytes of that character that fit into buf are written:
import { Buffer } from 'node:buffer';
// Fill a `Buffer` with a character that takes up two bytes in UTF-8.
console.log(Buffer.allocUnsafe(5).fill('\u0222'));
// Prints: <Buffer c8 a2 c8 a2 c8>
const { Buffer } = require('node:buffer');
// Fill a `Buffer` with a character that takes up two bytes in UTF-8.
console.log(Buffer.allocUnsafe(5).fill('\u0222'));
// Prints: <Buffer c8 a2 c8 a2 c8>
If value contains invalid characters, it is truncated; if no valid fill data
remains, an exception is thrown:
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(5);
console.log(buf.fill('a'));
// Prints: <Buffer 61 61 61 61 61>
console.log(buf.fill('aazz', 'hex'));
// Prints: <Buffer aa aa aa aa aa>
console.log(buf.fill('zz', 'hex'));
// Throws an exception.
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(5);
console.log(buf.fill('a'));
// Prints: <Buffer 61 61 61 61 61>
console.log(buf.fill('aazz', 'hex'));
// Prints: <Buffer aa aa aa aa aa>
console.log(buf.fill('zz', 'hex'));
// Throws an exception.
buf.includes(value[, byteOffset][,
encoding])
value {string|Buffer|Uint8Array|integer} What to search for.
byteOffset {integer} Where to begin searching in buf. If negative,
then offset is calculated from the end of buf. Default: 0.
encoding {string} If value is a string, this is its encoding. Default:
'utf8'.
Returns: {boolean} true if value was found in buf, false
otherwise.
Equivalent to buf.indexOf() !== -1.
import { Buffer } from 'node:buffer';
const buf = Buffer.from('this is a buffer');
console.log(buf.includes('this'));
// Prints: true
console.log(buf.includes('is'));
// Prints: true
console.log(buf.includes(Buffer.from('a buffer')));
// Prints: true
console.log(buf.includes(97));
// Prints: true (97 is the decimal ASCII value for 'a')
console.log(buf.includes(Buffer.from('a buffer example')));
// Prints: false
console.log(buf.includes(Buffer.from('a buffer example').slice(0, 8)));
// Prints: true
console.log(buf.includes('this', 4));
// Prints: false
const { Buffer } = require('node:buffer');
const buf = Buffer.from('this is a buffer');
console.log(buf.includes('this'));
// Prints: true
console.log(buf.includes('is'));
// Prints: true
console.log(buf.includes(Buffer.from('a buffer')));
// Prints: true
console.log(buf.includes(97));
// Prints: true (97 is the decimal ASCII value for 'a')
console.log(buf.includes(Buffer.from('a buffer example')));
// Prints: false
console.log(buf.includes(Buffer.from('a buffer example').slice(0, 8)));
// Prints: true
console.log(buf.includes('this', 4));
// Prints: false
buf.indexOf(value[, byteOffset][,
encoding])
value {string|Buffer|Uint8Array|integer} What to search for.
byteOffset {integer} Where to begin searching in buf. If negative,
then offset is calculated from the end of buf. Default: 0.
encoding {string} If value is a string, this is the encoding used to
determine the binary representation of the string that will be
searched for in buf. Default: 'utf8'.
Returns: {integer} The index of the first occurrence of value in
buf, or -1 if buf does not contain value.
If value is:
a string, value is interpreted according to the character encoding
in encoding.
a Buffer or Uint8Array, value will be used in its entirety. To
compare a partial Buffer, use buf.subarray.
a number, value will be interpreted as an unsigned 8-bit integer
value between 0 and 255.
import { Buffer } from 'node:buffer';
const buf = Buffer.from('this is a buffer');
console.log(buf.indexOf('this'));
// Prints: 0
console.log(buf.indexOf('is'));
// Prints: 2
console.log(buf.indexOf(Buffer.from('a buffer')));
// Prints: 8
console.log(buf.indexOf(97));
// Prints: 8 (97 is the decimal ASCII value for 'a')
console.log(buf.indexOf(Buffer.from('a buffer example')));
// Prints: -1
console.log(buf.indexOf(Buffer.from('a buffer example').slice(0, 8)));
// Prints: 8
const utf16Buffer = Buffer.from('\u039a\u0391\u03a3\u03a3\u0395', 'utf16le');
console.log(utf16Buffer.indexOf('\u03a3', 0, 'utf16le'));
// Prints: 4
console.log(utf16Buffer.indexOf('\u03a3', -4, 'utf16le'));
// Prints: 6
const { Buffer } = require('node:buffer');
const buf = Buffer.from('this is a buffer');
console.log(buf.indexOf('this'));
// Prints: 0
console.log(buf.indexOf('is'));
// Prints: 2
console.log(buf.indexOf(Buffer.from('a buffer')));
// Prints: 8
console.log(buf.indexOf(97));
// Prints: 8 (97 is the decimal ASCII value for 'a')
console.log(buf.indexOf(Buffer.from('a buffer example')));
// Prints: -1
console.log(buf.indexOf(Buffer.from('a buffer example').slice(0, 8)));
// Prints: 8
const utf16Buffer = Buffer.from('\u039a\u0391\u03a3\u03a3\u0395', 'utf16le');
console.log(utf16Buffer.indexOf('\u03a3', 0, 'utf16le'));
// Prints: 4
console.log(utf16Buffer.indexOf('\u03a3', -4, 'utf16le'));
// Prints: 6
If value is not a string, number, or Buffer, this method will throw a
TypeError. If value is a number, it will be coerced to a valid byte value,
an integer between 0 and 255.
If byteOffset is not a number, it will be coerced to a number. If the
result of coercion is NaN or 0, then the entire buffer will be searched.
This behavior matches String.prototype.indexOf().
import { Buffer } from 'node:buffer';
const b = Buffer.from('abcdef');
// Passing a value that's a number, but not a valid byte.
// Prints: 2, equivalent to searching for 99 or 'c'.
console.log(b.indexOf(99.9));
console.log(b.indexOf(256 + 99));
// Passing a byteOffset that coerces to NaN or 0.
// Prints: 1, searching the whole buffer.
console.log(b.indexOf('b', undefined));
console.log(b.indexOf('b', {}));
console.log(b.indexOf('b', null));
console.log(b.indexOf('b', []));
const { Buffer } = require('node:buffer');
const b = Buffer.from('abcdef');
// Passing a value that's a number, but not a valid byte.
// Prints: 2, equivalent to searching for 99 or 'c'.
console.log(b.indexOf(99.9));
console.log(b.indexOf(256 + 99));
// Passing a byteOffset that coerces to NaN or 0.
// Prints: 1, searching the whole buffer.
console.log(b.indexOf('b', undefined));
console.log(b.indexOf('b', {}));
console.log(b.indexOf('b', null));
console.log(b.indexOf('b', []));
If value is an empty string or empty Buffer and byteOffset is less than
buf.length, byteOffset will be returned. If value is empty and
byteOffset is at least buf.length, buf.length will be returned.
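For example, a short sketch of the empty-value behavior:
import { Buffer } from 'node:buffer';
const buf = Buffer.from('abc');
console.log(buf.indexOf(''));
// Prints: 0
console.log(buf.indexOf('', 2));
// Prints: 2
console.log(buf.indexOf('', 10));
// Prints: 3 (equal to buf.length)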
buf.keys()
Returns: {Iterator}
Creates and returns an iterator of buf keys (indices).
import { Buffer } from 'node:buffer';
const buf = Buffer.from('buffer');
for (const key of buf.keys()) {
console.log(key);
}
// Prints:
// 0
// 1
// 2
// 3
// 4
// 5
const { Buffer } = require('node:buffer');
const buf = Buffer.from('buffer');
for (const key of buf.keys()) {
console.log(key);
}
// Prints:
// 0
// 1
// 2
// 3
// 4
// 5
buf.lastIndexOf(value[, byteOffset][,
encoding])
value {string|Buffer|Uint8Array|integer} What to search for.
byteOffset {integer} Where to begin searching in buf. If negative,
then offset is calculated from the end of buf. Default: buf.length
- 1.
encoding {string} If value is a string, this is the encoding used to
determine the binary representation of the string that will be
searched for in buf. Default: 'utf8'.
Returns: {integer} The index of the last occurrence of value in
buf, or -1 if buf does not contain value.
Identical to buf.indexOf(), except the last occurrence of value is found
rather than the first occurrence.
import { Buffer } from 'node:buffer';
const buf = Buffer.from('this buffer is a buffer');
console.log(buf.lastIndexOf('this'));
// Prints: 0
console.log(buf.lastIndexOf('buffer'));
// Prints: 17
console.log(buf.lastIndexOf(Buffer.from('buffer')));
// Prints: 17
console.log(buf.lastIndexOf(97));
// Prints: 15 (97 is the decimal ASCII value for 'a')
console.log(buf.lastIndexOf(Buffer.from('yolo')));
// Prints: -1
console.log(buf.lastIndexOf('buffer', 5));
// Prints: 5
console.log(buf.lastIndexOf('buffer', 4));
// Prints: -1
const utf16Buffer = Buffer.from('\u039a\u0391\u03a3\u03a3\u0395', 'utf16le');
console.log(utf16Buffer.lastIndexOf('\u03a3', undefined, 'utf16le'));
// Prints: 6
console.log(utf16Buffer.lastIndexOf('\u03a3', -5, 'utf16le'));
// Prints: 4
const { Buffer } = require('node:buffer');
const buf = Buffer.from('this buffer is a buffer');
console.log(buf.lastIndexOf('this'));
// Prints: 0
console.log(buf.lastIndexOf('buffer'));
// Prints: 17
console.log(buf.lastIndexOf(Buffer.from('buffer')));
// Prints: 17
console.log(buf.lastIndexOf(97));
// Prints: 15 (97 is the decimal ASCII value for 'a')
console.log(buf.lastIndexOf(Buffer.from('yolo')));
// Prints: -1
console.log(buf.lastIndexOf('buffer', 5));
// Prints: 5
console.log(buf.lastIndexOf('buffer', 4));
// Prints: -1
const utf16Buffer = Buffer.from('\u039a\u0391\u03a3\u03a3\u0395', 'utf16le');
console.log(utf16Buffer.lastIndexOf('\u03a3', undefined, 'utf16le'));
// Prints: 6
console.log(utf16Buffer.lastIndexOf('\u03a3', -5, 'utf16le'));
// Prints: 4
If value is not a string, number, or Buffer, this method will throw a
TypeError. If value is a number, it will be coerced to a valid byte value,
an integer between 0 and 255.
If byteOffset is not a number, it will be coerced to a number. Any
arguments that coerce to NaN, like {} or undefined, will search the
whole buffer. This behavior matches String.prototype.lastIndexOf().
import { Buffer } from 'node:buffer';
const b = Buffer.from('abcdef');
// Passing a value that's a number, but not a valid byte.
// Prints: 2, equivalent to searching for 99 or 'c'.
console.log(b.lastIndexOf(99.9));
console.log(b.lastIndexOf(256 + 99));
// Passing a byteOffset that coerces to NaN.
// Prints: 1, searching the whole buffer.
console.log(b.lastIndexOf('b', undefined));
console.log(b.lastIndexOf('b', {}));
// Passing a byteOffset that coerces to 0.
// Prints: -1, equivalent to passing 0.
console.log(b.lastIndexOf('b', null));
console.log(b.lastIndexOf('b', []));
const { Buffer } = require('node:buffer');
const b = Buffer.from('abcdef');
// Passing a value that's a number, but not a valid byte.
// Prints: 2, equivalent to searching for 99 or 'c'.
console.log(b.lastIndexOf(99.9));
console.log(b.lastIndexOf(256 + 99));
// Passing a byteOffset that coerces to NaN.
// Prints: 1, searching the whole buffer.
console.log(b.lastIndexOf('b', undefined));
console.log(b.lastIndexOf('b', {}));
// Passing a byteOffset that coerces to 0.
// Prints: -1, equivalent to passing 0.
console.log(b.lastIndexOf('b', null));
console.log(b.lastIndexOf('b', []));
If value is an empty string or empty Buffer, byteOffset will be
returned.
buf.length
{integer}
Returns the number of bytes in buf.
import { Buffer } from 'node:buffer';
// Create a `Buffer` and write a shorter string to it using UTF-8.
const buf = Buffer.alloc(1234);
console.log(buf.length);
// Prints: 1234
buf.write('some string', 0, 'utf8');
console.log(buf.length);
// Prints: 1234
const { Buffer } = require('node:buffer');
// Create a `Buffer` and write a shorter string to it using UTF-8.
const buf = Buffer.alloc(1234);
console.log(buf.length);
// Prints: 1234
buf.write('some string', 0, 'utf8');
console.log(buf.length);
// Prints: 1234
buf.parent
Stability: 0 - Deprecated: Use buf.buffer instead.
The buf.parent property is a deprecated alias for buf.buffer.
buf.readBigInt64BE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.
Returns: {bigint}
Reads a signed, big-endian 64-bit integer from buf at the specified
offset.
Integers read from a Buffer are interpreted as two’s complement
signed values.
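For example, a buffer with all bits set reads as -1n:
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff]);
console.log(buf.readBigInt64BE(0));
// Prints: -1n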
buf.readBigInt64LE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.
Returns: {bigint}
Reads a signed, little-endian 64-bit integer from buf at the specified
offset.
Integers read from a Buffer are interpreted as two’s complement
signed values.
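In the little-endian variant, the first byte is the least significant
(illustrative sketch):
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00]);
console.log(buf.readBigInt64LE(0));
// Prints: 1n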
buf.readBigUInt64BE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.
Returns: {bigint}
Reads an unsigned, big-endian 64-bit integer from buf at the
specified offset.
This function is also available under the readBigUint64BE alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff]);
console.log(buf.readBigUInt64BE(0));
// Prints: 4294967295n
const { Buffer } = require('node:buffer');
const buf = Buffer.from([0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff]);
console.log(buf.readBigUInt64BE(0));
// Prints: 4294967295n
buf.readBigUInt64LE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.
Returns: {bigint}
Reads an unsigned, little-endian 64-bit integer from buf at the
specified offset.
This function is also available under the readBigUint64LE alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff]);
console.log(buf.readBigUInt64LE(0));
// Prints: 18446744069414584320n
const { Buffer } = require('node:buffer');
const buf = Buffer.from([0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff]);
console.log(buf.readBigUInt64LE(0));
// Prints: 18446744069414584320n
buf.readDoubleBE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - 8. Default: 0.
Returns: {number}
Reads a 64-bit, big-endian double from buf at the specified offset.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);
console.log(buf.readDoubleBE(0));
// Prints: 8.20788039913184e-304
const { Buffer } = require('node:buffer');
const buf = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);
console.log(buf.readDoubleBE(0));
// Prints: 8.20788039913184e-304
buf.readDoubleLE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - 8. Default: 0.
Returns: {number}
Reads a 64-bit, little-endian double from buf at the specified offset.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);
console.log(buf.readDoubleLE(0));
// Prints: 5.447603722011605e-270
console.log(buf.readDoubleLE(1));
// Throws ERR_OUT_OF_RANGE.
const { Buffer } = require('node:buffer');
const buf = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);
console.log(buf.readDoubleLE(0));
// Prints: 5.447603722011605e-270
console.log(buf.readDoubleLE(1));
// Throws ERR_OUT_OF_RANGE.
buf.readFloatBE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: {number}
Reads a 32-bit, big-endian float from buf at the specified offset.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([1, 2, 3, 4]);
console.log(buf.readFloatBE(0));
// Prints: 2.387939260590663e-38
const { Buffer } = require('node:buffer');
const buf = Buffer.from([1, 2, 3, 4]);
console.log(buf.readFloatBE(0));
// Prints: 2.387939260590663e-38
buf.readFloatLE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: {number}
Reads a 32-bit, little-endian float from buf at the specified offset.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([1, 2, 3, 4]);
console.log(buf.readFloatLE(0));
// Prints: 1.539989614439558e-36
console.log(buf.readFloatLE(1));
// Throws ERR_OUT_OF_RANGE.
const { Buffer } = require('node:buffer');
const buf = Buffer.from([1, 2, 3, 4]);
console.log(buf.readFloatLE(0));
// Prints: 1.539989614439558e-36
console.log(buf.readFloatLE(1));
// Throws ERR_OUT_OF_RANGE.
buf.readInt8([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - 1. Default: 0.
Returns: {integer}
Reads a signed 8-bit integer from buf at the specified offset.
Integers read from a Buffer are interpreted as two’s complement
signed values.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([-1, 5]);
console.log(buf.readInt8(0));
// Prints: -1
console.log(buf.readInt8(1));
// Prints: 5
console.log(buf.readInt8(2));
// Throws ERR_OUT_OF_RANGE.
const { Buffer } = require('node:buffer');
const buf = Buffer.from([-1, 5]);
console.log(buf.readInt8(0));
// Prints: -1
console.log(buf.readInt8(1));
// Prints: 5
console.log(buf.readInt8(2));
// Throws ERR_OUT_OF_RANGE.
buf.readInt16BE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - 2. Default: 0.
Returns: {integer}
Reads a signed, big-endian 16-bit integer from buf at the specified
offset.
Integers read from a Buffer are interpreted as two’s complement
signed values.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0, 5]);
console.log(buf.readInt16BE(0));
// Prints: 5
const { Buffer } = require('node:buffer');
const buf = Buffer.from([0, 5]);
console.log(buf.readInt16BE(0));
// Prints: 5
buf.readInt16LE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - 2. Default: 0.
Returns: {integer}
Reads a signed, little-endian 16-bit integer from buf at the specified
offset.
Integers read from a Buffer are interpreted as two’s complement
signed values.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0, 5]);
console.log(buf.readInt16LE(0));
// Prints: 1280
console.log(buf.readInt16LE(1));
// Throws ERR_OUT_OF_RANGE.
const { Buffer } = require('node:buffer');
const buf = Buffer.from([0, 5]);
console.log(buf.readInt16LE(0));
// Prints: 1280
console.log(buf.readInt16LE(1));
// Throws ERR_OUT_OF_RANGE.
buf.readInt32BE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: {integer}
Reads a signed, big-endian 32-bit integer from buf at the specified
offset.
Integers read from a Buffer are interpreted as two’s complement
signed values.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0, 0, 0, 5]);
console.log(buf.readInt32BE(0));
// Prints: 5
const { Buffer } = require('node:buffer');
const buf = Buffer.from([0, 0, 0, 5]);
console.log(buf.readInt32BE(0));
// Prints: 5
buf.readInt32LE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: {integer}
Reads a signed, little-endian 32-bit integer from buf at the specified
offset.
Integers read from a Buffer are interpreted as two’s complement
signed values.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0, 0, 0, 5]);
console.log(buf.readInt32LE(0));
// Prints: 83886080
console.log(buf.readInt32LE(1));
// Throws ERR_OUT_OF_RANGE.
const { Buffer } = require('node:buffer');
const buf = Buffer.from([0, 0, 0, 5]);
console.log(buf.readInt32LE(0));
// Prints: 83886080
console.log(buf.readInt32LE(1));
// Throws ERR_OUT_OF_RANGE.
buf.readIntBE(offset, byteLength)
offset{integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength {integer} Number of bytes to read. Must satisfy 0 <
byteLength <= 6.
Returns: {integer}
Reads byteLength number of bytes from buf at the specified offset
and interprets the result as a big-endian, two’s complement signed
value supporting up to 48 bits of accuracy.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);
console.log(buf.readIntBE(0, 6).toString(16));
// Prints: 1234567890ab
console.log(buf.readIntBE(1, 6).toString(16));
// Throws ERR_OUT_OF_RANGE.
console.log(buf.readIntBE(1, 0).toString(16));
// Throws ERR_OUT_OF_RANGE.
const { Buffer } = require('node:buffer');
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);
console.log(buf.readIntBE(0, 6).toString(16));
// Prints: 1234567890ab
console.log(buf.readIntBE(1, 6).toString(16));
// Throws ERR_OUT_OF_RANGE.
console.log(buf.readIntBE(1, 0).toString(16));
// Throws ERR_OUT_OF_RANGE.
buf.readIntLE(offset, byteLength)
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength {integer} Number of bytes to read. Must satisfy 0 <
byteLength <= 6.
Returns: {integer}
Reads byteLength number of bytes from buf at the specified offset
and interprets the result as a little-endian, two’s complement signed
value supporting up to 48 bits of accuracy.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);
console.log(buf.readIntLE(0, 6).toString(16));
// Prints: -546f87a9cbee
const { Buffer } = require('node:buffer');
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);
console.log(buf.readIntLE(0, 6).toString(16));
// Prints: -546f87a9cbee
buf.readUInt8([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - 1. Default: 0.
Returns: {integer}
Reads an unsigned 8-bit integer from buf at the specified offset.
This function is also available under the readUint8 alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([1, -2]);
console.log(buf.readUInt8(0));
// Prints: 1
console.log(buf.readUInt8(1));
// Prints: 254
console.log(buf.readUInt8(2));
// Throws ERR_OUT_OF_RANGE.
const { Buffer } = require('node:buffer');
const buf = Buffer.from([1, -2]);
console.log(buf.readUInt8(0));
// Prints: 1
console.log(buf.readUInt8(1));
// Prints: 254
console.log(buf.readUInt8(2));
// Throws ERR_OUT_OF_RANGE.
buf.readUInt16BE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - 2. Default: 0.
Returns: {integer}
Reads an unsigned, big-endian 16-bit integer from buf at the
specified offset.
This function is also available under the readUint16BE alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0x12, 0x34, 0x56]);
console.log(buf.readUInt16BE(0).toString(16));
// Prints: 1234
console.log(buf.readUInt16BE(1).toString(16));
// Prints: 3456
const { Buffer } = require('node:buffer');
const buf = Buffer.from([0x12, 0x34, 0x56]);
console.log(buf.readUInt16BE(0).toString(16));
// Prints: 1234
console.log(buf.readUInt16BE(1).toString(16));
// Prints: 3456
buf.readUInt16LE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - 2. Default: 0.
Returns: {integer}
Reads an unsigned, little-endian 16-bit integer from buf at the
specified offset.
This function is also available under the readUint16LE alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0x12, 0x34, 0x56]);
console.log(buf.readUInt16LE(0).toString(16));
// Prints: 3412
console.log(buf.readUInt16LE(1).toString(16));
// Prints: 5634
console.log(buf.readUInt16LE(2).toString(16));
// Throws ERR_OUT_OF_RANGE.
const { Buffer } = require('node:buffer');
const buf = Buffer.from([0x12, 0x34, 0x56]);
console.log(buf.readUInt16LE(0).toString(16));
// Prints: 3412
console.log(buf.readUInt16LE(1).toString(16));
// Prints: 5634
console.log(buf.readUInt16LE(2).toString(16));
// Throws ERR_OUT_OF_RANGE.
buf.readUInt32BE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: {integer}
Reads an unsigned, big-endian 32-bit integer from buf at the
specified offset.
This function is also available under the readUint32BE alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]);
console.log(buf.readUInt32BE(0).toString(16));
// Prints: 12345678
const { Buffer } = require('node:buffer');
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]);
console.log(buf.readUInt32BE(0).toString(16));
// Prints: 12345678
buf.readUInt32LE([offset])
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: {integer}
Reads an unsigned, little-endian 32-bit integer from buf at the
specified offset.
This function is also available under the readUint32LE alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]);
console.log(buf.readUInt32LE(0).toString(16));
// Prints: 78563412
console.log(buf.readUInt32LE(1).toString(16));
// Throws ERR_OUT_OF_RANGE.
const { Buffer } = require('node:buffer');
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]);
console.log(buf.readUInt32LE(0).toString(16));
// Prints: 78563412
console.log(buf.readUInt32LE(1).toString(16));
// Throws ERR_OUT_OF_RANGE.
buf.readUIntBE(offset, byteLength)
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength {integer} Number of bytes to read. Must satisfy 0 <
byteLength <= 6.
Returns: {integer}
Reads byteLength number of bytes from buf at the specified offset
and interprets the result as an unsigned big-endian integer
supporting up to 48 bits of accuracy.
This function is also available under the readUintBE alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);
console.log(buf.readUIntBE(0, 6).toString(16));
// Prints: 1234567890ab
console.log(buf.readUIntBE(1, 6).toString(16));
// Throws ERR_OUT_OF_RANGE.
const { Buffer } = require('node:buffer');
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);
console.log(buf.readUIntBE(0, 6).toString(16));
// Prints: 1234567890ab
console.log(buf.readUIntBE(1, 6).toString(16));
// Throws ERR_OUT_OF_RANGE.
buf.readUIntLE(offset, byteLength)
offset {integer} Number of bytes to skip before starting to read.
Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength {integer} Number of bytes to read. Must satisfy 0 <
byteLength <= 6.
Returns: {integer}
Reads byteLength number of bytes from buf at the specified offset
and interprets the result as an unsigned, little-endian integer
supporting up to 48 bits of accuracy.
This function is also available under the readUintLE alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);
console.log(buf.readUIntLE(0, 6).toString(16));
// Prints: ab9078563412
const { Buffer } = require('node:buffer');
const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);
console.log(buf.readUIntLE(0, 6).toString(16));
// Prints: ab9078563412
buf.subarray([start[, end]])
start {integer} Where the new Buffer will start. Default: 0.
end {integer} Where the new Buffer will end (not inclusive).
Default: buf.length.
Returns: {Buffer}
Returns a new Buffer that references the same memory as the
original, but offset and cropped by the start and end indices.
Specifying end greater than buf.length will return the same result as
that of end equal to buf.length.
This method is inherited from TypedArray.prototype.subarray().
Modifying the new Buffer slice will modify the memory in the
original Buffer because the allocated memory of the two objects
overlap.
import { Buffer } from 'node:buffer';
// Create a `Buffer` with the ASCII alphabet, take a slice, and modify one byte
// from the original `Buffer`.
const buf1 = Buffer.allocUnsafe(26);
for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf1[i] = i + 97;
}
const buf2 = buf1.subarray(0, 3);
console.log(buf2.toString('ascii', 0, buf2.length));
// Prints: abc
buf1[0] = 33;
console.log(buf2.toString('ascii', 0, buf2.length));
// Prints: !bc
const { Buffer } = require('node:buffer');
// Create a `Buffer` with the ASCII alphabet, take a slice, and modify one byte
// from the original `Buffer`.
const buf1 = Buffer.allocUnsafe(26);
for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf1[i] = i + 97;
}
const buf2 = buf1.subarray(0, 3);
console.log(buf2.toString('ascii', 0, buf2.length));
// Prints: abc
buf1[0] = 33;
console.log(buf2.toString('ascii', 0, buf2.length));
// Prints: !bc
Specifying negative indexes causes the slice to be generated relative
to the end of buf rather than the beginning.
import { Buffer } from 'node:buffer';
const buf = Buffer.from('buffer');
console.log(buf.subarray(-6, -1).toString());
// Prints: buffe
// (Equivalent to buf.subarray(0, 5).)
console.log(buf.subarray(-6, -2).toString());
// Prints: buff
// (Equivalent to buf.subarray(0, 4).)
console.log(buf.subarray(-5, -2).toString());
// Prints: uff
// (Equivalent to buf.subarray(1, 4).)
const { Buffer } = require('node:buffer');
const buf = Buffer.from('buffer');
console.log(buf.subarray(-6, -1).toString());
// Prints: buffe
// (Equivalent to buf.subarray(0, 5).)
console.log(buf.subarray(-6, -2).toString());
// Prints: buff
// (Equivalent to buf.subarray(0, 4).)
console.log(buf.subarray(-5, -2).toString());
// Prints: uff
// (Equivalent to buf.subarray(1, 4).)
buf.slice([start[, end]])
start {integer} Where the new Buffer will start. Default: 0.
end {integer} Where the new Buffer will end (not inclusive).
Default: buf.length.
Returns: {Buffer}
Stability: 0 - Deprecated: Use buf.subarray instead.
Returns a new Buffer that references the same memory as the
original, but offset and cropped by the start and end indices.
This method is not compatible with Uint8Array.prototype.slice(),
which is a superclass of Buffer. To copy the slice, use
Uint8Array.prototype.slice().
import { Buffer } from 'node:buffer';
const buf = Buffer.from('buffer');
const copiedBuf = Uint8Array.prototype.slice.call(buf);
copiedBuf[0]++;
console.log(copiedBuf.toString());
// Prints: cuffer
console.log(buf.toString());
// Prints: buffer
// With buf.slice(), the original buffer is modified.
const notReallyCopiedBuf = buf.slice();
notReallyCopiedBuf[0]++;
console.log(notReallyCopiedBuf.toString());
// Prints: cuffer
console.log(buf.toString());
// Also prints: cuffer (!)
const { Buffer } = require('node:buffer');
const buf = Buffer.from('buffer');
const copiedBuf = Uint8Array.prototype.slice.call(buf);
copiedBuf[0]++;
console.log(copiedBuf.toString());
// Prints: cuffer
console.log(buf.toString());
// Prints: buffer
// With buf.slice(), the original buffer is modified.
const notReallyCopiedBuf = buf.slice();
notReallyCopiedBuf[0]++;
console.log(notReallyCopiedBuf.toString());
// Prints: cuffer
console.log(buf.toString());
// Also prints: cuffer (!)
buf.swap16()
Returns: {Buffer} A reference to buf.
Interprets buf as an array of unsigned 16-bit integers and swaps the
byte order in-place. Throws ERR_INVALID_BUFFER_SIZE if buf.length is
not a multiple of 2.
import { Buffer } from 'node:buffer';
const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);
console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>
buf1.swap16();
console.log(buf1);
// Prints: <Buffer 02 01 04 03 06 05 08 07>
const buf2 = Buffer.from([0x1, 0x2, 0x3]);
buf2.swap16();
// Throws ERR_INVALID_BUFFER_SIZE.
const { Buffer } = require('node:buffer');
const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);
console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>
buf1.swap16();
console.log(buf1);
// Prints: <Buffer 02 01 04 03 06 05 08 07>
const buf2 = Buffer.from([0x1, 0x2, 0x3]);
buf2.swap16();
// Throws ERR_INVALID_BUFFER_SIZE.
One convenient use of buf.swap16() is to perform a fast in-place
conversion between UTF-16 little-endian and UTF-16 big-endian:
import { Buffer } from 'node:buffer';
const buf = Buffer.from('This is little-endian UTF-16', 'utf16le');
buf.swap16(); // Convert to big-endian UTF-16 text.
const { Buffer } = require('node:buffer');
const buf = Buffer.from('This is little-endian UTF-16', 'utf16le');
buf.swap16(); // Convert to big-endian UTF-16 text.
buf.swap32()
Returns: {Buffer} A reference to buf.
Interprets buf as an array of unsigned 32-bit integers and swaps the
byte order in-place. Throws ERR_INVALID_BUFFER_SIZE if buf.length is
not a multiple of 4.
import { Buffer } from 'node:buffer';
const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);
console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>
buf1.swap32();
console.log(buf1);
// Prints: <Buffer 04 03 02 01 08 07 06 05>
const buf2 = Buffer.from([0x1, 0x2, 0x3]);
buf2.swap32();
// Throws ERR_INVALID_BUFFER_SIZE.
const { Buffer } = require('node:buffer');
const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);
console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>
buf1.swap32();
console.log(buf1);
// Prints: <Buffer 04 03 02 01 08 07 06 05>
const buf2 = Buffer.from([0x1, 0x2, 0x3]);
buf2.swap32();
// Throws ERR_INVALID_BUFFER_SIZE.
buf.swap64()
Returns: {Buffer} A reference to buf.
Interprets buf as an array of 64-bit numbers and swaps byte order in-
place. Throws ERR_INVALID_BUFFER_SIZE if buf.length is not a multiple
of 8.
import { Buffer } from 'node:buffer';
const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);
console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>
buf1.swap64();
console.log(buf1);
// Prints: <Buffer 08 07 06 05 04 03 02 01>
const buf2 = Buffer.from([0x1, 0x2, 0x3]);
buf2.swap64();
// Throws ERR_INVALID_BUFFER_SIZE.
const { Buffer } = require('node:buffer');
const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);
console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>
buf1.swap64();
console.log(buf1);
// Prints: <Buffer 08 07 06 05 04 03 02 01>
const buf2 = Buffer.from([0x1, 0x2, 0x3]);
buf2.swap64();
// Throws ERR_INVALID_BUFFER_SIZE.
buf.toJSON()
Returns: {Object}
Returns a JSON representation of buf. JSON.stringify() implicitly
calls this function when stringifying a Buffer instance.
Buffer.from() accepts objects in the format returned from this
method. In particular, Buffer.from(buf.toJSON()) works like
Buffer.from(buf).
import { Buffer } from 'node:buffer';
const buf = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5]);
const json = JSON.stringify(buf);
console.log(json);
// Prints: {"type":"Buffer","data":[1,2,3,4,5]}
const copy = JSON.parse(json, (key, value) => {
  return value && value.type === 'Buffer' ?
    Buffer.from(value) :
    value;
});
console.log(copy);
// Prints: <Buffer 01 02 03 04 05>
const { Buffer } = require('node:buffer');
const buf = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5]);
const json = JSON.stringify(buf);
console.log(json);
// Prints: {"type":"Buffer","data":[1,2,3,4,5]}
const copy = JSON.parse(json, (key, value) => {
  return value && value.type === 'Buffer' ?
    Buffer.from(value) :
    value;
});
console.log(copy);
// Prints: <Buffer 01 02 03 04 05>
buf.toString([encoding[, start[, end]]])
encoding {string} The character encoding to use. Default: 'utf8'.
start {integer} The byte offset to start decoding at. Default: 0.
end {integer} The byte offset to stop decoding at (not inclusive).
Default: buf.length.
Returns: {string}
Decodes buf to a string according to the specified character encoding
in encoding. start and end may be passed to decode only a subset of
buf.
If encoding is 'utf8' and a byte sequence in the input is not valid
UTF-8, then each invalid byte is replaced with the replacement
character U+FFFD.
The maximum length of a string instance (in UTF-16 code units) is
available as buffer.constants.MAX_STRING_LENGTH.
import { Buffer } from 'node:buffer';
const buf1 = Buffer.allocUnsafe(26);
for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf1[i] = i + 97;
}
console.log(buf1.toString('utf8'));
// Prints: abcdefghijklmnopqrstuvwxyz
console.log(buf1.toString('utf8', 0, 5));
// Prints: abcde
const buf2 = Buffer.from('tést');
console.log(buf2.toString('hex'));
// Prints: 74c3a97374
console.log(buf2.toString('utf8', 0, 3));
// Prints: té
console.log(buf2.toString(undefined, 0, 3));
// Prints: té
const { Buffer } = require('node:buffer');
const buf1 = Buffer.allocUnsafe(26);
for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf1[i] = i + 97;
}
console.log(buf1.toString('utf8'));
// Prints: abcdefghijklmnopqrstuvwxyz
console.log(buf1.toString('utf8', 0, 5));
// Prints: abcde
const buf2 = Buffer.from('tést');
console.log(buf2.toString('hex'));
// Prints: 74c3a97374
console.log(buf2.toString('utf8', 0, 3));
// Prints: té
console.log(buf2.toString(undefined, 0, 3));
// Prints: té
buf.values()
Returns: {Iterator}
Creates and returns an iterator for buf values (bytes). This function is
called automatically when a Buffer is used in a for..of statement.
import { Buffer } from 'node:buffer';
const buf = Buffer.from('buffer');
for (const value of buf.values()) {
  console.log(value);
}
// Prints:
// 98
// 117
// 102
// 102
// 101
// 114
for (const value of buf) {
  console.log(value);
}
// Prints:
// 98
// 117
// 102
// 102
// 101
// 114
const { Buffer } = require('node:buffer');
const buf = Buffer.from('buffer');
for (const value of buf.values()) {
  console.log(value);
}
// Prints:
// 98
// 117
// 102
// 102
// 101
// 114
for (const value of buf) {
  console.log(value);
}
// Prints:
// 98
// 117
// 102
// 102
// 101
// 114
buf.write(string[, offset[, length]][,
encoding])
string {string} String to write to buf.
offset {integer} Number of bytes to skip before starting to write
string. Default: 0.
length {integer} Maximum number of bytes to write (written
bytes will not exceed buf.length - offset). Default: buf.length -
offset.
encoding {string} The character encoding of string. Default:
'utf8'.
Returns: {integer} Number of bytes written.
Writes string to buf at offset according to the character encoding in
encoding. The length parameter is the number of bytes to write. If buf
did not contain enough space to fit the entire string, only part of
string will be written. However, partially encoded characters will not
be written.
import { Buffer } from 'node:buffer';
const buf = Buffer.alloc(256);
const len = buf.write('\u00bd + \u00bc = \u00be', 0);
console.log(`${len} bytes: ${buf.toString('utf8', 0, len)}`);
// Prints: 12 bytes: ½ + ¼ = ¾
const buffer = Buffer.alloc(10);
const length = buffer.write('abcd', 8);
console.log(`${length} bytes: ${buffer.toString('utf8', 8, 10)}`);
// Prints: 2 bytes : ab
const { Buffer } = require('node:buffer');
const buf = Buffer.alloc(256);
const len = buf.write('\u00bd + \u00bc = \u00be', 0);
console.log(`${len} bytes: ${buf.toString('utf8', 0, len)}`);
// Prints: 12 bytes: ½ + ¼ = ¾
const buffer = Buffer.alloc(10);
const length = buffer.write('abcd', 8);
console.log(`${length} bytes: ${buffer.toString('utf8', 8, 10)}`);
// Prints: 2 bytes : ab
buf.writeBigInt64BE(value[, offset])
value {bigint} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian.
value is interpreted and written as a two’s complement signed
integer.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(8);
buf.writeBigInt64BE(0x0102030405060708n, 0);
console.log(buf);
// Prints: <Buffer 01 02 03 04 05 06 07 08>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(8);
buf.writeBigInt64BE(0x0102030405060708n, 0);
console.log(buf);
// Prints: <Buffer 01 02 03 04 05 06 07 08>
buf.writeBigInt64LE(value[, offset])
value {bigint} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian.
value is interpreted and written as a two’s complement signed
integer.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(8);
buf.writeBigInt64LE(0x0102030405060708n, 0);
console.log(buf);
// Prints: <Buffer 08 07 06 05 04 03 02 01>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(8);
buf.writeBigInt64LE(0x0102030405060708n, 0);
console.log(buf);
// Prints: <Buffer 08 07 06 05 04 03 02 01>
buf.writeBigUInt64BE(value[, offset])
value {bigint} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian.
This function is also available under the writeBigUint64BE alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(8);
buf.writeBigUInt64BE(0xdecafafecacefaden, 0);
console.log(buf);
// Prints: <Buffer de ca fa fe ca ce fa de>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(8);
buf.writeBigUInt64BE(0xdecafafecacefaden, 0);
console.log(buf);
// Prints: <Buffer de ca fa fe ca ce fa de>
buf.writeBigUInt64LE(value[, offset])
value {bigint} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy: 0 <= offset <= buf.length - 8. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(8);
buf.writeBigUInt64LE(0xdecafafecacefaden, 0);
console.log(buf);
// Prints: <Buffer de fa ce ca fe fa ca de>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(8);
buf.writeBigUInt64LE(0xdecafafecacefaden, 0);
console.log(buf);
// Prints: <Buffer de fa ce ca fe fa ca de>
This function is also available under the writeBigUint64LE alias.
buf.writeDoubleBE(value[, offset])
value {number} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - 8. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian. The value
must be a JavaScript number. Behavior is undefined when value is
anything other than a JavaScript number.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(8);
buf.writeDoubleBE(123.456, 0);
console.log(buf);
// Prints: <Buffer 40 5e dd 2f 1a 9f be 77>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(8);
buf.writeDoubleBE(123.456, 0);
console.log(buf);
// Prints: <Buffer 40 5e dd 2f 1a 9f be 77>
buf.writeDoubleLE(value[, offset])
value {number} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - 8. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian. The value
must be a JavaScript number. Behavior is undefined when value is
anything other than a JavaScript number.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(8);
buf.writeDoubleLE(123.456, 0);
console.log(buf);
// Prints: <Buffer 77 be 9f 1a 2f dd 5e 40>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(8);
buf.writeDoubleLE(123.456, 0);
console.log(buf);
// Prints: <Buffer 77 be 9f 1a 2f dd 5e 40>
buf.writeFloatBE(value[, offset])
value {number} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian. Behavior is
undefined when value is anything other than a JavaScript number.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(4);
buf.writeFloatBE(0xcafebabe, 0);
console.log(buf);
// Prints: <Buffer 4f 4a fe bb>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(4);
buf.writeFloatBE(0xcafebabe, 0);
console.log(buf);
// Prints: <Buffer 4f 4a fe bb>
buf.writeFloatLE(value[, offset])
value {number} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian. Behavior is
undefined when value is anything other than a JavaScript number.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(4);
buf.writeFloatLE(0xcafebabe, 0);
console.log(buf);
// Prints: <Buffer bb fe 4a 4f>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(4);
buf.writeFloatLE(0xcafebabe, 0);
console.log(buf);
// Prints: <Buffer bb fe 4a 4f>
buf.writeInt8(value[, offset])
value {integer} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - 1. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset. value must be a valid
signed 8-bit integer. Behavior is undefined when value is anything
other than a signed 8-bit integer.
value is interpreted and written as a two’s complement signed
integer.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(2);
buf.writeInt8(2, 0);
buf.writeInt8(-2, 1);
console.log(buf);
// Prints: <Buffer 02 fe>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(2);
buf.writeInt8(2, 0);
buf.writeInt8(-2, 1);
console.log(buf);
// Prints: <Buffer 02 fe>
buf.writeInt16BE(value[, offset])
value {integer} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - 2. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian. The value
must be a valid signed 16-bit integer. Behavior is undefined when
value is anything other than a signed 16-bit integer.
The value is interpreted and written as a two’s complement signed
integer.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(2);
buf.writeInt16BE(0x0102, 0);
console.log(buf);
// Prints: <Buffer 01 02>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(2);
buf.writeInt16BE(0x0102, 0);
console.log(buf);
// Prints: <Buffer 01 02>
buf.writeInt16LE(value[, offset])
value {integer} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - 2. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian. The value
must be a valid signed 16-bit integer. Behavior is undefined when
value is anything other than a signed 16-bit integer.
The value is interpreted and written as a two’s complement signed
integer.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(2);
buf.writeInt16LE(0x0304, 0);
console.log(buf);
// Prints: <Buffer 04 03>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(2);
buf.writeInt16LE(0x0304, 0);
console.log(buf);
// Prints: <Buffer 04 03>
buf.writeInt32BE(value[, offset])
value {integer} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian. The value
must be a valid signed 32-bit integer. Behavior is undefined when
value is anything other than a signed 32-bit integer.
The value is interpreted and written as a two’s complement signed
integer.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(4);
buf.writeInt32BE(0x01020304, 0);
console.log(buf);
// Prints: <Buffer 01 02 03 04>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(4);
buf.writeInt32BE(0x01020304, 0);
console.log(buf);
// Prints: <Buffer 01 02 03 04>
buf.writeInt32LE(value[, offset])
value {integer} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian. The value
must be a valid signed 32-bit integer. Behavior is undefined when
value is anything other than a signed 32-bit integer.
The value is interpreted and written as a two’s complement signed
integer.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(4);
buf.writeInt32LE(0x05060708, 0);
console.log(buf);
// Prints: <Buffer 08 07 06 05>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(4);
buf.writeInt32LE(0x05060708, 0);
console.log(buf);
// Prints: <Buffer 08 07 06 05>
buf.writeIntBE(value, offset, byteLength)
value {integer} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength {integer} Number of bytes to write. Must satisfy 0 <
byteLength <= 6.
Returns: {integer} offset plus the number of bytes written.
Writes byteLength bytes of value to buf at the specified offset as big-
endian. Supports up to 48 bits of accuracy. Behavior is undefined
when value is anything other than a signed integer.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(6);
buf.writeIntBE(0x1234567890ab, 0, 6);
console.log(buf);
// Prints: <Buffer 12 34 56 78 90 ab>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(6);
buf.writeIntBE(0x1234567890ab, 0, 6);
console.log(buf);
// Prints: <Buffer 12 34 56 78 90 ab>
buf.writeIntLE(value, offset, byteLength)
value {integer} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength {integer} Number of bytes to write. Must satisfy 0 <
byteLength <= 6.
Returns: {integer} offset plus the number of bytes written.
Writes byteLength bytes of value to buf at the specified offset as little-
endian. Supports up to 48 bits of accuracy. Behavior is undefined
when value is anything other than a signed integer.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(6);
buf.writeIntLE(0x1234567890ab, 0, 6);
console.log(buf);
// Prints: <Buffer ab 90 78 56 34 12>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(6);
buf.writeIntLE(0x1234567890ab, 0, 6);
console.log(buf);
// Prints: <Buffer ab 90 78 56 34 12>
buf.writeUInt8(value[, offset])
value {integer} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - 1. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset. value must be a valid
unsigned 8-bit integer. Behavior is undefined when value is anything
other than an unsigned 8-bit integer.
This function is also available under the writeUint8 alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(4);
buf.writeUInt8(0x3, 0);
buf.writeUInt8(0x4, 1);
buf.writeUInt8(0x23, 2);
buf.writeUInt8(0x42, 3);
console.log(buf);
// Prints: <Buffer 03 04 23 42>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(4);
buf.writeUInt8(0x3, 0);
buf.writeUInt8(0x4, 1);
buf.writeUInt8(0x23, 2);
buf.writeUInt8(0x42, 3);
console.log(buf);
// Prints: <Buffer 03 04 23 42>
buf.writeUInt16BE(value[, offset])
value {integer} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - 2. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian. The value
must be a valid unsigned 16-bit integer. Behavior is undefined when
value is anything other than an unsigned 16-bit integer.
This function is also available under the writeUint16BE alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(4);
buf.writeUInt16BE(0xdead, 0);
buf.writeUInt16BE(0xbeef, 2);
console.log(buf);
// Prints: <Buffer de ad be ef>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(4);
buf.writeUInt16BE(0xdead, 0);
buf.writeUInt16BE(0xbeef, 2);
console.log(buf);
// Prints: <Buffer de ad be ef>
buf.writeUInt16LE(value[, offset])
value {integer} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - 2. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian. The value
must be a valid unsigned 16-bit integer. Behavior is undefined when
value is anything other than an unsigned 16-bit integer.
This function is also available under the writeUint16LE alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(4);
buf.writeUInt16LE(0xdead, 0);
buf.writeUInt16LE(0xbeef, 2);
console.log(buf);
// Prints: <Buffer ad de ef be>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(4);
buf.writeUInt16LE(0xdead, 0);
buf.writeUInt16LE(0xbeef, 2);
console.log(buf);
// Prints: <Buffer ad de ef be>
buf.writeUInt32BE(value[, offset])
value {integer} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as big-endian. The value
must be a valid unsigned 32-bit integer. Behavior is undefined when
value is anything other than an unsigned 32-bit integer.
This function is also available under the writeUint32BE alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(4);
buf.writeUInt32BE(0xfeedface, 0);
console.log(buf);
// Prints: <Buffer fe ed fa ce>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(4);
buf.writeUInt32BE(0xfeedface, 0);
console.log(buf);
// Prints: <Buffer fe ed fa ce>
buf.writeUInt32LE(value[, offset])
value {integer} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - 4. Default: 0.
Returns: {integer} offset plus the number of bytes written.
Writes value to buf at the specified offset as little-endian. The value
must be a valid unsigned 32-bit integer. Behavior is undefined when
value is anything other than an unsigned 32-bit integer.
This function is also available under the writeUint32LE alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(4);
buf.writeUInt32LE(0xfeedface, 0);
console.log(buf);
// Prints: <Buffer ce fa ed fe>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(4);
buf.writeUInt32LE(0xfeedface, 0);
console.log(buf);
// Prints: <Buffer ce fa ed fe>
buf.writeUIntBE(value, offset, byteLength)
value {integer} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength {integer} Number of bytes to write. Must satisfy 0 <
byteLength <= 6.
Returns: {integer} offset plus the number of bytes written.
Writes byteLength bytes of value to buf at the specified offset as big-
endian. Supports up to 48 bits of accuracy. Behavior is undefined
when value is anything other than an unsigned integer.
This function is also available under the writeUintBE alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(6);
buf.writeUIntBE(0x1234567890ab, 0, 6);
console.log(buf);
// Prints: <Buffer 12 34 56 78 90 ab>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(6);
buf.writeUIntBE(0x1234567890ab, 0, 6);
console.log(buf);
// Prints: <Buffer 12 34 56 78 90 ab>
buf.writeUIntLE(value, offset, byteLength)
value {integer} Number to be written to buf.
offset {integer} Number of bytes to skip before starting to write.
Must satisfy 0 <= offset <= buf.length - byteLength.
byteLength {integer} Number of bytes to write. Must satisfy 0 <
byteLength <= 6.
Returns: {integer} offset plus the number of bytes written.
Writes byteLength bytes of value to buf at the specified offset as little-
endian. Supports up to 48 bits of accuracy. Behavior is undefined
when value is anything other than an unsigned integer.
This function is also available under the writeUintLE alias.
import { Buffer } from 'node:buffer';
const buf = Buffer.allocUnsafe(6);
buf.writeUIntLE(0x1234567890ab, 0, 6);
console.log(buf);
// Prints: <Buffer ab 90 78 56 34 12>
const { Buffer } = require('node:buffer');
const buf = Buffer.allocUnsafe(6);
buf.writeUIntLE(0x1234567890ab, 0, 6);
console.log(buf);
// Prints: <Buffer ab 90 78 56 34 12>
new Buffer(array)
Stability: 0 - Deprecated: Use Buffer.from(array) instead.
array {integer[]} An array of bytes to copy from.
See Buffer.from(array).
new Buffer(arrayBuffer[, byteOffset[,
length]])
Stability: 0 - Deprecated: Use Buffer.from(arrayBuffer[,
byteOffset[, length]]) instead.
arrayBuffer {ArrayBuffer|SharedArrayBuffer} An ArrayBuffer,
SharedArrayBuffer or the .buffer property of a TypedArray.
byteOffset {integer} Index of first byte to expose. Default: 0.
length {integer} Number of bytes to expose. Default:
arrayBuffer.byteLength - byteOffset.
See Buffer.from(arrayBuffer[, byteOffset[, length]]).
new Buffer(buffer)
Stability: 0 - Deprecated: Use Buffer.from(buffer) instead.
buffer {Buffer|Uint8Array} An existing Buffer or Uint8Array from
which to copy data.
See Buffer.from(buffer).
new Buffer(size)
Stability: 0 - Deprecated: Use Buffer.alloc() instead (also see
Buffer.allocUnsafe()).
size {integer} The desired length of the new Buffer.
See Buffer.alloc() and Buffer.allocUnsafe(). This variant of the
constructor is equivalent to Buffer.alloc().
new Buffer(string[, encoding])
Stability: 0 - Deprecated: Use Buffer.from(string[, encoding])
instead.
string {string} String to encode.
encoding {string} The encoding of string. Default: 'utf8'.
See Buffer.from(string[, encoding]).
Class: File
Extends: {Blob}
A File provides information about files.
new buffer.File(sources, fileName[,
options])
sources
{string[]|ArrayBuffer[]|TypedArray[]|DataView[]|Blob[]|File[]}
An array of string, {ArrayBuffer}, {TypedArray}, {DataView},
{File}, or {Blob} objects, or any mix of such objects, that will be
stored within the File.
fileName {string} The name of the file.
options {Object}
endings {string} One of either 'transparent' or 'native'.
When set to 'native', line endings in string source parts will
be converted to the platform native line-ending as specified
by require('node:os').EOL.
type {string} The File content-type.
lastModified {number} The last modified date of the file.
Default: Date.now().
file.name
Type: {string}
The name of the File.
file.lastModified
Type: {number}
The last modified date of the File.
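For example (an illustrative sketch; the printed lastModified
timestamp will vary):
import { File } from 'node:buffer';
const file = new File(['hello world'], 'hello.txt', { type: 'text/plain' });
console.log(file.name);
// Prints: hello.txt
console.log(file.size);
// Prints: 11
console.log(file.lastModified);
// Prints a timestamp such as 1700000000000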
node:buffer module APIs
While the Buffer object is available as a global, there are additional
Buffer-related APIs that are available only via the node:buffer module,
accessed using require('node:buffer').
buffer.atob(data)
Stability: 3 - Legacy. Use Buffer.from(data, 'base64') instead.
data {any} The Base64-encoded input string.
Decodes a string of Base64-encoded data into bytes, and encodes
those bytes into a string using Latin-1 (ISO-8859-1).
The data may be any JavaScript value that can be coerced into a
string.
This function is only provided for compatibility with legacy
web platform APIs and should never be used in new code,
because they use strings to represent binary data and
predate the introduction of typed arrays in JavaScript. For
code running using Node.js APIs, converting between
base64-encoded strings and binary data should be
performed using Buffer.from(str, 'base64') and
buf.toString('base64').
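For example (illustrative sketch):
import { atob } from 'node:buffer';
console.log(atob('aGVsbG8='));
// Prints: hello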
buffer.btoa(data)
Stability: 3 - Legacy. Use buf.toString('base64') instead.
data {any} An ASCII (Latin1) string.
Decodes a string into bytes using Latin-1 (ISO-8859-1), and encodes
those bytes into a string using Base64.
The data may be any JavaScript value that can be coerced into a
string.
This function is only provided for compatibility with legacy
web platform APIs and should never be used in new code,
because they use strings to represent binary data and
predate the introduction of typed arrays in JavaScript. For
code running using Node.js APIs, converting between
base64-encoded strings and binary data should be
performed using Buffer.from(str, 'base64') and
buf.toString('base64').
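For example (illustrative sketch):
import { btoa } from 'node:buffer';
console.log(btoa('hello'));
// Prints: aGVsbG8=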
buffer.isAscii(input)
input {Buffer | ArrayBuffer | TypedArray} The input to validate.
Returns: {boolean}
This function returns true if input contains only valid ASCII-encoded
data, including the case in which input is empty.
Throws if the input is a detached array buffer.
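For example (illustrative sketch):
import { Buffer, isAscii } from 'node:buffer';
console.log(isAscii(Buffer.from('hello')));
// Prints: true
console.log(isAscii(Buffer.from('héllo')));
// Prints: false ('é' is encoded as two non-ASCII bytes in UTF-8)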
buffer.isUtf8(input)
input {Buffer | ArrayBuffer | TypedArray} The input to validate.
Returns: {boolean}
This function returns true if input contains only valid UTF-8-
encoded data, including the case in which input is empty.
Throws if the input is a detached array buffer.
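For example (illustrative sketch):
import { Buffer, isUtf8 } from 'node:buffer';
console.log(isUtf8(Buffer.from('héllo')));
// Prints: true
console.log(isUtf8(Buffer.from([0xff])));
// Prints: false (0xff never appears in valid UTF-8)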
buffer.INSPECT_MAX_BYTES
{integer} Default: 50
Returns the maximum number of bytes that will be returned when
buf.inspect() is called. This can be overridden by user modules. See
util.inspect() for more details on buf.inspect() behavior.
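For example, lowering the limit truncates the inspected output (an
illustrative sketch; the exact elision wording may vary between
Node.js versions):
const buffer = require('node:buffer');
buffer.INSPECT_MAX_BYTES = 4;
console.log(Buffer.alloc(8));
// Prints something like: <Buffer 00 00 00 00 ... 4 more bytes>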
buffer.kMaxLength
{integer} The largest size allowed for a single Buffer instance.
An alias for buffer.constants.MAX_LENGTH.
buffer.kStringMaxLength
{integer} The largest length allowed for a single string instance.
An alias for buffer.constants.MAX_STRING_LENGTH.
buffer.resolveObjectURL(id)
Stability: 1 - Experimental
id {string} A 'blob:nodedata:...' URL string returned by a prior
call to URL.createObjectURL().
Returns: {Blob}
Resolves a 'blob:nodedata:...' URL to the associated {Blob} object
registered using a prior call to URL.createObjectURL().
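A minimal sketch:
import { Blob, resolveObjectURL } from 'node:buffer';
const blob = new Blob(['hello']);
const id = URL.createObjectURL(blob);
const resolved = resolveObjectURL(id);
console.log(resolved.size);
// Prints: 5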
buffer.transcode(source, fromEnc, toEnc)
source {Buffer|Uint8Array} A Buffer or Uint8Array instance.
fromEnc {string} The current encoding.
toEnc {string} The target encoding.
Returns: {Buffer}
Re-encodes the given Buffer or Uint8Array instance from one
character encoding to another. Returns a new Buffer instance.
Throws if the fromEnc or toEnc specify invalid character encodings or
if conversion from fromEnc to toEnc is not permitted.
Encodings supported by buffer.transcode() are: 'ascii', 'utf8',
'utf16le', 'ucs2', 'latin1', and 'binary'.
The transcoding process will use substitution characters if a given
byte sequence cannot be adequately represented in the target
encoding. For instance:
import { Buffer, transcode } from 'node:buffer';
const newBuf = transcode(Buffer.from('€'), 'utf8', 'ascii');
console.log(newBuf.toString('ascii'));
// Prints: '?'
const { Buffer, transcode } = require('node:buffer');
const newBuf = transcode(Buffer.from('€'), 'utf8', 'ascii');
console.log(newBuf.toString('ascii'));
// Prints: '?'
Because the Euro (€) sign is not representable in US-ASCII, it is
replaced with ? in the transcoded Buffer.
Class: SlowBuffer
Stability: 0 - Deprecated: Use Buffer.allocUnsafeSlow() instead.
See Buffer.allocUnsafeSlow(). This was never a class in the sense that
the constructor always returned a Buffer instance, rather than a
SlowBuffer instance.
new SlowBuffer(size)
Stability: 0 - Deprecated: Use Buffer.allocUnsafeSlow() instead.
size {integer} The desired length of the new SlowBuffer.
See Buffer.allocUnsafeSlow().
Buffer constants
buffer.constants.MAX_LENGTH
{integer} The largest size allowed for a single Buffer instance.
On 32-bit architectures, this value currently is 2^30 - 1 (about 1 GiB).
On 64-bit architectures, this value currently is 2^32 (about 4 GiB).
It reflects v8::TypedArray::kMaxLength under the hood.
This value is also available as buffer.kMaxLength.
buffer.constants.MAX_STRING_LENGTH
{integer} The largest length allowed for a single string instance.
Represents the largest length that a string primitive can have,
counted in UTF-16 code units.
This value may depend on the JS engine that is being used.
Buffer.from(), Buffer.alloc(), and
Buffer.allocUnsafe()
In versions of Node.js prior to 6.0.0, Buffer instances were created
using the Buffer constructor function, which allocates the returned
Buffer differently based on what arguments are provided:
Passing a number as the first argument to Buffer() (e.g. new
Buffer(10)) allocates a new Buffer object of the specified size.
Prior to Node.js 8.0.0, the memory allocated for such Buffer
instances is not initialized and can contain sensitive data. Such
Buffer instances must be subsequently initialized by using either
buf.fill(0) or by writing to the entire Buffer before reading data
from the Buffer. While this behavior is intentional to improve
performance, development experience has demonstrated that a
more explicit distinction is required between creating a fast-but-
uninitialized Buffer versus creating a slower-but-safer Buffer.
Since Node.js 8.0.0, Buffer(num) and new Buffer(num) return a
Buffer with initialized memory.
Passing a string, array, or Buffer as the first argument copies the
passed object’s data into the Buffer.
Passing an ArrayBuffer or a SharedArrayBuffer returns a Buffer
that shares allocated memory with the given array buffer.
Because the behavior of new Buffer() is different depending on the
type of the first argument, security and reliability issues can be
inadvertently introduced into applications when argument validation
or Buffer initialization is not performed.
For example, if an attacker can cause an application to receive a
number where a string is expected, the application may call new
Buffer(100) instead of new Buffer("100"), leading it to allocate a 100
byte buffer instead of allocating a 3 byte buffer with content "100".
This is commonly possible using JSON API calls. Since JSON
distinguishes between numeric and string types, it allows injection of
numbers where a naively written application that does not validate
its input sufficiently might expect to always receive a string. Before
Node.js 8.0.0, the 100 byte buffer might contain arbitrary pre-
existing in-memory data, so may be used to expose in-memory
secrets to a remote attacker. Since Node.js 8.0.0, exposure of
memory cannot occur because the data is zero-filled. However, other
attacks are still possible, such as causing very large buffers to be
allocated by the server, leading to performance degradation or
crashing on memory exhaustion.
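An illustrative sketch of the validation problem described above (the
toBuffer() helper is hypothetical):
import { Buffer } from 'node:buffer';
// `input` might come from JSON.parse() of untrusted request data.
function toBuffer(input) {
  if (typeof input !== 'string') {
    // Without this check, a numeric input passed to the deprecated
    // `new Buffer(input)` would allocate that many bytes instead of
    // encoding the number's string form.
    throw new TypeError('input must be a string');
  }
  return Buffer.from(input);
}
console.log(toBuffer('100'));
// Prints: <Buffer 31 30 30>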
To make the creation of Buffer instances more reliable and less error-
prone, the various forms of the new Buffer() constructor have been
deprecated and replaced by separate Buffer.from(), Buffer.alloc(),
and Buffer.allocUnsafe() methods.
Developers should migrate all existing uses of the new Buffer()
constructors to one of these new APIs.
Buffer.from(array) returns a new Buffer that contains a copy of
the provided octets.
Buffer.from(arrayBuffer[, byteOffset[, length]]) returns a new
Buffer that shares the same allocated memory as the given
ArrayBuffer.
Buffer.from(buffer) returns a new Buffer that contains a copy of
the contents of the given Buffer.
Buffer.from(string[, encoding]) returns a new Buffer that
contains a copy of the provided string.
Buffer.alloc(size[, fill[, encoding]]) returns a new initialized
Buffer of the specified size. This method is slower than
Buffer.allocUnsafe(size) but guarantees that newly created
Buffer instances never contain old data that is potentially
sensitive. A TypeError will be thrown if size is not a number.
Buffer.allocUnsafe(size) and Buffer.allocUnsafeSlow(size) each
return a new uninitialized Buffer of the specified size. Because
the Buffer is uninitialized, the allocated segment of memory
might contain old data that is potentially sensitive.
Buffer instances returned by Buffer.allocUnsafe() and
Buffer.from(array) may be allocated off a shared internal memory
pool if size is less than or equal to half Buffer.poolSize. Instances
returned by Buffer.allocUnsafeSlow() never use the shared internal
memory pool.
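As a quick reference, a typical migration looks like this
(illustrative sketch):
import { Buffer } from 'node:buffer';
// new Buffer('abc')     -> Buffer.from('abc')
const fromString = Buffer.from('abc');
// new Buffer([1, 2, 3]) -> Buffer.from([1, 2, 3])
const fromArray = Buffer.from([1, 2, 3]);
// new Buffer(10)        -> Buffer.alloc(10), which zero-fills, or
// Buffer.allocUnsafe(10) when every byte will be overwritten anyway.
const zeroed = Buffer.alloc(10);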
The --zero-fill-buffers command-line
option
Node.js can be started using the --zero-fill-buffers command-line
option to cause all newly-allocated Buffer instances to be zero-filled
upon creation by default. Without the option, buffers created with
Buffer.allocUnsafe(), Buffer.allocUnsafeSlow(), and new
SlowBuffer(size) are not zero-filled. Use of this flag can have a
measurable negative impact on performance. Use the --zero-fill-
buffers option only when necessary to enforce that newly allocated
Buffer instances cannot contain old data that is potentially sensitive.
$ node --zero-fill-buffers
> Buffer.allocUnsafe(5);
<Buffer 00 00 00 00 00>
What makes Buffer.allocUnsafe() and
Buffer.allocUnsafeSlow() “unsafe”?
When calling Buffer.allocUnsafe() and Buffer.allocUnsafeSlow(), the
segment of allocated memory is uninitialized (it is not zeroed-out).
While this design makes the allocation of memory quite fast, the
allocated segment of memory might contain old data that is
potentially sensitive. Using a Buffer created by Buffer.allocUnsafe()
without completely overwriting the memory can allow this old data
to be leaked when the Buffer memory is read.
While there are clear performance advantages to using
Buffer.allocUnsafe(), extra care must be taken in order to avoid
introducing security vulnerabilities into an application.
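For example, a sketch of safe usage is to write every byte of an unsafely allocated Buffer before exposing it:
const buf = Buffer.allocUnsafe(16);
// Completely overwrite the allocation so no stale memory can be observed.
for (let i = 0; i < buf.length; i++) {
  buf[i] = i;
}
console.log(buf); // <Buffer 00 01 02 ... 0f>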
C++ addons
Addons are dynamically-linked shared objects written in C++. The
require() function can load addons as ordinary Node.js modules.
Addons provide an interface between JavaScript and C/C++
libraries.
There are three options for implementing addons: Node-API, nan, or
direct use of internal V8, libuv, and Node.js libraries. Unless there is
a need for direct access to functionality which is not exposed by
Node-API, use Node-API. Refer to C/C++ addons with Node-API for
more information on Node-API.
When not using Node-API, implementing addons is complicated,
involving knowledge of several components and APIs:
V8: the C++ library Node.js uses to provide the JavaScript
implementation. V8 provides the mechanisms for creating
objects, calling functions, etc. V8’s API is documented mostly in
the v8.h header file (deps/v8/include/v8.h in the Node.js source
tree), which is also available online.
libuv: The C library that implements the Node.js event loop, its
worker threads and all of the asynchronous behaviors of the
platform. It also serves as a cross-platform abstraction library,
giving easy, POSIX-like access across all major operating systems
to many common system tasks, such as interacting with the file
system, sockets, timers, and system events. libuv also provides a
threading abstraction similar to POSIX threads for more
sophisticated asynchronous addons that need to move beyond
the standard event loop. Addon authors should avoid blocking
the event loop with I/O or other time-intensive tasks by
offloading work via libuv to non-blocking system operations,
worker threads, or a custom use of libuv threads.
Internal Node.js libraries. Node.js itself exports C++ APIs that
addons can use, the most important of which is the
node::ObjectWrap class.
Node.js includes other statically linked libraries including
OpenSSL. These other libraries are located in the deps/ directory
in the Node.js source tree. Only the libuv, OpenSSL, V8, and zlib
symbols are purposefully re-exported by Node.js and may be
used to various extents by addons. See Linking to libraries
included with Node.js for additional information.
All of the following examples are available for download and may be
used as the starting-point for an addon.
Hello world
This “Hello world” example is a simple addon, written in C++, that is
the equivalent of the following JavaScript code:
module.exports.hello = () => 'world';
First, create the file hello.cc:
// hello.cc
#include <node.h>
namespace demo {
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;
void Method(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
args.GetReturnValue().Set(String::NewFromUtf8(
isolate, "world").ToLocalChecked());
}
void Initialize(Local<Object> exports) {
NODE_SET_METHOD(exports, "hello", Method);
}
NODE_MODULE(NODE_GYP_MODULE_NAME, Initialize)
} // namespace demo
All Node.js addons must export an initialization function following
the pattern:
void Initialize(Local<Object> exports);
NODE_MODULE(NODE_GYP_MODULE_NAME, Initialize)
There is no semi-colon after NODE_MODULE as it’s not a function (see
node.h).
The module_name must match the filename of the final binary
(excluding the .node suffix).
In the hello.cc example, then, the initialization function is Initialize
and the addon module name is addon.
When building addons with node-gyp, using the macro
NODE_GYP_MODULE_NAME as the first parameter of NODE_MODULE() will
ensure that the name of the final binary will be passed to
NODE_MODULE().
Addons defined with NODE_MODULE() cannot be loaded in multiple
contexts or multiple threads at the same time.
Context-aware addons
There are environments in which Node.js addons may need to be
loaded multiple times in multiple contexts. For example, the
Electron runtime runs multiple instances of Node.js in a single
process. Each instance will have its own require() cache, and thus
each instance will need a native addon to behave correctly when
loaded via require(). This means that the addon must support
multiple initializations.
A context-aware addon can be constructed by using the macro
NODE_MODULE_INITIALIZER, which expands to the name of a function
which Node.js will expect to find when it loads an addon. An addon
can thus be initialized as in the following example:
using namespace v8;
extern "C" NODE_MODULE_EXPORT void
NODE_MODULE_INITIALIZER(Local<Object> exports,
Local<Value> module,
Local<Context> context) {
/* Perform addon initialization steps here. */
}
Another option is to use the macro NODE_MODULE_INIT(), which will
also construct a context-aware addon. Unlike NODE_MODULE(), which is
used to construct an addon around a given addon initializer function,
NODE_MODULE_INIT() serves as the declaration of such an initializer to
be followed by a function body.
The following three variables may be used inside the function body
following an invocation of NODE_MODULE_INIT():
Local<Object> exports,
Local<Value> module, and
Local<Context> context
The choice to build a context-aware addon carries with it the
responsibility of carefully managing global static data. Since the
addon may be loaded multiple times, potentially even from different
threads, any global static data stored in the addon must be properly
protected, and must not contain any persistent references to
JavaScript objects. The reason for this is that JavaScript objects are
only valid in one context, and will likely cause a crash when accessed
from the wrong context or from a different thread than the one on
which they were created.
The context-aware addon can be structured to avoid global static
data by performing the following steps:
Define a class which will hold per-addon-instance data and
which has a static member of the form
static void DeleteInstance(void* data) {
// Cast `data` to an instance of the class and delete it.
}
Heap-allocate an instance of this class in the addon initializer.
This can be accomplished using the new keyword.
Call node::AddEnvironmentCleanupHook(), passing it the above-
created instance and a pointer to DeleteInstance(). This will
ensure the instance is deleted when the environment is torn
down.
Store the instance of the class in a v8::External, and
Pass the v8::External to all methods exposed to JavaScript by
passing it to v8::FunctionTemplate::New() or v8::Function::New()
which creates the native-backed JavaScript functions. The third
parameter of v8::FunctionTemplate::New() or v8::Function::New()
accepts the v8::External and makes it available in the native
callback using the v8::FunctionCallbackInfo::Data() method.
This will ensure that the per-addon-instance data reaches each
binding that can be called from JavaScript. The per-addon-instance
data must also be passed into any asynchronous callbacks the addon
may create.
The following example illustrates the implementation of a context-
aware addon:
#include <node.h>
using namespace v8;
class AddonData {
public:
explicit AddonData(Isolate* isolate):
call_count(0) {
// Ensure this per-addon-instance data is deleted at environment cleanup.
node::AddEnvironmentCleanupHook(isolate, DeleteInstance, this);
}
// Per-addon data.
int call_count;
static void DeleteInstance(void* data) {
delete static_cast<AddonData*>(data);
}
};
static void Method(const v8::FunctionCallbackInfo<v8::Value>& info) {
// Retrieve the per-addon-instance data.
AddonData* data =
reinterpret_cast<AddonData*>(info.Data().As<External>()->Value());
data->call_count++;
info.GetReturnValue().Set((double)data->call_count);
}
// Initialize this addon to be context-aware.
NODE_MODULE_INIT(/* exports, module, context */) {
Isolate* isolate = context->GetIsolate();
// Create a new instance of `AddonData` for this instance of the addon and
// tie its life cycle to that of the Node.js environment.
AddonData* data = new AddonData(isolate);
// Wrap the data in a `v8::External` so we can pass it to the method we
// expose.
Local<External> external = External::New(isolate, data);
// Expose the method `Method` to JavaScript, and make sure it receives the
// per-addon-instance data we created above by passing `external` as the
// third parameter to the `FunctionTemplate` constructor.
exports->Set(context,
String::NewFromUtf8(isolate, "method").ToLocalChecked(),
FunctionTemplate::New(isolate, Method, external)
->GetFunction(context).ToLocalChecked()).FromJust();
}
Worker support
In order to be loaded from multiple Node.js environments, such as a
main thread and a Worker thread, an addon needs to either:
Be a Node-API addon, or
Be declared as context-aware using NODE_MODULE_INIT() as
described above
In order to support Worker threads, addons need to clean up any
resources they may have allocated when such a thread exits. This
can be achieved through the usage of the AddEnvironmentCleanupHook()
function:
void AddEnvironmentCleanupHook(v8::Isolate* isolate,
void (*fun)(void* arg),
void* arg);
This function adds a hook that will run before a given Node.js
instance shuts down. If necessary, such hooks can be removed before
they are run using RemoveEnvironmentCleanupHook(), which has the
same signature. Callbacks are run in last-in first-out order.
If necessary, there is an additional pair of
AddEnvironmentCleanupHook() and RemoveEnvironmentCleanupHook()
overloads, where the cleanup hook takes a callback function. This
can be used for shutting down asynchronous resources, such as any
libuv handles registered by the addon.
The following addon.cc uses AddEnvironmentCleanupHook:
// addon.cc
#include <node.h>
#include <assert.h>
#include <stdlib.h>
using node::AddEnvironmentCleanupHook;
using v8::HandleScope;
using v8::Isolate;
using v8::Local;
using v8::Object;
// Note: In a real-world application, do not rely on static/global data.
static char cookie[] = "yum yum";
static int cleanup_cb1_called = 0;
static int cleanup_cb2_called = 0;
static void cleanup_cb1(void* arg) {
Isolate* isolate = static_cast<Isolate*>(arg);
HandleScope scope(isolate);
Local<Object> obj = Object::New(isolate);
assert(!obj.IsEmpty()); // assert VM is still alive
assert(obj->IsObject());
cleanup_cb1_called++;
}
static void cleanup_cb2(void* arg) {
assert(arg == static_cast<void*>(cookie));
cleanup_cb2_called++;
}
static void sanity_check(void*) {
assert(cleanup_cb1_called == 1);
assert(cleanup_cb2_called == 1);
}
// Initialize this addon to be context-aware.
NODE_MODULE_INIT(/* exports, module, context */) {
Isolate* isolate = context->GetIsolate();
AddEnvironmentCleanupHook(isolate, sanity_check, nullptr);
AddEnvironmentCleanupHook(isolate, cleanup_cb2, cookie);
AddEnvironmentCleanupHook(isolate, cleanup_cb1, isolate);
}
Test in JavaScript by running:
// test.js
require('./build/Release/addon');
Building
Once the source code has been written, it must be compiled into the
binary addon.node file. To do so, create a file called binding.gyp in the
top-level of the project describing the build configuration of the
module using a JSON-like format. This file is used by node-gyp, a
tool written specifically to compile Node.js addons.
{
"targets": [
{
"target_name": "addon",
"sources": [ "hello.cc" ]
}
]
}
A version of the node-gyp utility is bundled and distributed with
Node.js as part of npm. This version is not made directly available for
developers to use and is intended only to support the ability to use
the npm install command to compile and install addons. Developers
who wish to use node-gyp directly can install it using the command
npm install -g node-gyp. See the node-gyp installation instructions for
more information, including platform-specific requirements.
Once the binding.gyp file has been created, use node-gyp configure to
generate the appropriate project build files for the current platform.
This will generate either a Makefile (on Unix platforms) or a vcxproj
file (on Windows) in the build/ directory.
Next, invoke the node-gyp build command to generate the compiled
addon.node file. This will be put into the build/Release/ directory.
When using npm install to install a Node.js addon, npm uses its own
bundled version of node-gyp to perform this same set of actions,
generating a compiled version of the addon for the user’s platform on
demand.
Once built, the binary addon can be used from within Node.js by
pointing require() to the built addon.node module:
// hello.js
const addon = require('./build/Release/addon');
console.log(addon.hello());
// Prints: 'world'
Because the exact path to the compiled addon binary can vary
depending on how it is compiled (i.e. sometimes it may be in
./build/Debug/), addons can use the bindings package to load the
compiled module.
While the bindings package implementation is more sophisticated in
how it locates addon modules, it is essentially using a try…catch
pattern similar to:
try {
return require('./build/Release/addon.node');
} catch (err) {
return require('./build/Debug/addon.node');
}
Linking to libraries included with Node.js
Node.js uses statically linked libraries such as V8, libuv, and
OpenSSL. All addons are required to link to V8 and may link to any
of the other dependencies as well. Typically, this is as simple as
including the appropriate #include <...> statements (e.g. #include
<v8.h>) and node-gyp will locate the appropriate headers
automatically. However, there are a few caveats to be aware of:
When node-gyp runs, it will detect the specific release version of
Node.js and download either the full source tarball or just the
headers. If the full source is downloaded, addons will have
complete access to the full set of Node.js dependencies. However,
if only the Node.js headers are downloaded, then only the
symbols exported by Node.js will be available.
node-gyp can be run using the --nodedir flag pointing at a local
Node.js source image. Using this option, the addon will have
access to the full set of dependencies.
Loading addons using require()
The filename extension of the compiled addon binary is .node (as
opposed to .dll or .so). The require() function is written to look for
files with the .node file extension and initialize those as dynamically-
linked libraries.
When calling require(), the .node extension can usually be omitted
and Node.js will still find and initialize the addon. One caveat,
however, is that Node.js will first attempt to locate and load modules
or JavaScript files that happen to share the same base name. For
instance, if there is a file addon.js in the same directory as the binary
addon.node, then require('addon') will give precedence to the addon.js
file and load it instead.
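One way to sidestep that ambiguity, sketched below, is to require the binary with its full extension:
// Loads build/Release/addon.node even if an addon.js file exists
// alongside it.
const addon = require('./build/Release/addon.node');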
Native abstractions for Node.js
Each of the examples illustrated in this document directly use the
Node.js and V8 APIs for implementing addons. The V8 API can, and
has, changed dramatically from one V8 release to the next (and one
major Node.js release to the next). With each change, addons may
need to be updated and recompiled in order to continue functioning.
The Node.js release schedule is designed to minimize the frequency
and impact of such changes but there is little that Node.js can do to
ensure stability of the V8 APIs.
The Native Abstractions for Node.js (or nan) provide a set of tools
that addon developers are recommended to use to keep compatibility
between past and future releases of V8 and Node.js. See the nan
examples for an illustration of how it can be used.
Node-API
Stability: 2 - Stable
Node-API is an API for building native addons. It is independent
from the underlying JavaScript runtime (e.g. V8) and is maintained
as part of Node.js itself. This API will be Application Binary Interface
(ABI) stable across versions of Node.js. It is intended to insulate
addons from changes in the underlying JavaScript engine and allow
modules compiled for one version to run on later versions of Node.js
without recompilation. Addons are built/packaged with the same
approach/tools outlined in this document (node-gyp, etc.). The only
difference is the set of APIs that are used by the native code. Instead
of using the V8 or Native Abstractions for Node.js APIs, the
functions available in the Node-API are used.
Creating and maintaining an addon that benefits from the ABI
stability provided by Node-API carries with it certain
implementation considerations.
To use Node-API in the above “Hello world” example, replace the
content of hello.cc with the following. All other instructions remain
the same.
// hello.cc using Node-API
#include <node_api.h>
namespace demo {
napi_value Method(napi_env env, napi_callback_info args) {
napi_value greeting;
napi_status status;
status = napi_create_string_utf8(env, "world", NAPI_AUTO_LENGTH, &greeting);
if (status != napi_ok) return nullptr;
return greeting;
}
napi_value init(napi_env env, napi_value exports) {
napi_status status;
napi_value fn;
status = napi_create_function(env, nullptr, 0, Method, nullptr, &fn);
if (status != napi_ok) return nullptr;
status = napi_set_named_property(env, exports, "hello", fn);
if (status != napi_ok) return nullptr;
return exports;
}
NAPI_MODULE(NODE_GYP_MODULE_NAME, init)
} // namespace demo
The functions available and how to use them are documented in
C/C++ addons with Node-API.
Addon examples
Following are some example addons intended to help developers get
started. The examples use the V8 APIs. Refer to the online V8
reference for help with the various V8 calls, and V8’s Embedder’s
Guide for an explanation of several concepts used such as handles,
scopes, function templates, etc.
Each of these examples uses the following binding.gyp file:
{
"targets": [
{
"target_name": "addon",
"sources": [ "addon.cc" ]
}
]
}
In cases where there is more than one .cc file, simply add the
additional filename to the sources array:
"sources": ["addon.cc", "myexample.cc"]
Once the binding.gyp file is ready, the example addons can be
configured and built using node-gyp:
node-gyp configure build
Function arguments
Addons will typically expose objects and functions that can be
accessed from JavaScript running within Node.js. When functions
are invoked from JavaScript, the input arguments and return value
must be mapped to and from the C/C++ code.
The following example illustrates how to read function arguments
passed from JavaScript and how to return a result:
// addon.cc
#include <node.h>
namespace demo {
using v8::Exception;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::String;
using v8::Value;
// This is the implementation of the "add" method
// Input arguments are passed using the
// const FunctionCallbackInfo<Value>& args struct
void Add(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
// Check the number of arguments passed.
if (args.Length() < 2) {
// Throw an Error that is passed back to JavaScript
isolate->ThrowException(Exception::TypeError(
String::NewFromUtf8(isolate,
"Wrong number of arguments").ToLocalChecked()));
return;
}
// Check the argument types
if (!args[0]->IsNumber() || !args[1]->IsNumber()) {
isolate->ThrowException(Exception::TypeError(
String::NewFromUtf8(isolate,
"Wrong arguments").ToLocalChecked()));
return;
}
// Perform the operation
double value =
args[0].As<Number>()->Value() + args[1].As<Number>()->Value();
Local<Number> num = Number::New(isolate, value);
// Set the return value (using the passed in
// FunctionCallbackInfo<Value>&)
args.GetReturnValue().Set(num);
}
void Init(Local<Object> exports) {
NODE_SET_METHOD(exports, "add", Add);
}
NODE_MODULE(NODE_GYP_MODULE_NAME, Init)
} // namespace demo
Once compiled, the example addon can be required and used from
within Node.js:
// test.js
const addon = require('./build/Release/addon');
console.log('This should be eight:', addon.add(3, 5));
Callbacks
It is common practice within addons to pass JavaScript functions to
a C++ function and execute them from there. The following example
illustrates how to invoke such callbacks:
// addon.cc
#include <node.h>
namespace demo {
using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Null;
using v8::Object;
using v8::String;
using v8::Value;
void RunCallback(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
Local<Context> context = isolate->GetCurrentContext();
Local<Function> cb = Local<Function>::Cast(args[0]);
const unsigned argc = 1;
Local<Value> argv[argc] = {
String::NewFromUtf8(isolate,
"hello world").ToLocalChecked() };
cb->Call(context, Null(isolate), argc, argv).ToLocalChecked();
}
void Init(Local<Object> exports, Local<Object> module) {
NODE_SET_METHOD(module, "exports", RunCallback);
}
NODE_MODULE(NODE_GYP_MODULE_NAME, Init)
} // namespace demo
This example uses a two-argument form of Init() that receives the
full module object as the second argument. This allows the addon to
completely overwrite exports with a single function instead of adding
the function as a property of exports.
To test it, run the following JavaScript:
// test.js
const addon = require('./build/Release/addon');
addon((msg) => {
console.log(msg);
// Prints: 'hello world'
});
In this example, the callback function is invoked synchronously.
Object factory
Addons can create and return new objects from within a C++
function as illustrated in the following example. An object is created
and returned with a property msg that echoes the string passed to
createObject():
// addon.cc
#include <node.h>
namespace demo {
using v8::Context;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;
void CreateObject(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
Local<Context> context = isolate->GetCurrentContext();
Local<Object> obj = Object::New(isolate);
obj->Set(context,
String::NewFromUtf8(isolate,
"msg").ToLocalChecked(),
args[0]->ToString(context).ToLocalChecked())
.FromJust();
args.GetReturnValue().Set(obj);
}
void Init(Local<Object> exports, Local<Object> module) {
NODE_SET_METHOD(module, "exports", CreateObject);
}
NODE_MODULE(NODE_GYP_MODULE_NAME, Init)
} // namespace demo
To test it in JavaScript:
// test.js
const addon = require('./build/Release/addon');
const obj1 = addon('hello');
const obj2 = addon('world');
console.log(obj1.msg, obj2.msg);
// Prints: 'hello world'
Function factory
Another common scenario is creating JavaScript functions that wrap
C++ functions and returning those back to JavaScript:
// addon.cc
#include <node.h>
namespace demo {
using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;
void MyFunction(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
args.GetReturnValue().Set(String::NewFromUtf8(
isolate, "hello world").ToLocalChecked());
}
void CreateFunction(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
Local<Context> context = isolate->GetCurrentContext();
Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, MyFunction);
Local<Function> fn = tpl->GetFunction(context).ToLocalChecked();
// omit this to make it anonymous
fn->SetName(String::NewFromUtf8(
isolate, "theFunction").ToLocalChecked());
args.GetReturnValue().Set(fn);
}
void Init(Local<Object> exports, Local<Object> module) {
NODE_SET_METHOD(module, "exports", CreateFunction);
}
NODE_MODULE(NODE_GYP_MODULE_NAME, Init)
} // namespace demo
To test:
// test.js
const addon = require('./build/Release/addon');
const fn = addon();
console.log(fn());
// Prints: 'hello world'
Wrapping C++ objects
It is also possible to wrap C++ objects/classes in a way that allows
new instances to be created using the JavaScript new operator:
// addon.cc
#include <node.h>
#include "myobject.h"
namespace demo {
using v8::Local;
using v8::Object;
void InitAll(Local<Object> exports) {
MyObject::Init(exports);
}
NODE_MODULE(NODE_GYP_MODULE_NAME, InitAll)
} // namespace demo
Then, in myobject.h, the wrapper class inherits from node::ObjectWrap:
// myobject.h
#ifndef MYOBJECT_H
#define MYOBJECT_H
#include <node.h>
#include <node_object_wrap.h>
namespace demo {
class MyObject : public node::ObjectWrap {
public:
static void Init(v8::Local<v8::Object> exports);
private:
explicit MyObject(double value = 0);
~MyObject();
static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
static void PlusOne(const v8::FunctionCallbackInfo<v8::Value>& args);
double value_;
};
} // namespace demo
#endif
In myobject.cc, implement the various methods that are to be
exposed. Below, the method plusOne() is exposed by adding it to the
constructor’s prototype:
// myobject.cc
#include "myobject.h"
namespace demo {
using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::ObjectTemplate;
using v8::String;
using v8::Value;
MyObject::MyObject(double value) : value_(value) {
}
MyObject::~MyObject() {
}
void MyObject::Init(Local<Object> exports) {
Isolate* isolate = exports->GetIsolate();
Local<Context> context = isolate->GetCurrentContext();
Local<ObjectTemplate> addon_data_tpl = ObjectTemplate::New(isolate);
addon_data_tpl->SetInternalFieldCount(1);  // 1 field for the MyObject::New()
Local<Object> addon_data =
addon_data_tpl->NewInstance(context).ToLocalChecked();
// Prepare constructor template
Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New, addon_data);
tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject").ToLocalChecked());
tpl->InstanceTemplate()->SetInternalFieldCount(1);
// Prototype
NODE_SET_PROTOTYPE_METHOD(tpl, "plusOne", PlusOne);
Local<Function> constructor = tpl->GetFunction(context).ToLocalChecked();
addon_data->SetInternalField(0, constructor);
exports->Set(context, String::NewFromUtf8(
isolate, "MyObject").ToLocalChecked(),
constructor).FromJust();
}
void MyObject::New(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
Local<Context> context = isolate->GetCurrentContext();
if (args.IsConstructCall()) {
// Invoked as constructor: `new MyObject(...)`
double value = args[0]->IsUndefined() ?
0 : args[0]->NumberValue(context).FromMaybe(0);
MyObject* obj = new MyObject(value);
obj->Wrap(args.This());
args.GetReturnValue().Set(args.This());
} else {
// Invoked as plain function `MyObject(...)`, turn into construct call.
const int argc = 1;
Local<Value> argv[argc] = { args[0] };
Local<Function> cons =
args.Data().As<Object>()->GetInternalField(0)
.As<Value>().As<Function>();
Local<Object> result =
cons->NewInstance(context, argc, argv).ToLocalChecked();
args.GetReturnValue().Set(result);
}
}
void MyObject::PlusOne(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
MyObject* obj = ObjectWrap::Unwrap<MyObject>(args.Holder());
obj->value_ += 1;
args.GetReturnValue().Set(Number::New(isolate, obj->value_));
}
} // namespace demo
To build this example, the myobject.cc file must be added to the
binding.gyp:
{
"targets": [
{
"target_name": "addon",
"sources": [
"addon.cc",
"myobject.cc"
]
}
]
}
Test it with:
// test.js
const addon = require('./build/Release/addon');
const obj = new addon.MyObject(10);
console.log(obj.plusOne());
// Prints: 11
console.log(obj.plusOne());
// Prints: 12
console.log(obj.plusOne());
// Prints: 13
The destructor for a wrapper object will run when the object is
garbage-collected. For destructor testing, there are command-line
flags that can be used to make it possible to force garbage collection.
These flags are provided by the underlying V8 JavaScript engine.
They are subject to change or removal at any time. They are not
documented by Node.js or V8, and they should never be used outside
of testing.
During shutdown of the process or worker threads, destructors are
not called by the JS engine. It is therefore the responsibility of the
user to track these objects and ensure proper destruction to avoid
resource leaks.
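For illustration only, and with the caveats above about undocumented flags, a destructor test might use the V8 --expose-gc flag, which exposes a global gc() function:
// test-gc.js -- run with: node --expose-gc test-gc.js
// --expose-gc is a V8 flag; it is for testing only and may change.
let obj = new (require('./build/Release/addon').MyObject)(10);
obj = null;    // drop the only reference to the wrapper
global.gc();   // request a collection; the destructor may run as a result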
Factory of wrapped objects
Alternatively, it is possible to use a factory pattern to avoid explicitly
creating object instances using the JavaScript new operator:
const obj = addon.createObject();
// instead of:
// const obj = new addon.Object();
First, the createObject() method is implemented in addon.cc:
// addon.cc
#include <node.h>
#include "myobject.h"
namespace demo {
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;
void CreateObject(const FunctionCallbackInfo<Value>& args) {
MyObject::NewInstance(args);
}
void InitAll(Local<Object> exports, Local<Object> module) {
MyObject::Init(exports->GetIsolate());
NODE_SET_METHOD(module, "exports", CreateObject);
}
NODE_MODULE(NODE_GYP_MODULE_NAME, InitAll)
} // namespace demo
In myobject.h, the static method NewInstance() is added to handle
instantiating the object. This method takes the place of using new in
JavaScript:
// myobject.h
#ifndef MYOBJECT_H
#define MYOBJECT_H
#include <node.h>
#include <node_object_wrap.h>
namespace demo {
class MyObject : public node::ObjectWrap {
public:
static void Init(v8::Isolate* isolate);
static void NewInstance(const v8::FunctionCallbackInfo<v8::Value>& args);
private:
explicit MyObject(double value = 0);
~MyObject();
static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
static void PlusOne(const v8::FunctionCallbackInfo<v8::Value>& args);
static v8::Global<v8::Function> constructor;
double value_;
};
} // namespace demo
#endif
The implementation in myobject.cc is similar to the previous
example:
// myobject.cc
#include <node.h>
#include "myobject.h"
namespace demo {
using node::AddEnvironmentCleanupHook;
using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Global;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::String;
using v8::Value;
// Warning! This is not thread-safe, this addon cannot be used for worker
// threads.
Global<Function> MyObject::constructor;
MyObject::MyObject(double value) : value_(value) {
}
MyObject::~MyObject() {
}
void MyObject::Init(Isolate* isolate) {
// Prepare constructor template
Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject").ToLocalChecked());
tpl->InstanceTemplate()->SetInternalFieldCount(1);
// Prototype
NODE_SET_PROTOTYPE_METHOD(tpl, "plusOne", PlusOne);
Local<Context> context = isolate->GetCurrentContext();
constructor.Reset(isolate, tpl->GetFunction(context).ToLocalChecked());
AddEnvironmentCleanupHook(isolate, [](void*) {
constructor.Reset();
}, nullptr);
}
void MyObject::New(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
Local<Context> context = isolate->GetCurrentContext();
if (args.IsConstructCall()) {
// Invoked as constructor: `new MyObject(...)`
double value = args[0]->IsUndefined() ?
0 : args[0]->NumberValue(context).FromMaybe(0);
MyObject* obj = new MyObject(value);
obj->Wrap(args.This());
args.GetReturnValue().Set(args.This());
} else {
// Invoked as plain function `MyObject(...)`, turn into construct call.
const int argc = 1;
Local<Value> argv[argc] = { args[0] };
Local<Function> cons = Local<Function>::New(isolate, constructor);
Local<Object> instance =
cons->NewInstance(context, argc, argv).ToLocalChecked();
args.GetReturnValue().Set(instance);
}
}
void MyObject::NewInstance(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
const unsigned argc = 1;
Local<Value> argv[argc] = { args[0] };
Local<Function> cons = Local<Function>::New(isolate, constructor);
Local<Context> context = isolate->GetCurrentContext();
Local<Object> instance =
cons->NewInstance(context, argc, argv).ToLocalChecked();
args.GetReturnValue().Set(instance);
}
void MyObject::PlusOne(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
MyObject* obj = ObjectWrap::Unwrap<MyObject>(args.Holder());
obj->value_ += 1;
args.GetReturnValue().Set(Number::New(isolate, obj->value_));
}
} // namespace demo
Once again, to build this example, the myobject.cc file must be added
to the binding.gyp:
{
"targets": [
{
"target_name": "addon",
"sources": [
"addon.cc",
"myobject.cc"
]
}
]
}
Test it with:
// test.js
const createObject = require('./build/Release/addon');
const obj = createObject(10);
console.log(obj.plusOne());
// Prints: 11
console.log(obj.plusOne());
// Prints: 12
console.log(obj.plusOne());
// Prints: 13
const obj2 = createObject(20);
console.log(obj2.plusOne());
// Prints: 21
console.log(obj2.plusOne());
// Prints: 22
console.log(obj2.plusOne());
// Prints: 23
Passing wrapped objects around
In addition to wrapping and returning C++ objects, it is possible to
pass wrapped objects around by unwrapping them with the Node.js
helper function node::ObjectWrap::Unwrap. The following example
shows a function add() that can take two MyObject objects as input
arguments:
// addon.cc
#include <node.h>
#include <node_object_wrap.h>
#include "myobject.h"
namespace demo {
using v8::Context;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::String;
using v8::Value;
void CreateObject(const FunctionCallbackInfo<Value>& args) {
MyObject::NewInstance(args);
}
void Add(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
Local<Context> context = isolate->GetCurrentContext();
MyObject* obj1 = node::ObjectWrap::Unwrap<MyObject>(
args[0]->ToObject(context).ToLocalChecked());
MyObject* obj2 = node::ObjectWrap::Unwrap<MyObject>(
args[1]->ToObject(context).ToLocalChecked());
double sum = obj1->value() + obj2->value();
args.GetReturnValue().Set(Number::New(isolate, sum));
}
void InitAll(Local<Object> exports) {
MyObject::Init(exports->GetIsolate());
NODE_SET_METHOD(exports, "createObject", CreateObject);
NODE_SET_METHOD(exports, "add", Add);
}
NODE_MODULE(NODE_GYP_MODULE_NAME, InitAll)
} // namespace demo
In myobject.h, a new public method is added to allow access to
private values after unwrapping the object.
// myobject.h
#ifndef MYOBJECT_H
#define MYOBJECT_H
#include <node.h>
#include <node_object_wrap.h>
namespace demo {
class MyObject : public node::ObjectWrap {
public:
static void Init(v8::Isolate* isolate);
static void NewInstance(const v8::FunctionCallbackInfo<v8::Value>& args);
inline double value() const { return value_; }
private:
explicit MyObject(double value = 0);
~MyObject();
static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
static v8::Global<v8::Function> constructor;
double value_;
};
} // namespace demo
#endif
The implementation of myobject.cc is similar to before:
// myobject.cc
#include <node.h>
#include "myobject.h"
namespace demo {
using node::AddEnvironmentCleanupHook;
using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Global;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;
// Warning! This is not thread-safe, this addon cannot be used for worker
// threads.
Global<Function> MyObject::constructor;
MyObject::MyObject(double value) : value_(value) {
}
MyObject::~MyObject() {
}
void MyObject::Init(Isolate* isolate) {
// Prepare constructor template
Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject").ToLocalChecked());
tpl->InstanceTemplate()->SetInternalFieldCount(1);
Local<Context> context = isolate->GetCurrentContext();
constructor.Reset(isolate, tpl->GetFunction(context).ToLocalChecked());
AddEnvironmentCleanupHook(isolate, [](void*) {
constructor.Reset();
}, nullptr);
}
void MyObject::New(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
Local<Context> context = isolate->GetCurrentContext();
if (args.IsConstructCall()) {
// Invoked as constructor: `new MyObject(...)`
double value = args[0]->IsUndefined() ?
0 : args[0]->NumberValue(context).FromMaybe(0);
MyObject* obj = new MyObject(value);
obj->Wrap(args.This());
args.GetReturnValue().Set(args.This());
} else {
// Invoked as plain function `MyObject(...)`, turn into construct call.
const int argc = 1;
Local<Value> argv[argc] = { args[0] };
Local<Function> cons = Local<Function>::New(isolate, constructor);
Local<Object> instance =
cons->NewInstance(context, argc, argv).ToLocalChecked();
args.GetReturnValue().Set(instance);
}
}
void MyObject::NewInstance(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
const unsigned argc = 1;
Local<Value> argv[argc] = { args[0] };
Local<Function> cons = Local<Function>::New(isolate, constructor);
Local<Context> context = isolate->GetCurrentContext();
Local<Object> instance =
cons->NewInstance(context, argc, argv).ToLocalChecked();
args.GetReturnValue().Set(instance);
}
} // namespace demo
Test it with:
// test.js
const addon = require('./build/Release/addon');
const obj1 = addon.createObject(10);
const obj2 = addon.createObject(20);
const result = addon.add(obj1, obj2);
console.log(result);
// Prints: 30
Child process
Stability: 2 - Stable
The node:child_process module provides the ability to spawn
subprocesses in a manner that is similar, but not identical, to
popen(3). This capability is primarily provided by the
child_process.spawn() function:
const { spawn } = require('node:child_process');
const ls = spawn('ls', ['-lh', '/usr']);
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
console.error(`stderr: ${data}`);
});
ls.on('close', (code) => {
console.log(`child process exited with code ${code}`);
});
By default, pipes for stdin, stdout, and stderr are established between
the parent Node.js process and the spawned subprocess. These pipes
have limited (and platform-specific) capacity. If the subprocess
writes to stdout in excess of that limit without the output being
captured, the subprocess blocks waiting for the pipe buffer to accept
more data. This is identical to the behavior of pipes in the shell. Use
the { stdio: 'ignore' } option if the output will not be consumed.
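A sketch of discarding unneeded output so a subprocess can never block on a full pipe:
const { spawn } = require('node:child_process');
// No pipes are created, so the child can write freely without blocking.
const child = spawn('ls', ['-lh', '/usr'], { stdio: 'ignore' });
child.on('close', (code) => {
  console.log(`child exited with code ${code}`);
});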
The command lookup is performed using the options.env.PATH
environment variable if env is in the options object. Otherwise,
process.env.PATH is used. If options.env is set without PATH, lookup on
Unix is performed on a default search path of /usr/bin:/bin
(see your operating system's manual for execvpe/execvp); on
Windows, the current process's environment variable PATH is used.
On Windows, environment variables are case-insensitive. Node.js
lexicographically sorts the env keys and uses the first one that case-
insensitively matches. Only the first (in lexicographic order) entry will be
passed to the subprocess. This might lead to issues on Windows
when passing objects to the env option that have multiple variants of
the same key, such as PATH and Path.
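As a sketch, passing a minimal env with an explicit PATH keeps command lookup predictable on every platform:
const { spawn } = require('node:child_process');
// Only PATH is inherited; all other parent environment variables are
// hidden from the child.
const child = spawn('node', ['--version'], {
  env: { PATH: process.env.PATH },
});
child.stdout.on('data', (data) => console.log(data.toString()));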
The child_process.spawn() method spawns the child process
asynchronously, without blocking the Node.js event loop. The
child_process.spawnSync() function provides equivalent functionality
in a synchronous manner that blocks the event loop until the
spawned process either exits or is terminated.
For convenience, the node:child_process module provides a handful
of synchronous and asynchronous alternatives to
child_process.spawn() and child_process.spawnSync(). Each of these
alternatives are implemented on top of child_process.spawn() or
child_process.spawnSync().
child_process.exec(): spawns a shell and runs a command within
that shell, passing the stdout and stderr to a callback function
when complete.
child_process.execFile(): similar to child_process.exec() except
that it spawns the command directly without first spawning a
shell by default.
child_process.fork(): spawns a new Node.js process and invokes
a specified module with an IPC communication channel
established that allows sending messages between parent and
child.
child_process.execSync(): a synchronous version of
child_process.exec() that will block the Node.js event loop.
child_process.execFileSync(): a synchronous version of
child_process.execFile() that will block the Node.js event loop.
For certain use cases, such as automating shell scripts, the
synchronous counterparts may be more convenient. In many cases,
however, the synchronous methods can have significant impact on
performance due to stalling the event loop while spawned processes
complete.
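For instance, a short build script might reasonably block on execSync (a sketch):
const { execSync } = require('node:child_process');
// Blocks the event loop until the command completes; acceptable in a
// one-shot script, harmful in a server.
const version = execSync('node --version').toString().trim();
console.log(version);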
Asynchronous process creation
The child_process.spawn(), child_process.fork(),
child_process.exec(), and child_process.execFile() methods all
follow the idiomatic asynchronous programming pattern typical of
other Node.js APIs.
Each of the methods returns a ChildProcess instance. These objects
implement the Node.js EventEmitter API, allowing the parent process
to register listener functions that are called when certain events
occur during the life cycle of the child process.
The child_process.exec() and child_process.execFile() methods
additionally allow for an optional callback function to be specified
that is invoked when the child process terminates.
Spawning .bat and .cmd files on Windows
The importance of the distinction between child_process.exec() and
child_process.execFile() can vary based on platform. On Unix-type
operating systems (Unix, Linux, macOS) child_process.execFile()
can be more efficient because it does not spawn a shell by default. On
Windows, however, .bat and .cmd files are not executable on their
own without a terminal, and therefore cannot be launched using
child_process.execFile(). When running on Windows, .bat and .cmd
files can be invoked using child_process.spawn() with the shell option
set, with child_process.exec(), or by spawning cmd.exe and passing
the .bat or .cmd file as an argument (which is what the shell option
and child_process.exec() do). In any case, if the script filename
contains spaces it needs to be quoted.
// On Windows Only...
const { spawn } = require('node:child_process');
const bat = spawn('cmd.exe', ['/c', 'my.bat']);
bat.stdout.on('data', (data) => {
console.log(data.toString());
});
bat.stderr.on('data', (data) => {
console.error(data.toString());
});
bat.on('exit', (code) => {
console.log(`Child exited with code ${code}`);
});
// OR...
const { exec, spawn } = require('node:child_process');
exec('my.bat', (err, stdout, stderr) => {
if (err) {
console.error(err);
return;
}
console.log(stdout);
});
// Script with spaces in the filename:
const bat = spawn('"my script.cmd"', ['a', 'b'], { shell: true });
// or:
exec('"my script.cmd" a b', (err, stdout, stderr) => {
// ...
});
child_process.exec(command[, options][, callback])
command {string} The command to run, with space-separated
arguments.
options {Object}
cwd {string|URL} Current working directory of the child
process. Default: process.cwd().
env {Object} Environment key-value pairs. Default:
process.env.
encoding {string} Default: 'utf8'
shell {string} Shell to execute the command with. See Shell
requirements and Default Windows shell. Default: '/bin/sh'
on Unix, process.env.ComSpec on Windows.
signal {AbortSignal} allows aborting the child process using
an AbortSignal.
timeout {number} Default: 0
maxBuffer {number} Largest amount of data in bytes allowed
on stdout or stderr. If exceeded, the child process is
terminated and any output is truncated. See caveat at
maxBuffer and Unicode. Default: 1024 * 1024.
killSignal {string|integer} Default: 'SIGTERM'
uid {number} Sets the user identity of the process (see
setuid(2)).
gid {number} Sets the group identity of the process (see
setgid(2)).
windowsHide {boolean} Hide the subprocess console window
that would normally be created on Windows systems.
Default: false.
callback {Function} called with the output when process
terminates.
error {Error}
stdout {string|Buffer}
stderr {string|Buffer}
Returns: {ChildProcess}
Spawns a shell then executes the command within that shell, buffering
any generated output. The command string passed to the exec function
is processed directly by the shell and special characters (vary based
on shell) need to be dealt with accordingly:
const { exec } = require('node:child_process');
exec('"/path/to/test file/test.sh" arg1 arg2');
// Double quotes are used so that the space in the path is not inter
// a delimiter of multiple arguments.
exec('echo "The \\$HOME variable is $HOME"');
// The $HOME variable is escaped in the first instance, but not in the second.
// The $HOME variable is escaped in the first instance, but not in t
Never pass unsanitized user input to this function. Any
input containing shell metacharacters may be used to
trigger arbitrary command execution.
If a callback function is provided, it is called with the arguments
(error, stdout, stderr). On success, error will be null. On error,
error will be an instance of Error. The error.code property will be the
exit code of the process. By convention, any exit code other than 0
indicates an error. error.signal will be the signal that terminated the
process.
The stdout and stderr arguments passed to the callback will contain
the stdout and stderr output of the child process. By default, Node.js
will decode the output as UTF-8 and pass strings to the callback. The
encoding option can be used to specify the character encoding used to
decode the stdout and stderr output. If encoding is 'buffer', or an
unrecognized character encoding, Buffer objects will be passed to the
callback instead.
const { exec } = require('node:child_process');
exec('cat *.js missing_file | wc -l', (error, stdout, stderr) => {
if (error) {
console.error(`exec error: ${error}`);
return;
}
console.log(`stdout: ${stdout}`);
console.error(`stderr: ${stderr}`);
});
If timeout is greater than 0, the parent will send the signal identified
by the killSignal property (the default is 'SIGTERM') if the child runs
longer than timeout milliseconds.
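A sketch of the timeout behavior (this assumes a Unix-like system where sleep is available):
const { exec } = require('node:child_process');
// The child is sent SIGKILL if it is still running after 500 ms.
exec('sleep 5', { timeout: 500, killSignal: 'SIGKILL' }, (error) => {
  if (error) {
    console.error(error.signal); // 'SIGKILL'
  }
});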
Unlike the exec(3) POSIX system call, child_process.exec() does not
replace the existing process and uses a shell to execute the command.
If this method is invoked as its util.promisify()ed version, it returns
a Promise for an Object with stdout and stderr properties. The
returned ChildProcess instance is attached to the Promise as a child
property. In case of an error (including any error resulting in an exit
code other than 0), a rejected promise is returned, with the same
error object given in the callback, but with two additional properties
stdout and stderr.
const util = require('node:util');
const exec = util.promisify(require('node:child_process').exec);
async function lsExample() {
const { stdout, stderr } = await exec('ls');
console.log('stdout:', stdout);
console.error('stderr:', stderr);
}
lsExample();
If the signal option is enabled, calling .abort() on the corresponding
AbortController is similar to calling .kill() on the child process
except the error passed to the callback will be an AbortError:
const { exec } = require('node:child_process');
const controller = new AbortController();
const { signal } = controller;
const child = exec('grep ssh', { signal }, (error) => {
console.error(error); // an AbortError
});
controller.abort();
child_process.execFile(file[, args][, options][, callback])
file {string} The name or path of the executable file to run.
args {string[]} List of string arguments.
options {Object}
cwd {string|URL} Current working directory of the child
process.
env {Object} Environment key-value pairs. Default:
process.env.
encoding {string} Default: 'utf8'
timeout {number} Default: 0
maxBuffer {number} Largest amount of data in bytes allowed
on stdout or stderr. If exceeded, the child process is
terminated and any output is truncated. See caveat at
maxBuffer and Unicode. Default: 1024 * 1024.
killSignal {string|integer} Default: 'SIGTERM'
uid {number} Sets the user identity of the process (see
setuid(2)).
gid {number} Sets the group identity of the process (see
setgid(2)).
windowsHide {boolean} Hide the subprocess console window
that would normally be created on Windows systems.
Default: false.
windowsVerbatimArguments {boolean} No quoting or escaping of
arguments is done on Windows. Ignored on Unix. Default:
false.
shell {boolean|string} If true, runs command inside of a shell.
Uses '/bin/sh' on Unix, and process.env.ComSpec on
Windows. A different shell can be specified as a string. See
Shell requirements and Default Windows shell. Default:
false (no shell).
signal {AbortSignal} allows aborting the child process using
an AbortSignal.
callback {Function} Called with the output when process
terminates.
error {Error}
stdout {string|Buffer}
stderr {string|Buffer}
Returns: {ChildProcess}
The child_process.execFile() function is similar to
child_process.exec() except that it does not spawn a shell by default.
Rather, the specified executable file is spawned directly as a new
process making it slightly more efficient than child_process.exec().
The same options as child_process.exec() are supported. Since a
shell is not spawned, behaviors such as I/O redirection and file
globbing are not supported.
const { execFile } = require('node:child_process');
const child = execFile('node', ['--version'], (error, stdout, stderr) => {
if (error) {
throw error;
}
console.log(stdout);
});
The stdout and stderr arguments passed to the callback will contain
the stdout and stderr output of the child process. By default, Node.js
will decode the output as UTF-8 and pass strings to the callback. The
encoding option can be used to specify the character encoding used to
decode the stdout and stderr output. If encoding is 'buffer', or an
unrecognized character encoding, Buffer objects will be passed to the
callback instead.
If this method is invoked as its util.promisify()ed version, it returns
a Promise for an Object with stdout and stderr properties. The
returned ChildProcess instance is attached to the Promise as a child
property. In case of an error (including any error resulting in an exit
code other than 0), a rejected promise is returned, with the same
error object given in the callback, but with two additional properties
stdout and stderr.
const util = require('node:util');
const execFile = util.promisify(require('node:child_process').execFile);
async function getVersion() {
const { stdout } = await execFile('node', ['--version']);
console.log(stdout);
}
getVersion();
If the shell option is enabled, do not pass unsanitized user
input to this function. Any input containing shell
metacharacters may be used to trigger arbitrary command
execution.
If the signal option is enabled, calling .abort() on the corresponding
AbortController is similar to calling .kill() on the child process
except the error passed to the callback will be an AbortError:
const { execFile } = require('node:child_process');
const controller = new AbortController();
const { signal } = controller;
const child = execFile('node', ['--version'], { signal }, (error) => {
console.error(error); // an AbortError
});
controller.abort();
child_process.fork(modulePath[, args][, options])
modulePath {string|URL} The module to run in the child.
args {string[]} List of string arguments.
options {Object}
cwd {string|URL} Current working directory of the child
process.
detached {boolean} Prepare child to run independently of its
parent process. Specific behavior depends on the platform
(see options.detached).
env {Object} Environment key-value pairs. Default:
process.env.
execPath {string} Executable used to create the child process.
execArgv {string[]} List of string arguments passed to the
executable. Default: process.execArgv.
gid {number} Sets the group identity of the process (see
setgid(2)).
serialization {string} Specify the kind of serialization used
for sending messages between processes. Possible values are
'json' and 'advanced'. See Advanced serialization for more
details. Default: 'json'.
signal {AbortSignal} Allows closing the child process using
an AbortSignal.
killSignal {string|integer} The signal value to be used when
the spawned process will be killed by timeout or abort signal.
Default: 'SIGTERM'.
silent {boolean} If true, stdin, stdout, and stderr of the child
will be piped to the parent, otherwise they will be inherited
from the parent, see the 'pipe' and 'inherit' options for
child_process.spawn()’s stdio for more details. Default:
false.
stdio {Array|string} See child_process.spawn()’s stdio. When
this option is provided, it overrides silent. If the array
variant is used, it must contain exactly one item with value
'ipc' or an error will be thrown. For instance [0, 1, 2,
'ipc'].
uid {number} Sets the user identity of the process (see
setuid(2)).
windowsVerbatimArguments {boolean} No quoting or escaping of
arguments is done on Windows. Ignored on Unix. Default:
false.
timeout {number} In milliseconds the maximum amount of
time the process is allowed to run. Default: undefined.
Returns: {ChildProcess}
The child_process.fork() method is a special case of
child_process.spawn() used specifically to spawn new Node.js
processes. Like child_process.spawn(), a ChildProcess object is
returned. The returned ChildProcess will have an additional
communication channel built-in that allows messages to be passed
back and forth between the parent and child. See subprocess.send()
for details.
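A minimal sketch of that channel, with the parent and child sharing one file for brevity:
// ipc.js -- run with: node ipc.js
const { fork } = require('node:child_process');
if (process.send) {
  // Forked child: process.send() exists only when an IPC channel is open.
  process.on('message', (msg) => {
    process.send(`pong (received "${msg}")`);
    process.exit(0);
  });
} else {
  const child = fork(__filename);
  child.on('message', (msg) => console.log(msg));
  child.send('ping');
}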
Keep in mind that spawned Node.js child processes are independent
of the parent with exception of the IPC communication channel that
is established between the two. Each process has its own memory,
with their own V8 instances. Because of the additional resource
allocations required, spawning a large number of child Node.js
processes is not recommended.
By default, child_process.fork() will spawn new Node.js instances
using the process.execPath of the parent process. The execPath
property in the options object allows for an alternative execution path
to be used.
Node.js processes launched with a custom execPath will communicate
with the parent process using the file descriptor (fd) identified using
the environment variable NODE_CHANNEL_FD on the child process.
Unlike the fork(2) POSIX system call, child_process.fork() does not
clone the current process.
The shell option available in child_process.spawn() is not supported
by child_process.fork() and will be ignored if set.
If the signal option is enabled, calling .abort() on the corresponding
AbortController is similar to calling .kill() on the child process
except the error passed to the callback will be an AbortError:
if (process.argv[2] === 'child') {
setTimeout(() => {
console.log(`Hello from ${process.argv[2]}!`);
}, 1_000);
} else {
const { fork } = require('node:child_process');
const controller = new AbortController();
const { signal } = controller;
const child = fork(__filename, ['child'], { signal });
child.on('error', (err) => {
// This will be called with err being an AbortError if the controller aborts
});
controller.abort(); // Stops the child process
}
child_process.spawn(command[, args][, options])
command {string} The command to run.
args {string[]} List of string arguments.
options {Object}
cwd {string|URL} Current working directory of the child
process.
env {Object} Environment key-value pairs. Default:
process.env.
argv0 {string} Explicitly set the value of argv[0] sent to the
child process. This will be set to command if not specified.
stdio {Array|string} Child’s stdio configuration (see
options.stdio).
detached {boolean} Prepare child to run independently of its
parent process. Specific behavior depends on the platform
(see options.detached).
uid {number} Sets the user identity of the process (see
setuid(2)).
gid {number} Sets the group identity of the process (see
setgid(2)).
serialization {string} Specify the kind of serialization used
for sending messages between processes. Possible values are
'json' and 'advanced'. See Advanced serialization for more
details. Default: 'json'.
shell {boolean|string} If true, runs command inside of a shell.
Uses '/bin/sh' on Unix, and process.env.ComSpec on
Windows. A different shell can be specified as a string. See
Shell requirements and Default Windows shell. Default:
false (no shell).
windowsVerbatimArguments {boolean} No quoting or escaping of
arguments is done on Windows. Ignored on Unix. This is set
to true automatically when shell is specified and is CMD.
Default: false.
windowsHide {boolean} Hide the subprocess console window
that would normally be created on Windows systems.
Default: false.
signal {AbortSignal} Allows aborting the child process using
an AbortSignal.
timeout {number} In milliseconds the maximum amount of
time the process is allowed to run. Default: undefined.
killSignal {string|integer} The signal value to be used when
the spawned process will be killed by timeout or abort signal.
Default: 'SIGTERM'.
Returns: {ChildProcess}
The child_process.spawn() method spawns a new process using the
given command, with command-line arguments in args. If omitted, args
defaults to an empty array.
If the shell option is enabled, do not pass unsanitized user
input to this function. Any input containing shell
metacharacters may be used to trigger arbitrary command
execution.
A third argument may be used to specify additional options, with
these defaults:
const defaults = {
cwd: undefined,
env: process.env,
};
Use cwd to specify the working directory from which the process is
spawned. If not given, the default is to inherit the current working
directory. If given, but the path does not exist, the child process
emits an ENOENT error and exits immediately. ENOENT is also emitted
when the command does not exist.
Use env to specify environment variables that will be visible to the
new process; the default is process.env.
undefined values in env will be ignored.
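For example, a minimal sketch (assuming a Unix-like system where the
printenv utility is available) that passes one extra variable on top of
the parent's environment:

const { spawn } = require('node:child_process');
// Spread process.env so the child still sees PATH and friends,
// then add one illustrative variable (GREETING is an assumption here).
const child = spawn('printenv', ['GREETING'], {
  env: { ...process.env, GREETING: 'hello' },
});
child.stdout.on('data', (data) => console.log(`child saw: ${data}`));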
Example of running ls -lh /usr, capturing stdout, stderr, and the
exit code:
const { spawn } = require('node:child_process');
const ls = spawn('ls', ['-lh', '/usr']);
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
console.error(`stderr: ${data}`);
});
ls.on('close', (code) => {
console.log(`child process exited with code ${code}`);
});
Example: A very elaborate way to run ps ax | grep ssh
const { spawn } = require('node:child_process');
const ps = spawn('ps', ['ax']);
const grep = spawn('grep', ['ssh']);
ps.stdout.on('data', (data) => {
grep.stdin.write(data);
});
ps.stderr.on('data', (data) => {
console.error(`ps stderr: ${data}`);
});
ps.on('close', (code) => {
if (code !== 0) {
console.log(`ps process exited with code ${code}`);
}
grep.stdin.end();
});
grep.stdout.on('data', (data) => {
console.log(data.toString());
});
grep.stderr.on('data', (data) => {
console.error(`grep stderr: ${data}`);
});
grep.on('close', (code) => {
if (code !== 0) {
console.log(`grep process exited with code ${code}`);
}
});
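The same pipeline can also be expressed by piping the child streams
directly; a minimal sketch, equivalent in effect to the example above:

const { spawn } = require('node:child_process');
const ps = spawn('ps', ['ax']);
const grep = spawn('grep', ['ssh']);
// pipe() forwards ps's stdout into grep's stdin and ends it when ps exits.
ps.stdout.pipe(grep.stdin);
grep.stdout.pipe(process.stdout);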
Example of checking for failed spawn:
const { spawn } = require('node:child_process');
const subprocess = spawn('bad_command');
subprocess.on('error', (err) => {
console.error('Failed to start subprocess.');
});
Certain platforms (macOS, Linux) will use the value of argv[0] for the
process title while others (Windows, SunOS) will use command.
Node.js overwrites argv[0] with process.execPath on startup, so
process.argv[0] in a Node.js child process will not match the argv0
parameter passed to spawn from the parent. Retrieve it with the
process.argv0 property instead.
If the signal option is enabled, calling .abort() on the corresponding
AbortController is similar to calling .kill() on the child process
except the error passed to the callback will be an AbortError:
const { spawn } = require('node:child_process');
const controller = new AbortController();
const { signal } = controller;
const grep = spawn('grep', ['ssh'], { signal });
grep.on('error', (err) => {
// This will be called with err being an AbortError if the controller aborts
});
controller.abort(); // Stops the child process
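The timeout and killSignal options can be combined in a similar way;
a minimal sketch, assuming a Unix-like system where the sleep utility
is available:

const { spawn } = require('node:child_process');
// The child is sent 'SIGKILL' if it is still running after 2000 ms.
const child = spawn('sleep', ['10'], { timeout: 2000, killSignal: 'SIGKILL' });
child.on('exit', (code, signal) => {
  // Prints: exited with code null, signal SIGKILL
  console.log(`exited with code ${code}, signal ${signal}`);
});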
options.detached
On Windows, setting options.detached to true makes it possible for
the child process to continue running after the parent exits. The child
will have its own console window. Once enabled for a child process, it
cannot be disabled.
On non-Windows platforms, if options.detached is set to true, the
child process will be made the leader of a new process group and
session. Child processes may continue running after the parent exits
regardless of whether they are detached or not. See setsid(2) for
more information.
By default, the parent will wait for the detached child to exit. To
prevent the parent from waiting for a given subprocess to exit, use the
subprocess.unref() method. Doing so will cause the parent’s event
loop to not include the child in its reference count, allowing the
parent to exit independently of the child, unless there is an
established IPC channel between the child and the parent.
When using the detached option to start a long-running process, the
process will not stay running in the background after the parent exits
unless it is provided with a stdio configuration that is not connected
to the parent. If the parent’s stdio is inherited, the child will remain
attached to the controlling terminal.
Example of a long-running process, by detaching and also ignoring
its parent stdio file descriptors, in order to ignore the parent’s
termination:
const { spawn } = require('node:child_process');
const subprocess = spawn(process.argv[0], ['child_program.js'], {
detached: true,
stdio: 'ignore',
});
subprocess.unref();
Alternatively one can redirect the child process’ output into files:
const fs = require('node:fs');
const { spawn } = require('node:child_process');
const out = fs.openSync('./out.log', 'a');
const err = fs.openSync('./out.log', 'a');
const subprocess = spawn('prg', [], {
detached: true,
stdio: [ 'ignore', out, err ],
});
subprocess.unref();
options.stdio
The options.stdio option is used to configure the pipes that are
established between the parent and child process. By default, the
child’s stdin, stdout, and stderr are redirected to corresponding
subprocess.stdin, subprocess.stdout, and subprocess.stderr streams
on the ChildProcess object. This is equivalent to setting the
options.stdio equal to ['pipe', 'pipe', 'pipe'].
For convenience, options.stdio may be one of the following strings:
'pipe': equivalent to ['pipe', 'pipe', 'pipe'] (the default)
'overlapped': equivalent to ['overlapped', 'overlapped',
'overlapped']
'ignore': equivalent to ['ignore', 'ignore', 'ignore']
'inherit': equivalent to ['inherit', 'inherit', 'inherit'] or [0,
1, 2]
Otherwise, the value of options.stdio is an array where each index
corresponds to an fd in the child. The fds 0, 1, and 2 correspond to
stdin, stdout, and stderr, respectively. Additional fds can be specified
to create additional pipes between the parent and child. The value is
one of the following:
1. 'pipe': Create a pipe between the child process and the parent
process. The parent end of the pipe is exposed to the parent as a
property on the child_process object as subprocess.stdio[fd].
Pipes created for fds 0, 1, and 2 are also available as
subprocess.stdin, subprocess.stdout and subprocess.stderr,
respectively. These are not actual Unix pipes and therefore the
child process cannot use them via their descriptor files,
e.g. /dev/fd/2 or /dev/stdout.
2. 'overlapped': Same as 'pipe' except that the FILE_FLAG_OVERLAPPED
flag is set on the handle. This is necessary for overlapped I/O on
the child process’s stdio handles. See the docs for more details.
This is exactly the same as 'pipe' on non-Windows systems.
3. 'ipc': Create an IPC channel for passing messages/file
descriptors between parent and child. A ChildProcess may have at
most one IPC stdio file descriptor. Setting this option enables the
subprocess.send() method. If the child is a Node.js process, the
presence of an IPC channel will enable process.send() and
process.disconnect() methods, as well as 'disconnect' and
'message' events within the child.
Accessing the IPC channel fd in any way other than
process.send() or using the IPC channel with a child process that
is not a Node.js instance is not supported.
4. 'ignore': Instructs Node.js to ignore the fd in the child. While
Node.js will always open fds 0, 1, and 2 for the processes it
spawns, setting the fd to 'ignore' will cause Node.js to open
/dev/null and attach it to the child’s fd.
5. 'inherit': Pass through the corresponding stdio stream to/from
the parent process. In the first three positions, this is equivalent
to process.stdin, process.stdout, and process.stderr, respectively.
In any other position, equivalent to 'ignore'.
6. {Stream} object: Share a readable or writable stream that refers
to a tty, file, socket, or a pipe with the child process. The stream’s
underlying file descriptor is duplicated in the child process to the
fd that corresponds to the index in the stdio array. The stream
must have an underlying descriptor (file streams do not until the
'open' event has occurred).
7. Positive integer: The integer value is interpreted as a file
descriptor that is open in the parent process. It is shared with the
child process, similar to how {Stream} objects can be shared.
Passing sockets is not supported on Windows.
8. null, undefined: Use default value. For stdio fds 0, 1, and 2 (in
other words, stdin, stdout, and stderr) a pipe is created. For fd 3
and up, the default is 'ignore'.
const { spawn } = require('node:child_process');
// Child will use parent's stdios.
spawn('prg', [], { stdio: 'inherit' });
// Spawn child sharing only stderr.
spawn('prg', [], { stdio: ['pipe', 'pipe', process.stderr] });
// Open an extra fd=4, to interact with programs presenting a
// startd-style interface.
spawn('prg', [], { stdio: ['pipe', null, null, null, 'pipe'] });
It is worth noting that when an IPC channel is established between
the parent and child processes, and the child is a Node.js process,
the child is launched with the IPC channel unreferenced (using
unref()) until the child registers an event handler for the
'disconnect' event or the 'message' event. This allows the child to exit
normally without the process being held open by the open IPC
channel.
On Unix-like operating systems, the child_process.spawn() method
performs memory operations synchronously before decoupling the
event loop from the child. Applications with a large memory
footprint may find frequent child_process.spawn() calls to be a
bottleneck. For more information, see V8 issue 7381.
See also: child_process.exec() and child_process.fork().
Synchronous process creation
The child_process.spawnSync(), child_process.execSync(), and
child_process.execFileSync() methods are synchronous and will
block the Node.js event loop, pausing execution of any additional
code until the spawned process exits.
Blocking calls like these are mostly useful for simplifying
general-purpose scripting tasks and for simplifying the
loading/processing of application configuration at startup.
child_process.execFileSync(file[, args][, options])
file {string} The name or path of the executable file to run.
args {string[]} List of string arguments.
options {Object}
cwd {string|URL} Current working directory of the child
process.
input {string|Buffer|TypedArray|DataView} The value which
will be passed as stdin to the spawned process. If stdio[0] is
set to 'pipe', supplying this value will override stdio[0].
stdio {string|Array} Child’s stdio configuration. stderr by
default will be output to the parent process’ stderr unless
stdio is specified. Default: 'pipe'.
env {Object} Environment key-value pairs. Default:
process.env.
uid {number} Sets the user identity of the process (see
setuid(2)).
gid {number} Sets the group identity of the process (see
setgid(2)).
timeout {number} In milliseconds the maximum amount of
time the process is allowed to run. Default: undefined.
killSignal {string|integer} The signal value to be used when
the spawned process will be killed. Default: 'SIGTERM'.
maxBuffer {number} Largest amount of data in bytes allowed
on stdout or stderr. If exceeded, the child process is
terminated. See caveat at maxBuffer and Unicode. Default:
1024 * 1024.
encoding {string} The encoding used for all stdio inputs and
outputs. Default: 'buffer'.
windowsHide {boolean} Hide the subprocess console window
that would normally be created on Windows systems.
Default: false.
shell {boolean|string} If true, runs command inside of a shell.
Uses '/bin/sh' on Unix, and process.env.ComSpec on
Windows. A different shell can be specified as a string. See
Shell requirements and Default Windows shell. Default:
false (no shell).
Returns: {Buffer|string} The stdout from the command.
The child_process.execFileSync() method is generally identical to
child_process.execFile() with the exception that the method will not
return until the child process has fully closed. When a timeout has
been encountered and killSignal is sent, the method won’t return
until the process has completely exited.
If the child process intercepts and handles the SIGTERM signal and
does not exit, the parent process will still wait until the child process
has exited.
If the process times out or has a non-zero exit code, this method will
throw an Error that will include the full result of the underlying
child_process.spawnSync().
If the shell option is enabled, do not pass unsanitized user
input to this function. Any input containing shell
metacharacters may be used to trigger arbitrary command
execution.
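As a rough sketch of the error handling described above (the thrown
Error carries the status, signal, stdout, and stderr properties of the
underlying child_process.spawnSync() result):

const { execFileSync } = require('node:child_process');
try {
  const stdout = execFileSync('node', ['--version'], { encoding: 'utf8' });
  console.log(`Node.js version: ${stdout.trim()}`);
} catch (err) {
  // err.status, err.signal, err.stdout, and err.stderr come from spawnSync().
  console.error(`failed with status ${err.status}: ${err.message}`);
}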
child_process.execSync(command[, options])
command {string} The command to run.
options {Object}
cwd {string|URL} Current working directory of the child
process.
input {string|Buffer|TypedArray|DataView} The value which
will be passed as stdin to the spawned process. If stdio[0] is
set to 'pipe', supplying this value will override stdio[0].
stdio {string|Array} Child’s stdio configuration. stderr by
default will be output to the parent process’ stderr unless
stdio is specified. Default: 'pipe'.
env {Object} Environment key-value pairs. Default:
process.env.
shell {string} Shell to execute the command with. See Shell
requirements and Default Windows shell. Default: '/bin/sh'
on Unix, process.env.ComSpec on Windows.
uid {number} Sets the user identity of the process. (See
setuid(2)).
gid {number} Sets the group identity of the process. (See
setgid(2)).
timeout {number} In milliseconds the maximum amount of
time the process is allowed to run. Default: undefined.
killSignal {string|integer} The signal value to be used when
the spawned process will be killed. Default: 'SIGTERM'.
maxBuffer {number} Largest amount of data in bytes allowed
on stdout or stderr. If exceeded, the child process is
terminated and any output is truncated. See caveat at
maxBuffer and Unicode. Default: 1024 * 1024.
encoding {string} The encoding used for all stdio inputs and
outputs. Default: 'buffer'.
windowsHide {boolean} Hide the subprocess console window
that would normally be created on Windows systems.
Default: false.
Returns: {Buffer|string} The stdout from the command.
The child_process.execSync() method is generally identical to
child_process.exec() with the exception that the method will not
return until the child process has fully closed. When a timeout has
been encountered and killSignal is sent, the method won’t return
until the process has completely exited. If the child process
intercepts and handles the SIGTERM signal and doesn’t exit, the parent
process will wait until the child process has exited.
If the process times out or has a non-zero exit code, this method will
throw. The Error object will contain the entire result from
child_process.spawnSync().
Never pass unsanitized user input to this function. Any
input containing shell metacharacters may be used to
trigger arbitrary command execution.
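For example, a minimal sketch, assuming a Unix-like shell, that
captures the command's stdout as a string:

const { execSync } = require('node:child_process');
// Shell features such as pipes are available because the command
// runs inside '/bin/sh'.
const count = execSync('ls -1 | wc -l', { encoding: 'utf8' });
console.log(`entries in cwd: ${count.trim()}`);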
child_process.spawnSync(command[, args][, options])
command {string} The command to run.
args {string[]} List of string arguments.
options {Object}
cwd {string|URL} Current working directory of the child
process.
input {string|Buffer|TypedArray|DataView} The value which
will be passed as stdin to the spawned process. If stdio[0] is
set to 'pipe', supplying this value will override stdio[0].
argv0 {string} Explicitly set the value of argv[0] sent to the
child process. This will be set to command if not specified.
stdio {string|Array} Child’s stdio configuration. Default:
'pipe'.
env {Object} Environment key-value pairs. Default:
process.env.
uid {number} Sets the user identity of the process (see
setuid(2)).
gid {number} Sets the group identity of the process (see
setgid(2)).
timeout {number} In milliseconds the maximum amount of
time the process is allowed to run. Default: undefined.
killSignal {string|integer} The signal value to be used when
the spawned process will be killed. Default: 'SIGTERM'.
maxBuffer {number} Largest amount of data in bytes allowed
on stdout or stderr. If exceeded, the child process is
terminated and any output is truncated. See caveat at
maxBuffer and Unicode. Default: 1024 * 1024.
encoding {string} The encoding used for all stdio inputs and
outputs. Default: 'buffer'.
shell {boolean|string} If true, runs command inside of a shell.
Uses '/bin/sh' on Unix, and process.env.ComSpec on
Windows. A different shell can be specified as a string. See
Shell requirements and Default Windows shell. Default:
false (no shell).
windowsVerbatimArguments {boolean} No quoting or escaping of
arguments is done on Windows. Ignored on Unix. This is set
to true automatically when shell is specified and is CMD.
Default: false.
windowsHide {boolean} Hide the subprocess console window
that would normally be created on Windows systems.
Default: false.
Returns: {Object}
pid {number} Pid of the child process.
output {Array} Array of results from stdio output.
stdout {Buffer|string} The contents of output[1].
stderr {Buffer|string} The contents of output[2].
status {number|null} The exit code of the subprocess, or null
if the subprocess terminated due to a signal.
signal {string|null} The signal used to kill the subprocess, or
null if the subprocess did not terminate due to a signal.
error {Error} The error object if the child process failed or
timed out.
The child_process.spawnSync() method is generally identical to
child_process.spawn() with the exception that the function will not
return until the child process has fully closed. When a timeout has
been encountered and killSignal is sent, the method won’t return
until the process has completely exited. If the process intercepts and
handles the SIGTERM signal and doesn’t exit, the parent process will
wait until the child process has exited.
If the shell option is enabled, do not pass unsanitized user
input to this function. Any input containing shell
metacharacters may be used to trigger arbitrary command
execution.
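A small sketch of inspecting the returned result object:

const { spawnSync } = require('node:child_process');
const result = spawnSync('node', ['--version'], { encoding: 'utf8' });
if (result.error) {
  // Spawning itself failed, e.g. the executable was not found.
  console.error('failed to spawn:', result.error.message);
} else {
  console.log(`status: ${result.status}, stdout: ${result.stdout.trim()}`);
}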
Class: ChildProcess
Extends: {EventEmitter}
Instances of the ChildProcess represent spawned child processes.
Instances of ChildProcess are not intended to be created directly.
Rather, use the child_process.spawn(), child_process.exec(),
child_process.execFile(), or child_process.fork() methods to create
instances of ChildProcess.
Event: 'close'
code {number} The exit code if the child exited on its own.
signal {string} The signal by which the child process was
terminated.
The 'close' event is emitted after a process has ended and the stdio
streams of a child process have been closed. This is distinct from the
'exit' event, since multiple processes might share the same stdio
streams. The 'close' event will always emit after 'exit' was already
emitted, or 'error' if the child failed to spawn.
const { spawn } = require('node:child_process');
const ls = spawn('ls', ['-lh', '/usr']);
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.on('close', (code) => {
console.log(`child process close all stdio with code ${code}`);
});
ls.on('exit', (code) => {
console.log(`child process exited with code ${code}`);
});
Event: 'disconnect'
The 'disconnect' event is emitted after calling the
subprocess.disconnect() method in parent process or
process.disconnect() in child process. After disconnecting it is no
longer possible to send or receive messages, and the
subprocess.connected property is false.
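A minimal sketch (sub.js is a hypothetical placeholder for any
Node.js module):

const { fork } = require('node:child_process');
const child = fork('sub.js');
child.on('disconnect', () => {
  // subprocess.connected is now false; no further messages can be sent.
  console.log('IPC channel closed; connected =', child.connected);
});
child.disconnect();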
Event: 'error'
err {Error} The error.
The 'error' event is emitted whenever:
The process could not be spawned.
The process could not be killed.
Sending a message to the child process failed.
The child process was aborted via the signal option.
The 'exit' event may or may not fire after an error has occurred.
When listening to both the 'exit' and 'error' events, guard against
accidentally invoking handler functions multiple times.
See also subprocess.kill() and subprocess.send().
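One way to apply that guard; a minimal sketch:

const { spawn } = require('node:child_process');
const child = spawn('bad_command');
let settled = false;
child.on('error', (err) => {
  if (settled) return;
  settled = true;
  console.error('failed to spawn:', err.message);
});
child.on('exit', (code, signal) => {
  if (settled) return;
  settled = true;
  console.log(`exited with code ${code}, signal ${signal}`);
});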
Event: 'exit'
code {number} The exit code if the child exited on its own.
signal {string} The signal by which the child process was
terminated.
The 'exit' event is emitted after the child process ends. If the
process exited, code is the final exit code of the process, otherwise
null. If the process terminated due to receipt of a signal, signal is the
string name of the signal, otherwise null. One of the two will always
be non-null.
When the 'exit' event is triggered, child process stdio streams might
still be open.
Node.js establishes signal handlers for SIGINT and SIGTERM and
Node.js processes will not terminate immediately due to receipt of
those signals. Rather, Node.js will perform a sequence of cleanup
actions and then will re-raise the handled signal.
See waitpid(2).
Event: 'message'
message {Object} A parsed JSON object or primitive value.
sendHandle {Handle} A net.Socket or net.Server object, or
undefined.
The 'message' event is triggered when a child process uses
process.send() to send messages.
The message goes through serialization and parsing. The resulting
message might not be the same as what is originally sent.
If the serialization option was set to 'advanced' when spawning
the child process, the message argument can contain data that JSON
is not able to represent. See Advanced serialization for more details.
Event: 'spawn'
The 'spawn' event is emitted once the child process has spawned
successfully. If the child process does not spawn successfully, the
'spawn' event is not emitted and the 'error' event is emitted instead.
If emitted, the 'spawn' event comes before all other events and before
any data is received via stdout or stderr.
The 'spawn' event will fire regardless of whether an error occurs
within the spawned process. For example, if bash some-command
spawns successfully, the 'spawn' event will fire, though bash may fail
to spawn some-command. This caveat also applies when using { shell:
true }.
subprocess.channel
{Object} A pipe representing the IPC channel to the child
process.
The subprocess.channel property is a reference to the child’s IPC
channel. If no IPC channel exists, this property is undefined.
subprocess.channel.ref()
This method makes the IPC channel keep the event loop of the
parent process running if .unref() has been called before.
subprocess.channel.unref()
This method makes the IPC channel not keep the event loop of the
parent process running, and lets it finish even while the channel is
open.
subprocess.connected
{boolean} Set to false after subprocess.disconnect() is called.
The subprocess.connected property indicates whether it is still
possible to send and receive messages from a child process. When
subprocess.connected is false, it is no longer possible to send or
receive messages.
subprocess.disconnect()
Closes the IPC channel between parent and child, allowing the child
to exit gracefully once there are no other connections keeping it alive.
After calling this method the subprocess.connected and
process.connected properties in both the parent and child
(respectively) will be set to false, and it will no longer be
possible to pass messages between the processes.
The 'disconnect' event will be emitted when there are no messages in
the process of being received. This will most often be triggered
immediately after calling subprocess.disconnect().
When the child process is a Node.js instance (e.g. spawned using
child_process.fork()), the process.disconnect() method can be
invoked within the child process to close the IPC channel as well.
subprocess.exitCode
{integer}
The subprocess.exitCode property indicates the exit code of the child
process. If the child process is still running, the field will be null.
subprocess.kill([signal])
signal {number|string}
Returns: {boolean}
The subprocess.kill() method sends a signal to the child process. If
no argument is given, the process will be sent the 'SIGTERM' signal.
See signal(7) for a list of available signals. This function returns true
if kill(2) succeeds, and false otherwise.
const { spawn } = require('node:child_process');
const grep = spawn('grep', ['ssh']);
grep.on('close', (code, signal) => {
console.log(
`child process terminated due to receipt of signal ${signal}`);
});
// Send SIGHUP to process.
grep.kill('SIGHUP');
The ChildProcess object may emit an 'error' event if the signal
cannot be delivered. Sending a signal to a child process that has
already exited is not an error but may have unforeseen
consequences. Specifically, if the process identifier (PID) has been
reassigned to another process, the signal will be delivered to that
process instead which can have unexpected results.
While the function is called kill, the signal delivered to the child
process may not actually terminate the process.
See kill(2) for reference.
On Windows, where POSIX signals do not exist, the signal argument
will be ignored, and the process will be killed forcefully and abruptly
(similar to 'SIGKILL'). See Signal Events for more details.
On Linux, child processes of child processes will not be terminated
when attempting to kill their parent. This is likely to happen when
running a new process in a shell or with the use of the shell option of
ChildProcess:
'use strict';
const { spawn } = require('node:child_process');
const subprocess = spawn(
'sh',
[
'-c',
`node -e "setInterval(() => {
console.log(process.pid, 'is alive')
}, 500);"`,
], {
stdio: ['inherit', 'inherit', 'inherit'],
},
);
setTimeout(() => {
subprocess.kill(); // Does not terminate the Node.js process in the shell.
}, 2000);
subprocess[Symbol.dispose]()
Stability: 1 - Experimental
Calls subprocess.kill() with 'SIGTERM'.
subprocess.killed
{boolean} Set to true after subprocess.kill() is used to
successfully send a signal to the child process.
The subprocess.killed property indicates whether the child process
successfully received a signal from subprocess.kill(). The killed
property does not indicate that the child process has been
terminated.
subprocess.pid
{integer|undefined}
Returns the process identifier (PID) of the child process. If the child
process fails to spawn due to errors, then the value is undefined and
error is emitted.
const { spawn } = require('node:child_process');
const grep = spawn('grep', ['ssh']);
console.log(`Spawned child pid: ${grep.pid}`);
grep.stdin.end();
subprocess.ref()
Calling subprocess.ref() after making a call to subprocess.unref() will
restore the removed reference count for the child process, forcing the
parent to wait for the child to exit before exiting itself.
const { spawn } = require('node:child_process');
const subprocess = spawn(process.argv[0], ['child_program.js'], {
detached: true,
stdio: 'ignore',
});
subprocess.unref();
subprocess.ref();
subprocess.send(message[, sendHandle[, options]][, callback])
message {Object}
sendHandle {Handle}
options {Object} The options argument, if present, is an object
used to parameterize the sending of certain types of handles.
options supports the following properties:
keepOpen {boolean} A value that can be used when passing
instances of net.Socket. When true, the socket is kept open in
the sending process. Default: false.
callback {Function}
Returns: {boolean}
When an IPC channel has been established between the parent and
child (i.e. when using child_process.fork()), the subprocess.send()
method can be used to send messages to the child process. When the
child process is a Node.js instance, these messages can be received
via the 'message' event.
The message goes through serialization and parsing. The resulting
message might not be the same as what is originally sent.
For example, in the parent script:
const cp = require('node:child_process');
const n = cp.fork(`${__dirname}/sub.js`);
n.on('message', (m) => {
console.log('PARENT got message:', m);
});
// Causes the child to print: CHILD got message: { hello: 'world' }
n.send({ hello: 'world' });
And then the child script, 'sub.js' might look like this:
process.on('message', (m) => {
console.log('CHILD got message:', m);
});
// Causes the parent to print: PARENT got message: { foo: 'bar', baz: NaN }
process.send({ foo: 'bar', baz: NaN });
Child Node.js processes will have a process.send() method of their
own that allows the child to send messages back to the parent.
There is a special case when sending a {cmd: 'NODE_foo'} message.
Messages containing a NODE_ prefix in the cmd property are reserved
for use within Node.js core and will not be emitted in the child’s
'message' event. Rather, such messages are emitted using the
'internalMessage' event and are consumed internally by Node.js.
Applications should avoid using such messages or listening for
'internalMessage' events as it is subject to change without notice.
The optional sendHandle argument that may be passed to
subprocess.send() is for passing a TCP server or socket object to the
child process. The child will receive the object as the second
argument passed to the callback function registered on the 'message'
event. Any data that is received and buffered in the socket will not be
sent to the child.
The optional callback is a function that is invoked after the message
is sent but before the child may have received it. The function is
called with a single argument: null on success, or an Error object on
failure.
If no callback function is provided and the message cannot be sent,
an 'error' event will be emitted by the ChildProcess object. This can
happen, for instance, when the child process has already exited.
subprocess.send() will return false if the channel has closed or when
the backlog of unsent messages exceeds a threshold that makes it
unwise to send more. Otherwise, the method returns true. The
callback function can be used to implement flow control.
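A sketch of using the return value and callback for simple flow
control (sub.js is a hypothetical placeholder):

const { fork } = require('node:child_process');
const child = fork('sub.js');

function trySend(message) {
  const ok = child.send(message, (err) => {
    if (err) console.error('send failed:', err.message);
  });
  if (!ok) {
    // The backlog is full or the channel closed; pause producing
    // messages until the pressure clears.
  }
  return ok;
}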
Example: sending a server object
The sendHandle argument can be used, for instance, to pass the
handle of a TCP server object to the child process as illustrated in the
example below:
const subprocess = require('node:child_process').fork('subprocess.js');
// Open up the server object and send the handle.
const server = require('node:net').createServer();
server.on('connection', (socket) => {
socket.end('handled by parent');
});
server.listen(1337, () => {
subprocess.send('server', server);
});
The child would then receive the server object as:
process.on('message', (m, server) => {
if (m === 'server') {
server.on('connection', (socket) => {
socket.end('handled by child');
});
}
});
Once the server is shared between the parent and child, some
connections can be handled by the parent and some by the child.
While the example above uses a server created using the node:net
module, node:dgram module servers use exactly the same workflow
with the exceptions of listening on a 'message' event instead of
'connection' and using server.bind() instead of server.listen(). This
is, however, only supported on Unix platforms.
Example: sending a socket object
Similarly, the sendHandle argument can be used to pass the handle of
a socket to the child process. The example below spawns two
children that each handle connections with “normal” or “special”
priority:
const { fork } = require('node:child_process');
const normal = fork('subprocess.js', ['normal']);
const special = fork('subprocess.js', ['special']);
// Open up the server and send sockets to child. Use pauseOnConnect to prevent
// the sockets from being read before they are sent to the child process.
const server = require('node:net').createServer({ pauseOnConnect: true });
server.on('connection', (socket) => {
// If this is special priority...
if (socket.remoteAddress === '74.125.127.100') {
special.send('socket', socket);
return;
}
// This is normal priority.
normal.send('socket', socket);
});
server.listen(1337);
The subprocess.js would receive the socket handle as the second
argument passed to the event callback function:
process.on('message', (m, socket) => {
if (m === 'socket') {
if (socket) {
// Check that the client socket exists.
// It is possible for the socket to be closed between the time it is
// sent and the time it is received in the child process.
socket.end(`Request handled with ${process.argv[2]} priority`);
}
}
});
Do not use .maxConnections on a socket that has been passed to a
subprocess. The parent cannot track when the socket is destroyed.
Any 'message' handlers in the subprocess should verify that socket
exists, as the connection may have been closed during the time it
takes to send the connection to the child.
subprocess.signalCode
{string|null}
The subprocess.signalCode property indicates the signal received by
the child process if any, else null.
subprocess.spawnargs
{Array}
The subprocess.spawnargs property represents the full list of
command-line arguments the child process was launched with.
subprocess.spawnfile
{string}
The subprocess.spawnfile property indicates the executable file name
of the child process that is launched.
For child_process.fork(), its value will be equal to process.execPath.
For child_process.spawn(), its value will be the name of the
executable file. For child_process.exec(), its value will be the name of
the shell in which the child process is launched.
subprocess.stderr
{stream.Readable|null|undefined}
A Readable Stream that represents the child process’s stderr.
If the child was spawned with stdio[2] set to anything other than
'pipe', then this will be null.
subprocess.stderr is an alias for subprocess.stdio[2]. Both properties
will refer to the same value.
The subprocess.stderr property can be null or undefined if the child
process could not be successfully spawned.
subprocess.stdin
{stream.Writable|null|undefined}
A Writable Stream that represents the child process’s stdin.
If a child process waits to read all of its input, the child will not
continue until this stream has been closed via end().
If the child was spawned with stdio[0] set to anything other than
'pipe', then this will be null.
subprocess.stdin is an alias for subprocess.stdio[0]. Both properties
will refer to the same value.
The subprocess.stdin property can be null or undefined if the child
process could not be successfully spawned.
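For example, a minimal sketch assuming a Unix-like system where cat
is available:

const { spawn } = require('node:child_process');
const cat = spawn('cat'); // cat echoes its stdin back on stdout.
cat.stdout.on('data', (data) => console.log(`echo: ${data}`));
cat.stdin.write('hello\n');
cat.stdin.end(); // Without this, cat would keep waiting for more input.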
subprocess.stdio
{Array}
A sparse array of pipes to the child process, corresponding with
positions in the stdio option passed to child_process.spawn() that
have been set to the value 'pipe'. subprocess.stdio[0],
subprocess.stdio[1], and subprocess.stdio[2] are also available as
subprocess.stdin, subprocess.stdout, and subprocess.stderr,
respectively.
In the following example, only the child's fd 1 (stdout) is configured
as a pipe, so only the parent's subprocess.stdio[1] is a stream; all
other values in the array are null.
const assert = require('node:assert');
const fs = require('node:fs');
const child_process = require('node:child_process');
const subprocess = child_process.spawn('ls', {
stdio: [
0, // Use parent's stdin for child.
'pipe', // Pipe child's stdout to parent.
fs.openSync('err.out', 'w'), // Direct child's stderr to a file.
],
});
assert.strictEqual(subprocess.stdio[0], null);
assert.strictEqual(subprocess.stdio[0], subprocess.stdin);
assert(subprocess.stdout);
assert.strictEqual(subprocess.stdio[1], subprocess.stdout);
assert.strictEqual(subprocess.stdio[2], null);
assert.strictEqual(subprocess.stdio[2], subprocess.stderr);
The subprocess.stdio property can be undefined if the child process
could not be successfully spawned.
subprocess.stdout
{stream.Readable|null|undefined}
A Readable Stream that represents the child process’s stdout.
If the child was spawned with stdio[1] set to anything other than
'pipe', then this will be null.
subprocess.stdout is an alias for subprocess.stdio[1]. Both properties
will refer to the same value.
const { spawn } = require('node:child_process');
const subprocess = spawn('ls');
subprocess.stdout.on('data', (data) => {
console.log(`Received chunk ${data}`);
});
The subprocess.stdout property can be null or undefined if the child
process could not be successfully spawned.
subprocess.unref()
By default, the parent will wait for the detached child to exit. To
prevent the parent from waiting for a given subprocess to exit, use the
subprocess.unref() method. Doing so will cause the parent’s event
loop to not include the child in its reference count, allowing the
parent to exit independently of the child, unless there is an
established IPC channel between the child and the parent.
const { spawn } = require('node:child_process');
const subprocess = spawn(process.argv[0], ['child_program.js'], {
detached: true,
stdio: 'ignore',
});
subprocess.unref();
maxBuffer and Unicode
The maxBuffer option specifies the largest number of bytes allowed on
stdout or stderr. If this value is exceeded, then the child process is
terminated. This impacts output that includes multibyte character
encodings such as UTF-8 or UTF-16. For instance, console.log('中文测
试') will send 13 UTF-8 encoded bytes to stdout although there are
only 4 characters.
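The byte count can be verified directly:

// Four characters, but 12 UTF-8 encoded bytes; console.log() then
// appends a newline, for 13 bytes written to stdout in total.
console.log(Buffer.byteLength('中文测试', 'utf8')); // Prints: 12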
Shell requirements
The shell should understand the -c switch. If the shell is 'cmd.exe', it
should understand the /d /s /c switches and command-line parsing
should be compatible.
Default Windows shell
Although Microsoft specifies %COMSPEC% must contain the path to
'cmd.exe' in the root environment, child processes are not always
subject to the same requirement. Thus, in child_process functions
where a shell can be spawned, 'cmd.exe' is used as a fallback if
process.env.ComSpec is unavailable.
Advanced serialization
Child processes support a serialization mechanism for IPC that is
based on the serialization API of the node:v8 module, based on the
HTML structured clone algorithm. This is generally more powerful
and supports more built-in JavaScript object types, such as BigInt,
Map and Set, ArrayBuffer and TypedArray, Buffer, Error, RegExp etc.
However, this format is not a full superset of JSON, and
e.g. properties set on objects of such built-in types will not be passed
on through the serialization step. Additionally, performance may not
be equivalent to that of JSON, depending on the structure of the
passed data. Therefore, this feature requires opting in by setting the
serialization option to 'advanced' when calling child_process.spawn()
or child_process.fork().
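A minimal sketch of opting in (echo.js is a hypothetical placeholder
for a child module that listens for 'message' events):

const { fork } = require('node:child_process');
const child = fork('echo.js', { serialization: 'advanced' });
// With 'advanced' serialization, structured types such as Map and
// BigInt survive the trip; with the default 'json' they would not.
child.send(new Map([['answer', 42n]]));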
Console
Stability: 2 - Stable
The node:console module provides a simple debugging console that is
similar to the JavaScript console mechanism provided by web
browsers.
The module exports two specific components:
A Console class with methods such as console.log(),
console.error(), and console.warn() that can be used to write to
any Node.js stream.
A global console instance configured to write to process.stdout
and process.stderr. The global console can be used without
calling require('node:console').
Warning: The global console object’s methods are neither
consistently synchronous like the browser APIs they resemble, nor
are they consistently asynchronous like all other Node.js streams.
See the note on process I/O for more information.
Example using the global console:
console.log('hello world');
// Prints: hello world, to stdout
console.log('hello %s', 'world');
// Prints: hello world, to stdout
console.error(new Error('Whoops, something bad happened'));
// Prints error message and stack trace to stderr:
// Error: Whoops, something bad happened
// at [eval]:5:15
// at Script.runInThisContext (node:vm:132:18)
// at Object.runInThisContext (node:vm:309:38)
// at node:internal/process/execution:77:19
// at [eval]-wrapper:6:22
// at evalScript (node:internal/process/execution:76:60)
// at node:internal/main/eval_string:23:3
const name = 'Will Robinson';
console.warn(`Danger ${name}! Danger!`);
// Prints: Danger Will Robinson! Danger!, to stderr
Example using the Console class:
const out = getStreamSomehow();
const err = getStreamSomehow();
const myConsole = new console.Console(out, err);
myConsole.log('hello world');
// Prints: hello world, to out
myConsole.log('hello %s', 'world');
// Prints: hello world, to out
myConsole.error(new Error('Whoops, something bad happened'));
// Prints: [Error: Whoops, something bad happened], to err
const name = 'Will Robinson';
myConsole.warn(`Danger ${name}! Danger!`);
// Prints: Danger Will Robinson! Danger!, to err
Class: Console
The Console class can be used to create a simple logger with
configurable output streams and can be accessed using either
require('node:console').Console or console.Console (or their
destructured counterparts):
const { Console } = require('node:console');
// or
const { Console } = console;
new Console(stdout[, stderr][, ignoreErrors])
new Console(options)
options {Object}
stdout {stream.Writable}
stderr {stream.Writable}
ignoreErrors {boolean} Ignore errors when writing to the
underlying streams. Default: true.
colorMode {boolean|string} Set color support for this Console
instance. Setting to true enables coloring while inspecting
values. Setting to false disables coloring while inspecting
values. Setting to 'auto' makes color support depend on the
value of the isTTY property and the value returned by
getColorDepth() on the respective stream. This option cannot
be used if inspectOptions.colors is set as well. Default:
'auto'.
inspectOptions {Object} Specifies options that are passed
along to util.inspect().
groupIndentation {number} Set group indentation. Default:
2.
Creates a new Console with one or two writable stream instances.
stdout is a writable stream to print log or info output. stderr is used
for warning or error output. If stderr is not provided, stdout is used
for stderr.
const fs = require('node:fs');
const output = fs.createWriteStream('./stdout.log');
const errorOutput = fs.createWriteStream('./stderr.log');
// Custom simple logger
const logger = new Console({ stdout: output, stderr: errorOutput });
// use it like console
const count = 5;
logger.log('count: %d', count);
// In stdout.log: count 5
The global console is a special Console whose output is sent to
process.stdout and process.stderr. It is equivalent to calling:
new Console({ stdout: process.stdout, stderr: process.stderr });
console.assert(value[, ...message])
value {any} The value tested for being truthy.
...message {any} All arguments besides value are used as error
message.
console.assert() writes a message if value is falsy or omitted. It only
writes a message and does not otherwise affect execution. The output
always starts with "Assertion failed". If provided, message is
formatted using util.format().
If value is truthy, nothing happens.
console.assert(true, 'does nothing');
console.assert(false, 'Whoops %s work', 'didn\'t');
// Assertion failed: Whoops didn't work
console.assert();
// Assertion failed
console.clear()
When stdout is a TTY, calling console.clear() will attempt to clear
the TTY. When stdout is not a TTY, this method does nothing.
The specific operation of console.clear() can vary across operating
systems and terminal types. For most Linux operating systems,
console.clear() operates similarly to the clear shell command. On
Windows, console.clear() will clear only the output in the current
terminal viewport for the Node.js binary.
console.count([label])
label {string} The display label for the counter. Default:
'default'.
Maintains an internal counter specific to label and outputs to stdout
the number of times console.count() has been called with the given
label.
> console.count()
default: 1
undefined
> console.count('default')
default: 2
undefined
> console.count('abc')
abc: 1
undefined
> console.count('xyz')
xyz: 1
undefined
> console.count('abc')
abc: 2
undefined
> console.count()
default: 3
undefined
>
console.countReset([label])
label {string} The display label for the counter. Default:
'default'.
Resets the internal counter specific to label.
> console.count('abc');
abc: 1
undefined
> console.countReset('abc');
undefined
> console.count('abc');
abc: 1
undefined
>
console.debug(data[, ...args])
data {any}
...args {any}
The console.debug() function is an alias for console.log().
console.dir(obj[, options])
obj {any}
options {Object}
showHidden {boolean} If true then the object's non-enumerable
and symbol properties will be shown too.
Default: false.
depth {number} Tells util.inspect() how many times to
recurse while formatting the object. This is useful for
inspecting large complicated objects. To make it recurse
indefinitely, pass null. Default: 2.
colors {boolean} If true, then the output will be styled with
ANSI color codes. Colors are customizable; see customizing
util.inspect() colors. Default: false.
Uses util.inspect() on obj and prints the resulting string to stdout.
This function bypasses any custom inspect() function defined on obj.
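For example:

// With depth: 1, objects nested more than one level deep are elided.
console.dir({ nested: { deeper: { value: 42 } } }, { depth: 1 });
// Prints: { nested: { deeper: [Object] } }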
console.dirxml(...data)
...data {any}
This method calls console.log() passing it the arguments received.
This method does not produce any XML formatting.
console.error([data][, ...args])
data {any}
...args {any}
Prints to stderr with newline. Multiple arguments can be passed,
with the first used as the primary message and all additional used as
substitution values similar to printf(3) (the arguments are all passed
to util.format()).
const code = 5;
console.error('error #%d', code);
// Prints: error #5, to stderr
console.error('error', code);
// Prints: error 5, to stderr
If formatting elements (e.g. %d) are not found in the first string then
util.inspect() is called on each argument and the resulting string
values are concatenated. See util.format() for more information.
console.group([...label])
...label {any}
Increases indentation of subsequent lines by spaces for
groupIndentation length.
If one or more labels are provided, those are printed first without the
additional indentation.
console.groupCollapsed()
An alias for console.group().
console.groupEnd()
Decreases indentation of subsequent lines by spaces for
groupIndentation length.
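For example:

console.log('outer');
console.group('details');
console.log('indented by groupIndentation (two spaces by default)');
console.groupEnd();
console.log('outer again');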
console.info([data][, ...args])
data {any}
...args {any}
The console.info() function is an alias for console.log().
console.log([data][, ...args])
data {any}
...args {any}
Prints to stdout with newline. Multiple arguments can be passed,
with the first used as the primary message and all additional used as
substitution values similar to printf(3) (the arguments are all passed
to util.format()).
const count = 5;
console.log('count: %d', count);
// Prints: count: 5, to stdout
console.log('count:', count);
// Prints: count: 5, to stdout
See util.format() for more information.
console.table(tabularData[, properties])
tabularData {any}
properties {string[]} Alternate properties for constructing the
table.
Try to construct a table with the columns of the properties of
tabularData (or use properties) and rows of tabularData and log it.
Falls back to just logging the argument if it can’t be parsed as
tabular.
// These can't be parsed as tabular data
console.table(Symbol());
// Symbol()
console.table(undefined);
// undefined
console.table([{ a: 1, b: 'Y' }, { a: 'Z', b: 2 }]);
// ┌─────────┬─────┬─────┐
// │ (index) │ a │ b │
// ├─────────┼─────┼─────┤
// │ 0 │ 1 │ 'Y' │
// │ 1 │ 'Z' │ 2 │
// └─────────┴─────┴─────┘
console.table([{ a: 1, b: 'Y' }, { a: 'Z', b: 2 }], ['a']);
// ┌─────────┬─────┐
// │ (index) │ a │
// ├─────────┼─────┤
// │ 0 │ 1 │
// │ 1 │ 'Z' │
// └─────────┴─────┘
console.time([label])
label {string} Default: 'default'
Starts a timer that can be used to compute the duration of an
operation. Timers are identified by a unique label. Use the same
label when calling console.timeEnd() to stop the timer and output the
elapsed time in suitable time units to stdout. For example, if the
elapsed time is 3869ms, console.timeEnd() displays “3.869s”.
console.timeEnd([label])
label {string} Default: 'default'
Stops a timer that was previously started by calling console.time()
and prints the result to stdout:
console.time('bunch-of-stuff');
// Do a bunch of stuff.
console.timeEnd('bunch-of-stuff');
// Prints: bunch-of-stuff: 225.438ms
console.timeLog([label][, ...data])
label {string} Default: 'default'
...data {any}
For a timer that was previously started by calling console.time(),
prints the elapsed time and other data arguments to stdout:
console.time('process');
const value = expensiveProcess1(); // Returns 42
console.timeLog('process', value);
// Prints "process: 365.227ms 42".
doExpensiveProcess2(value);
console.timeEnd('process');
console.trace([message][, ...args])
message {any}
...args {any}
Prints to stderr the string 'Trace: ', followed by the util.format()
formatted message and stack trace to the current position in the
code.
console.trace('Show me');
// Prints: (stack trace will vary based on where trace is called)
// Trace: Show me
// at repl:2:9
// at REPLServer.defaultEval (repl.js:248:27)
// at bound (domain.js:287:14)
// at REPLServer.runBound [as eval] (domain.js:300:12)
// at REPLServer.<anonymous> (repl.js:412:12)
// at emitOne (events.js:82:20)
// at REPLServer.emit (events.js:169:7)
// at REPLServer.Interface._onLine (readline.js:210:10)
// at REPLServer.Interface._line (readline.js:549:8)
// at REPLServer.Interface._ttyWrite (readline.js:826:14)
console.warn([data][, ...args])
data {any}
...args {any}
The console.warn() function is an alias for console.error().
Inspector only methods
The following methods are exposed by the V8 engine in the general
API but do not display anything unless used in conjunction with the
inspector (--inspect flag).
console.profile([label])
label {string}
This method does not display anything unless used in the inspector.
The console.profile() method starts a JavaScript CPU profile with
an optional label until console.profileEnd() is called. The profile is
then added to the Profile panel of the inspector.
console.profile('MyLabel');
// Some code
console.profileEnd('MyLabel');
// Adds the profile 'MyLabel' to the Profiles panel of the inspector
console.profileEnd([label])
label {string}
This method does not display anything unless used in the inspector.
Stops the current JavaScript CPU profiling session if one has been
started and prints the report to the Profiles panel of the inspector.
See console.profile() for an example.
If this method is called without a label, the most recently started
profile is stopped.
console.timeStamp([label])
label {string}
This method does not display anything unless used in the inspector.
The console.timeStamp() method adds an event with the label 'label'
to the Timeline panel of the inspector.
Cluster
Stability: 2 - Stable
Clusters of Node.js processes can be used to run multiple instances
of Node.js that can distribute workloads among their application
threads. When process isolation is not needed, use the worker_threads
module instead, which allows running multiple application threads
within a single Node.js instance.
The cluster module allows easy creation of child processes that all
share server ports.
import cluster from 'node:cluster';
import http from 'node:http';
import { availableParallelism } from 'node:os';
import process from 'node:process';
const numCPUs = availableParallelism();
if (cluster.isPrimary) {
console.log(`Primary ${process.pid} is running`);
// Fork workers.
for (let i = 0; i < numCPUs; i++) {
cluster.fork();
}
cluster.on('exit', (worker, code, signal) => {
console.log(`worker ${worker.process.pid} died`);
});
} else {
// Workers can share any TCP connection
// In this case it is an HTTP server
http.createServer((req, res) => {
res.writeHead(200);
res.end('hello world\n');
}).listen(8000);
console.log(`Worker ${process.pid} started`);
}
Running Node.js will now share port 8000 between the workers:
$ node server.js
Primary 3596 is running
Worker 4324 started
Worker 4520 started
Worker 6056 started
Worker 5644 started
On Windows, it is not yet possible to set up a named pipe server in a
worker.
How it works
The worker processes are spawned using the child_process.fork()
method, so that they can communicate with the parent via IPC and
pass server handles back and forth.
The cluster module supports two methods of distributing incoming
connections.
The first one (and the default one on all platforms except Windows)
is the round-robin approach, where the primary process listens on a
port, accepts new connections and distributes them across the
workers in a round-robin fashion, with some built-in smarts to avoid
overloading a worker process.
The second approach is where the primary process creates the listen
socket and sends it to interested workers. The workers then accept
incoming connections directly.
The second approach should, in theory, give the best performance. In
practice however, distribution tends to be very unbalanced due to
operating system scheduler vagaries. Loads have been observed
where over 70% of all connections ended up in just two processes,
out of a total of eight.
Because server.listen() hands off most of the work to the primary
process, there are three cases where the behavior between a normal
Node.js process and a cluster worker differs:
1. server.listen({fd: 7}) Because the message is passed to the
primary, file descriptor 7 in the parent will be listened on, and
the handle passed to the worker, rather than listening to the
worker’s idea of what the number 7 file descriptor references.
2. server.listen(handle) Listening on handles explicitly will cause
the worker to use the supplied handle, rather than talk to the
primary process.
3. server.listen(0) Normally, this will cause servers to listen on a
random port. However, in a cluster, each worker will receive the
same “random” port each time they do listen(0). In essence, the
port is random the first time, but predictable thereafter. To listen
on a unique port, generate a port number based on the cluster
worker ID.
Node.js does not provide routing logic. It is therefore important to
design an application such that it does not rely too heavily on
in-memory data objects for things like sessions and login.
Because workers are all separate processes, they can be killed or
re-spawned depending on a program's needs, without affecting other
workers. As long as there are some workers still alive, the server will
continue to accept connections. If no workers are alive, existing
connections will be dropped and new connections will be refused.
Node.js does not automatically manage the number of workers,
however. It is the application’s responsibility to manage the worker
pool based on its own needs.
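A common pattern is to fork a replacement whenever a worker dies;
a minimal sketch:

import cluster from 'node:cluster';

if (cluster.isPrimary) {
  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died; forking a replacement`);
    cluster.fork();
  });
}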
Although a primary use case for the node:cluster module is
networking, it can also be used for other use cases requiring worker
processes.
Class: Worker
Extends: {EventEmitter}
A Worker object contains all public information and methods about a
worker. In the primary it can be obtained using cluster.workers. In a
worker it can be obtained using cluster.worker.
Event: 'disconnect'
Similar to the cluster.on('disconnect') event, but specific to this
worker.
cluster.fork().on('disconnect', () => {
// Worker has disconnected
});
Event: 'error'
This event is the same as the one provided by child_process.fork().
Within a worker, process.on('error') may also be used.
Event: 'exit'
code {number} The exit code, if it exited normally.
signal {string} The name of the signal (e.g. 'SIGHUP') that caused
the process to be killed.
Similar to the cluster.on('exit') event, but specific to this worker.
import cluster from 'node:cluster';
if (cluster.isPrimary) {
const worker = cluster.fork();
worker.on('exit', (code, signal) => {
if (signal) {
console.log(`worker was killed by signal: ${signal}`);
} else if (code !== 0) {
console.log(`worker exited with error code: ${code}`);
} else {
console.log('worker success!');
}
});
}
Event: 'listening'
address {Object}
Similar to the cluster.on('listening') event, but specific to this
worker.
cluster.fork().on('listening', (address) => {
// Worker is listening
});
It is not emitted in the worker.
Event: 'message'
message {Object}
handle {undefined|Object}
Similar to the 'message' event of cluster, but specific to this worker.
Within a worker, process.on('message') may also be used.
See process event: 'message'.
Here is an example using the message system. It keeps a count in the
primary process of the number of HTTP requests received by the
workers:
import cluster from 'node:cluster';
import http from 'node:http';
import { availableParallelism } from 'node:os';
import process from 'node:process';
if (cluster.isPrimary) {
// Keep track of http requests
let numReqs = 0;
setInterval(() => {
console.log(`numReqs = ${numReqs}`);
}, 1000);
// Count requests
function messageHandler(msg) {
if (msg.cmd && msg.cmd === 'notifyRequest') {
numReqs += 1;
}
}
// Start workers and listen for messages containing notifyRequest
const numCPUs = availableParallelism();
for (let i = 0; i < numCPUs; i++) {
cluster.fork();
}
for (const id in cluster.workers) {
cluster.workers[id].on('message', messageHandler);
}
} else {
// Worker processes have a http server.
http.Server((req, res) => {
res.writeHead(200);
res.end('hello world\n');
// Notify primary about the request
process.send({ cmd: 'notifyRequest' });
}).listen(8000);
}
const cluster = require('node:cluster');
const http = require('node:http');
const process = require('node:process');
if (cluster.isPrimary) {
// Keep track of http requests
let numReqs = 0;
setInterval(() => {
console.log(`numReqs = ${numReqs}`);
}, 1000);
// Count requests
function messageHandler(msg) {
if (msg.cmd && msg.cmd === 'notifyRequest') {
numReqs += 1;
}
}
// Start workers and listen for messages containing notifyRequest
const numCPUs = require('node:os').availableParallelism();
for (let i = 0; i < numCPUs; i++) {
cluster.fork();
}
for (const id in cluster.workers) {
cluster.workers[id].on('message', messageHandler);
}
} else {
// Worker processes have a http server.
http.Server((req, res) => {
res.writeHead(200);
res.end('hello world\n');
// Notify primary about the request
process.send({ cmd: 'notifyRequest' });
}).listen(8000);
}
Event: 'online'
Similar to the cluster.on('online') event, but specific to this worker.
cluster.fork().on('online', () => {
// Worker is online
});
It is not emitted in the worker.
worker.disconnect()
Returns: {cluster.Worker} A reference to worker.
In a worker, this function will close all servers, wait for the 'close'
event on those servers, and then disconnect the IPC channel.
In the primary, an internal message is sent to the worker causing it
to call .disconnect() on itself.
Causes .exitedAfterDisconnect to be set.
After a server is closed, it will no longer accept new connections, but
connections may be accepted by any other listening worker. Existing
connections will be allowed to close as usual. When no more
connections exist (see server.close()), the IPC channel to the worker
will close, allowing it to die gracefully.
The above applies only to server connections, client connections are
not automatically closed by workers, and disconnect does not wait
for them to close before exiting.
In a worker, process.disconnect exists, but it is not this function; it is
disconnect().
Because long living server connections may block workers from
disconnecting, it may be useful to send a message, so application
specific actions may be taken to close them. It also may be useful to
implement a timeout, killing a worker if the 'disconnect' event has
not been emitted after some time.
if (cluster.isPrimary) {
const worker = cluster.fork();
let timeout;
worker.on('listening', (address) => {
worker.send('shutdown');
worker.disconnect();
timeout = setTimeout(() => {
worker.kill();
}, 2000);
});
worker.on('disconnect', () => {
clearTimeout(timeout);
});
} else if (cluster.isWorker) {
const net = require('node:net');
const server = net.createServer((socket) => {
// Connections never end
});
server.listen(8000);
process.on('message', (msg) => {
if (msg === 'shutdown') {
// Initiate graceful close of any connections to server
}
});
}
worker.exitedAfterDisconnect
{boolean}
This property is true if the worker exited due to .disconnect(). If the
worker exited any other way, it is false. If the worker has not exited,
it is undefined.
The boolean worker.exitedAfterDisconnect allows distinguishing
between voluntary and accidental exits; the primary may choose not
to respawn a worker based on this value.
cluster.on('exit', (worker, code, signal) => {
if (worker.exitedAfterDisconnect === true) {
console.log('Oh, it was just voluntary – no need to worry');
}
});
// kill worker
worker.kill();
worker.id
{integer}
Each new worker is given its own unique id; this id is stored in the id property.
While a worker is alive, this is the key that indexes it in
cluster.workers.
worker.isConnected()
This function returns true if the worker is connected to its primary
via its IPC channel, false otherwise. A worker is connected to its
primary after it has been created. It is disconnected after the
'disconnect' event is emitted.
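For illustration, a minimal sketch that sends messages only while the IPC channel is connected:
import cluster from 'node:cluster';
if (cluster.isPrimary) {
  const worker = cluster.fork();
  const timer = setInterval(() => {
    if (worker.isConnected()) {
      worker.send('heartbeat');
    } else {
      clearInterval(timer);
    }
  }, 1000);
}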
worker.isDead()
This function returns true if the worker’s process has terminated
(either because of exiting or being signaled). Otherwise, it returns
false.
import cluster from 'node:cluster';
import http from 'node:http';
import { availableParallelism } from 'node:os';
import process from 'node:process';
const numCPUs = availableParallelism();
if (cluster.isPrimary) {
console.log(`Primary ${process.pid} is running`);
// Fork workers.
for (let i = 0; i < numCPUs; i++) {
cluster.fork();
}
cluster.on('fork', (worker) => {
console.log('worker is dead:', worker.isDead());
});
cluster.on('exit', (worker, code, signal) => {
console.log('worker is dead:', worker.isDead());
});
} else {
// Workers can share any TCP connection. In this case, it is an HTTP server.
http.createServer((req, res) => {
res.writeHead(200);
res.end(`Current process\n ${process.pid}`);
process.kill(process.pid);
}).listen(8000);
}
const cluster = require('node:cluster');
const http = require('node:http');
const numCPUs = require('node:os').availableParallelism();
const process = require('node:process');
if (cluster.isPrimary) {
console.log(`Primary ${process.pid} is running`);
// Fork workers.
for (let i = 0; i < numCPUs; i++) {
cluster.fork();
}
cluster.on('fork', (worker) => {
console.log('worker is dead:', worker.isDead());
});
cluster.on('exit', (worker, code, signal) => {
console.log('worker is dead:', worker.isDead());
});
} else {
// Workers can share any TCP connection. In this case, it is an HTTP server.
http.createServer((req, res) => {
res.writeHead(200);
res.end(`Current process\n ${process.pid}`);
process.kill(process.pid);
}).listen(8000);
}
worker.kill([signal])
signal {string} Name of the kill signal to send to the worker
process. Default: 'SIGTERM'
This function will kill the worker. In the primary worker, it does this
by disconnecting the worker.process, and once disconnected, killing
with signal. In the worker, it does it by killing the process with
signal.
The kill() function kills the worker process without waiting for a
graceful disconnect; it has the same behavior as
worker.process.kill().
This method is aliased as worker.destroy() for backwards
compatibility.
In a worker, process.kill() exists, but it is not this function; it is
kill().
worker.process
{ChildProcess}
All workers are created using child_process.fork(), the returned
object from this function is stored as .process. In a worker, the global
process is stored.
See: Child Process module.
Workers will call process.exit(0) if the 'disconnect' event occurs on
process and .exitedAfterDisconnect is not true. This protects against
accidental disconnection.
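For illustration, a minimal sketch reading the underlying ChildProcess from the primary:
import cluster from 'node:cluster';
if (cluster.isPrimary) {
  const worker = cluster.fork();
  // worker.process is the ChildProcess returned by child_process.fork().
  console.log(`forked worker pid: ${worker.process.pid}`);
}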
worker.send(message[, sendHandle[,
options]][, callback])
message {Object}
sendHandle {Handle}
options {Object} The options argument, if present, is an object
used to parameterize the sending of certain types of handles.
options supports the following properties:
keepOpen {boolean} A value that can be used when passing
instances of net.Socket. When true, the socket is kept open in
the sending process. Default: false.
callback {Function}
Returns: {boolean}
Send a message to a worker or primary, optionally with a handle.
In the primary, this sends a message to a specific worker. It is
identical to ChildProcess.send().
In a worker, this sends a message to the primary. It is identical to
process.send().
This example will echo back all messages from the primary:
if (cluster.isPrimary) {
const worker = cluster.fork();
worker.send('hi there');
} else if (cluster.isWorker) {
process.on('message', (msg) => {
process.send(msg);
});
}
Event: 'disconnect'
worker {cluster.Worker}
Emitted after the worker IPC channel has disconnected. This can
occur when a worker exits gracefully, is killed, or is disconnected
manually (such as with worker.disconnect()).
There may be a delay between the 'disconnect' and 'exit' events.
These events can be used to detect if the process is stuck in a cleanup
or if there are long-living connections.
cluster.on('disconnect', (worker) => {
console.log(`The worker #${worker.id} has disconnected`);
});
Event: 'exit'
worker {cluster.Worker}
code {number} The exit code, if it exited normally.
signal {string} The name of the signal (e.g. 'SIGHUP') that caused
the process to be killed.
When any of the workers die the cluster module will emit the 'exit'
event.
This can be used to restart the worker by calling .fork() again.
cluster.on('exit', (worker, code, signal) => {
console.log('worker %d died (%s). restarting...',
worker.process.pid, signal || code);
cluster.fork();
});
See child_process event: 'exit'.
Event: 'fork'
worker {cluster.Worker}
When a new worker is forked the cluster module will emit a 'fork'
event. This can be used to log worker activity, and create a custom
timeout.
const timeouts = [];
function errorMsg() {
console.error('Something must be wrong with the connection ...');
}
cluster.on('fork', (worker) => {
timeouts[worker.id] = setTimeout(errorMsg, 2000);
});
cluster.on('listening', (worker, address) => {
clearTimeout(timeouts[worker.id]);
});
cluster.on('exit', (worker, code, signal) => {
clearTimeout(timeouts[worker.id]);
errorMsg();
});
Event: 'listening'
worker {cluster.Worker}
address {Object}
After calling listen() from a worker, when the 'listening' event is
emitted on the server, a 'listening' event will also be emitted on
cluster in the primary.
The event handler is executed with two arguments: the worker
contains the worker object, and the address object contains the
following connection properties: address, port, and addressType. This
is very useful if the worker is listening on more than one address.
cluster.on('listening', (worker, address) => {
console.log(
`A worker is now connected to ${address.address}:${address.port}`);
});
The addressType is one of:
4 (TCPv4)
6 (TCPv6)
-1 (Unix domain socket)
'udp4' or 'udp6' (UDPv4 or UDPv6)
Event: 'message'
worker {cluster.Worker}
message {Object}
handle {undefined|Object}
Emitted when the cluster primary receives a message from any
worker.
See child_process event: 'message'.
Event: 'online'
worker {cluster.Worker}
After forking a new worker, the worker should respond with an
online message. When the primary receives an online message it will
emit this event. The difference between 'fork' and 'online' is that
fork is emitted when the primary forks a worker, and 'online' is
emitted when the worker is running.
cluster.on('online', (worker) => {
console.log('Yay, the worker responded after it was forked');
});
Event: 'setup'
settings {Object}
Emitted every time .setupPrimary() is called.
The settings object is the cluster.settings object at the time
.setupPrimary() was called and is advisory only, since multiple calls
to .setupPrimary() can be made in a single tick.
If accuracy is important, use cluster.settings.
cluster.disconnect([callback])
callback {Function} Called when all workers are disconnected and handles are closed.
Calls .disconnect() on each worker in cluster.workers.
When they are disconnected all internal handles will be closed,
allowing the primary process to die gracefully if no other event is
waiting.
The method takes an optional callback argument which will be called
when finished.
This can only be called from the primary process.
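For illustration, a minimal sketch:
import cluster from 'node:cluster';
if (cluster.isPrimary) {
  cluster.fork();
  cluster.fork();
  // Disconnect every worker; the callback runs once all handles are
  // closed, after which the primary can exit gracefully.
  cluster.disconnect(() => {
    console.log('all workers disconnected');
  });
}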
cluster.fork([env])
env {Object} Key/value pairs to add to worker process environment.
Returns: {cluster.Worker}
Spawn a new worker process.
This can only be called from the primary process.
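For illustration, a minimal sketch (the WORKER_ROLE variable is hypothetical):
import cluster from 'node:cluster';
import process from 'node:process';
if (cluster.isPrimary) {
  // Extra key/value pairs are merged into the worker's environment.
  cluster.fork({ WORKER_ROLE: 'cache' });
} else {
  console.log(`worker role: ${process.env.WORKER_ROLE}`);
}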
cluster.isMaster
Stability: 0 - Deprecated
Deprecated alias for cluster.isPrimary.
cluster.isPrimary
{boolean}
True if the process is a primary. This is determined by the
process.env.NODE_UNIQUE_ID. If process.env.NODE_UNIQUE_ID is
undefined, then isPrimary is true.
cluster.isWorker
{boolean}
True if the process is not a primary (it is the negation of
cluster.isPrimary).
cluster.schedulingPolicy
The scheduling policy, either cluster.SCHED_RR for round-robin or
cluster.SCHED_NONE to leave it to the operating system. This is a global
setting and effectively frozen once either the first worker is spawned,
or .setupPrimary() is called, whichever comes first.
SCHED_RR is the default on all operating systems except Windows.
Windows will change to SCHED_RR once libuv is able to effectively
distribute IOCP handles without incurring a large performance hit.
cluster.schedulingPolicy can also be set through the
NODE_CLUSTER_SCHED_POLICY environment variable. Valid values are
'rr' and 'none'.
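For illustration, a minimal sketch that opts out of round-robin before any worker is spawned:
import cluster from 'node:cluster';
// Must be assigned before the first fork() or .setupPrimary() call.
cluster.schedulingPolicy = cluster.SCHED_NONE;
cluster.fork();
The same effect can be had by launching the process with NODE_CLUSTER_SCHED_POLICY=none.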
cluster.settings
{Object}
execArgv {string[]} List of string arguments passed to the
Node.js executable. Default: process.execArgv.
exec {string} File path to worker file. Default:
process.argv[1].
args {string[]} String arguments passed to worker. Default:
process.argv.slice(2).
cwd {string} Current working directory of the worker process.
Default: undefined (inherits from parent process).
serialization {string} Specify the kind of serialization used
for sending messages between processes. Possible values are
'json' and 'advanced'. See Advanced serialization for
child_process for more details. Default: false.
silent {boolean} Whether or not to send output to parent’s
stdio. Default: false.
stdio {Array} Configures the stdio of forked processes.
Because the cluster module relies on IPC to function, this
configuration must contain an 'ipc' entry. When this option
is provided, it overrides silent. See child_process.spawn()’s
stdio.
uid {number} Sets the user identity of the process. (See
setuid(2).)
gid {number} Sets the group identity of the process. (See
setgid(2).)
inspectPort {number|Function} Sets inspector port of
worker. This can be a number, or a function that takes no
arguments and returns a number. By default each worker
gets its own port, incremented from the primary’s
process.debugPort.
windowsHide {boolean} Hide the forked processes' console
window that would normally be created on Windows
systems. Default: false.
After calling .setupPrimary() (or .fork()) this settings object will
contain the settings, including the default values.
This object is not intended to be changed or set manually.
cluster.setupMaster([settings])
Stability: 0 - Deprecated
Deprecated alias for .setupPrimary().
cluster.setupPrimary([settings])
settings {Object} See cluster.settings.
setupPrimary is used to change the default ‘fork’ behavior. Once
called, the settings will be present in cluster.settings.
Any settings changes only affect future calls to .fork() and have no
effect on workers that are already running.
The only attribute of a worker that cannot be set via .setupPrimary()
is the env passed to .fork().
The defaults above apply to the first call only; the defaults for later
calls are the current values at the time cluster.setupPrimary() is
called.
import cluster from 'node:cluster';
cluster.setupPrimary({
exec: 'worker.js',
args: ['--use', 'https'],
silent: true,
});
cluster.fork(); // https worker
cluster.setupPrimary({
exec: 'worker.js',
args: ['--use', 'http'],
});
cluster.fork(); // http worker
const cluster = require('node:cluster');
cluster.setupPrimary({
exec: 'worker.js',
args: ['--use', 'https'],
silent: true,
});
cluster.fork(); // https worker
cluster.setupPrimary({
exec: 'worker.js',
args: ['--use', 'http'],
});
cluster.fork(); // http worker
This can only be called from the primary process.
cluster.worker
{Object}
A reference to the current worker object. Not available in the primary
process.
import cluster from 'node:cluster';
if (cluster.isPrimary) {
console.log('I am primary');
cluster.fork();
cluster.fork();
} else if (cluster.isWorker) {
console.log(`I am worker #${cluster.worker.id}`);
}
const cluster = require('node:cluster');
if (cluster.isPrimary) {
console.log('I am primary');
cluster.fork();
cluster.fork();
} else if (cluster.isWorker) {
console.log(`I am worker #${cluster.worker.id}`);
}
cluster.workers
{Object}
A hash that stores the active worker objects, keyed by id field. This
makes it easy to loop through all the workers. It is only available in
the primary process.
A worker is removed from cluster.workers after the worker has
disconnected and exited. The order between these two events cannot
be determined in advance. However, it is guaranteed that the
removal from the cluster.workers list happens before the last
'disconnect' or 'exit' event is emitted.
import cluster from 'node:cluster';
for (const worker of Object.values(cluster.workers)) {
worker.send('big announcement to all workers');
}
const cluster = require('node:cluster');
for (const worker of Object.values(cluster.workers)) {
worker.send('big announcement to all workers');
}
Crypto
Stability: 2 - Stable
The node:crypto module provides cryptographic functionality that
includes a set of wrappers for OpenSSL’s hash, HMAC, cipher,
decipher, sign, and verify functions.
const { createHmac } = await import('node:crypto');
const secret = 'abcdefg';
const hash = createHmac('sha256', secret)
.update('I love cupcakes')
.digest('hex');
console.log(hash);
// Prints:
// c0fa1bc00531bd78ef38c628449c5102aeabd49b5dc3a2a516ea6ea959d6658e
const { createHmac } = require('node:crypto');
const secret = 'abcdefg';
const hash = createHmac('sha256', secret)
.update('I love cupcakes')
.digest('hex');
console.log(hash);
// Prints:
// c0fa1bc00531bd78ef38c628449c5102aeabd49b5dc3a2a516ea6ea959d6658e
Determining if crypto support is
unavailable
It is possible for Node.js to be built without including support for the
node:crypto module. In such cases, attempting to import from crypto
or calling require('node:crypto') will result in an error being thrown.
When using CommonJS, the error thrown can be caught using
try/catch:
let crypto;
try {
crypto = require('node:crypto');
} catch (err) {
console.error('crypto support is disabled!');
}
When using the lexical ESM import keyword, the error can only be
caught if a handler for process.on('uncaughtException') is registered
before any attempt to load the module is made (using, for instance, a
preload module).
When using ESM, if there is a chance that the code may be run on a
build of Node.js where crypto support is not enabled, consider using
the import() function instead of the lexical import keyword:
let crypto;
try {
crypto = await import('node:crypto');
} catch (err) {
console.error('crypto support is disabled!');
}
Class: Certificate
SPKAC is a Certificate Signing Request mechanism originally
implemented by Netscape and was specified formally as part of
HTML5’s keygen element.
<keygen> is deprecated since HTML 5.2 and new projects should not
use this element anymore.
The node:crypto module provides the Certificate class for working
with SPKAC data. The most common usage is handling output
generated by the HTML5 <keygen> element. Node.js uses OpenSSL’s
SPKAC implementation internally.
Static method:
Certificate.exportChallenge(spkac[,
encoding])
spkac {string|ArrayBuffer|Buffer|TypedArray|DataView}
encoding {string} The encoding of the spkac string.
Returns: {Buffer} The challenge component of the spkac data
structure, which includes a public key and a challenge.
const { Certificate } = await import('node:crypto');
const spkac = getSpkacSomehow();
const challenge = Certificate.exportChallenge(spkac);
console.log(challenge.toString('utf8'));
// Prints: the challenge as a UTF8 string
const { Certificate } = require('node:crypto');
const spkac = getSpkacSomehow();
const challenge = Certificate.exportChallenge(spkac);
console.log(challenge.toString('utf8'));
// Prints: the challenge as a UTF8 string
Static method:
Certificate.exportPublicKey(spkac[,
encoding])
spkac {string|ArrayBuffer|Buffer|TypedArray|DataView}
encoding {string} The encoding of the spkac string.
Returns: {Buffer} The public key component of the spkac data
structure, which includes a public key and a challenge.
const { Certificate } = await import('node:crypto');
const spkac = getSpkacSomehow();
const publicKey = Certificate.exportPublicKey(spkac);
console.log(publicKey);
// Prints: the public key as <Buffer ...>
const { Certificate } = require('node:crypto');
const spkac = getSpkacSomehow();
const publicKey = Certificate.exportPublicKey(spkac);
console.log(publicKey);
// Prints: the public key as <Buffer ...>
Static method:
Certificate.verifySpkac(spkac[, encoding])
spkac {string|ArrayBuffer|Buffer|TypedArray|DataView}
encoding {string} The encoding of the spkac string.
Returns: {boolean} true if the given spkac data structure is valid,
false otherwise.
import { Buffer } from 'node:buffer';
const { Certificate } = await import('node:crypto');
const spkac = getSpkacSomehow();
console.log(Certificate.verifySpkac(Buffer.from(spkac)));
// Prints: true or false
const { Buffer } = require('node:buffer');
const { Certificate } = require('node:crypto');
const spkac = getSpkacSomehow();
console.log(Certificate.verifySpkac(Buffer.from(spkac)));
// Prints: true or false
Legacy API
Stability: 0 - Deprecated
As a legacy interface, it is possible to create new instances of the
crypto.Certificate class as illustrated in the examples below.
new crypto.Certificate()
Instances of the Certificate class can be created using the new
keyword or by calling crypto.Certificate() as a function:
const { Certificate } = await import('node:crypto');
const cert1 = new Certificate();
const cert2 = Certificate();
const { Certificate } = require('node:crypto');
const cert1 = new Certificate();
const cert2 = Certificate();
certificate.exportChallenge(spkac[, encoding])
spkac {string|ArrayBuffer|Buffer|TypedArray|DataView}
encoding {string} The encoding of the spkac string.
Returns: {Buffer} The challenge component of the spkac data
structure, which includes a public key and a challenge.
const { Certificate } = await import('node:crypto');
const cert = Certificate();
const spkac = getSpkacSomehow();
const challenge = cert.exportChallenge(spkac);
console.log(challenge.toString('utf8'));
// Prints: the challenge as a UTF8 string
const { Certificate } = require('node:crypto');
const cert = Certificate();
const spkac = getSpkacSomehow();
const challenge = cert.exportChallenge(spkac);
console.log(challenge.toString('utf8'));
// Prints: the challenge as a UTF8 string
certificate.exportPublicKey(spkac[, encoding])
spkac {string|ArrayBuffer|Buffer|TypedArray|DataView}
encoding {string} The encoding of the spkac string.
Returns: {Buffer} The public key component of the spkac data
structure, which includes a public key and a challenge.
const { Certificate } = await import('node:crypto');
const cert = Certificate();
const spkac = getSpkacSomehow();
const publicKey = cert.exportPublicKey(spkac);
console.log(publicKey);
// Prints: the public key as <Buffer ...>
const { Certificate } = require('node:crypto');
const cert = Certificate();
const spkac = getSpkacSomehow();
const publicKey = cert.exportPublicKey(spkac);
console.log(publicKey);
// Prints: the public key as <Buffer ...>
certificate.verifySpkac(spkac[, encoding])
spkac {string|ArrayBuffer|Buffer|TypedArray|DataView}
encoding {string} The encoding of the spkac string.
Returns: {boolean} true if the given spkac data structure is valid,
false otherwise.
import { Buffer } from 'node:buffer';
const { Certificate } = await import('node:crypto');
const cert = Certificate();
const spkac = getSpkacSomehow();
console.log(cert.verifySpkac(Buffer.from(spkac)));
// Prints: true or false
const { Buffer } = require('node:buffer');
const { Certificate } = require('node:crypto');
const cert = Certificate();
const spkac = getSpkacSomehow();
console.log(cert.verifySpkac(Buffer.from(spkac)));
// Prints: true or false
Class: Cipher
Extends: {stream.Transform}
Instances of the Cipher class are used to encrypt data. The class can
be used in one of two ways:
As a stream that is both readable and writable, where plain
unencrypted data is written to produce encrypted data on the
readable side, or
Using the cipher.update() and cipher.final() methods to produce
the encrypted data.
The crypto.createCipheriv() method is used to create Cipher
instances. Cipher objects are not to be created directly using the new
keyword.
Example: Using Cipher objects as streams:
const {
scrypt,
randomFill,
createCipheriv,
} = await import('node:crypto');
const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
if (err) throw err;
// Then, we'll generate a random initialization vector
randomFill(new Uint8Array(16), (err, iv) => {
if (err) throw err;
// Once we have the key and iv, we can create and use the cipher
const cipher = createCipheriv(algorithm, key, iv);
let encrypted = '';
cipher.setEncoding('hex');
cipher.on('data', (chunk) => encrypted += chunk);
cipher.on('end', () => console.log(encrypted));
cipher.write('some clear text data');
cipher.end();
});
});
const {
scrypt,
randomFill,
createCipheriv,
} = require('node:crypto');
const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
if (err) throw err;
// Then, we'll generate a random initialization vector
randomFill(new Uint8Array(16), (err, iv) => {
if (err) throw err;
// Once we have the key and iv, we can create and use the cipher
const cipher = createCipheriv(algorithm, key, iv);
let encrypted = '';
cipher.setEncoding('hex');
cipher.on('data', (chunk) => encrypted += chunk);
cipher.on('end', () => console.log(encrypted));
cipher.write('some clear text data');
cipher.end();
});
});
Example: Using Cipher and piped streams:
import {
createReadStream,
createWriteStream,
} from 'node:fs';
import {
pipeline,
} from 'node:stream';
const {
scrypt,
randomFill,
createCipheriv,
} = await import('node:crypto');
const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
if (err) throw err;
// Then, we'll generate a random initialization vector
randomFill(new Uint8Array(16), (err, iv) => {
if (err) throw err;
const cipher = createCipheriv(algorithm, key, iv);
const input = createReadStream('test.js');
const output = createWriteStream('test.enc');
pipeline(input, cipher, output, (err) => {
if (err) throw err;
});
});
});
const {
createReadStream,
createWriteStream,
} = require('node:fs');
const {
pipeline,
} = require('node:stream');
const {
scrypt,
randomFill,
createCipheriv,
} = require('node:crypto');
const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
if (err) throw err;
// Then, we'll generate a random initialization vector
randomFill(new Uint8Array(16), (err, iv) => {
if (err) throw err;
const cipher = createCipheriv(algorithm, key, iv);
const input = createReadStream('test.js');
const output = createWriteStream('test.enc');
pipeline(input, cipher, output, (err) => {
if (err) throw err;
});
});
});
Example: Using the cipher.update() and cipher.final() methods:
const {
scrypt,
randomFill,
createCipheriv,
} = await import('node:crypto');
const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
if (err) throw err;
// Then, we'll generate a random initialization vector
randomFill(new Uint8Array(16), (err, iv) => {
if (err) throw err;
const cipher = createCipheriv(algorithm, key, iv);
let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
encrypted += cipher.final('hex');
console.log(encrypted);
});
});
const {
scrypt,
randomFill,
createCipheriv,
} = require('node:crypto');
const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
if (err) throw err;
// Then, we'll generate a random initialization vector
randomFill(new Uint8Array(16), (err, iv) => {
if (err) throw err;
const cipher = createCipheriv(algorithm, key, iv);
let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
encrypted += cipher.final('hex');
console.log(encrypted);
});
});
cipher.final([outputEncoding])
outputEncoding {string} The encoding of the return value.
Returns: {Buffer | string} Any remaining enciphered contents. If
outputEncoding is specified, a string is returned. If an
outputEncoding is not provided, a Buffer is returned.
Once the cipher.final() method has been called, the Cipher object
can no longer be used to encrypt data. Attempts to call cipher.final()
more than once will result in an error being thrown.
cipher.getAuthTag()
Returns: {Buffer} When using an authenticated encryption mode
(GCM, CCM, OCB, and chacha20-poly1305 are currently supported), the
cipher.getAuthTag() method returns a Buffer containing the
authentication tag that has been computed from the given data.
The cipher.getAuthTag() method should only be called after
encryption has been completed using the cipher.final() method.
If the authTagLength option was set during the cipher instance’s
creation, this function will return exactly authTagLength bytes.
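For illustration, a minimal sketch using AES-256-GCM (the key and IV are random here purely for demonstration):
import { Buffer } from 'node:buffer';
const { createCipheriv, randomBytes } = await import('node:crypto');
const key = randomBytes(32);
const iv = randomBytes(12);
const cipher = createCipheriv('aes-256-gcm', key, iv);
const encrypted = Buffer.concat([cipher.update('some clear text data'), cipher.final()]);
// The tag must be stored or transmitted alongside the ciphertext.
const tag = cipher.getAuthTag();
console.log(tag.length); // 16 bytes by default for GCM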
cipher.setAAD(buffer[, options])
buffer {string|ArrayBuffer|Buffer|TypedArray|DataView}
options {Object} stream.transform options
plaintextLength {number}
encoding {string} The string encoding to use when buffer is a
string.
Returns: {Cipher} for method chaining.
When using an authenticated encryption mode (GCM, CCM, OCB, and
chacha20-poly1305 are currently supported), the cipher.setAAD()
method sets the value used for the additional authenticated data
(AAD) input parameter.
The plaintextLength option is optional for GCM and OCB. When using
CCM, the plaintextLength option must be specified and its value must
match the length of the plaintext in bytes. See CCM mode.
The cipher.setAAD() method must be called before cipher.update().
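For illustration, a minimal CCM sketch where plaintextLength is mandatory (the key, nonce, and AAD values are arbitrary):
import { Buffer } from 'node:buffer';
const { createCipheriv, randomBytes } = await import('node:crypto');
const key = randomBytes(32);
const nonce = randomBytes(12);
const plaintext = 'some clear text data';
// CCM requires authTagLength at creation time and plaintextLength
// in setAAD(), which must be called before update().
const cipher = createCipheriv('aes-256-ccm', key, nonce, { authTagLength: 16 });
cipher.setAAD(Buffer.from('header'), {
  plaintextLength: Buffer.byteLength(plaintext),
});
const ciphertext = cipher.update(plaintext, 'utf8');
cipher.final();
const tag = cipher.getAuthTag();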
cipher.setAutoPadding([autoPadding])
autoPadding {boolean} Default: true
Returns: {Cipher} for method chaining.
When using block encryption algorithms, the Cipher class will
automatically add padding to the input data to the appropriate block
size. To disable the default padding call cipher.setAutoPadding(false).
When autoPadding is false, the length of the entire input data must be
a multiple of the cipher’s block size or cipher.final() will throw an
error. Disabling automatic padding is useful for non-standard
padding, for instance using 0x0 instead of PKCS padding.
The cipher.setAutoPadding() method must be called before
cipher.final().
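For illustration, a minimal sketch with padding disabled (random key and IV, block-aligned input):
import { Buffer } from 'node:buffer';
const { createCipheriv, randomBytes } = await import('node:crypto');
const key = randomBytes(24);
const iv = randomBytes(16);
const cipher = createCipheriv('aes-192-cbc', key, iv);
cipher.setAutoPadding(false);
// Input must now be a multiple of the 16-byte block size,
// otherwise cipher.final() throws.
const block = Buffer.alloc(16, 'a');
const encrypted = Buffer.concat([cipher.update(block), cipher.final()]);
console.log(encrypted.length); // 16 (no padding block appended)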
cipher.update(data[, inputEncoding][,
outputEncoding])
data {string|Buffer|TypedArray|DataView}
inputEncoding {string} The encoding of the data.
outputEncoding {string} The encoding of the return value.
Returns: {Buffer | string}
Updates the cipher with data. If the inputEncoding argument is given,
the data argument is a string using the specified encoding. If the
inputEncoding argument is not given, data must be a Buffer,
TypedArray,or DataView. If data is a Buffer, TypedArray, or DataView,
then inputEncoding is ignored.
The outputEncoding specifies the output format of the enciphered
data. If the outputEncoding is specified, a string using the specified
encoding is returned. If no outputEncoding is provided, a Buffer is
returned.
The cipher.update() method can be called multiple times with new
data until cipher.final() is called. Calling cipher.update() after
cipher.final() will result in an error being thrown.
Class: Decipher
Extends: {stream.Transform}
Instances of the Decipher class are used to decrypt data. The class can
be used in one of two ways:
As a stream that is both readable and writable, where plain
encrypted data is written to produce unencrypted data on the
readable side, or
Using the decipher.update() and decipher.final() methods to
produce the unencrypted data.
The crypto.createDecipheriv() method is used to create Decipher
instances. Decipher objects are not to be created directly using the new
keyword.
Example: Using Decipher objects as streams:
import { Buffer } from 'node:buffer';
const {
scryptSync,
createDecipheriv,
} = await import('node:crypto');
const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Key length is dependent on the algorithm. In this case for aes192, it is
// 24 bytes (192 bits).
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.
const decipher = createDecipheriv(algorithm, key, iv);
let decrypted = '';
decipher.on('readable', () => {
let chunk;
while (null !== (chunk = decipher.read())) {
decrypted += chunk.toString('utf8');
}
});
decipher.on('end', () => {
console.log(decrypted);
// Prints: some clear text data
});
// Encrypted with same algorithm, key and iv.
const encrypted =
'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
decipher.write(encrypted, 'hex');
decipher.end();
const {
scryptSync,
createDecipheriv,
} = require('node:crypto');
const { Buffer } = require('node:buffer');
const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Key length is dependent on the algorithm. In this case for aes192, it is
// 24 bytes (192 bits).
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.
const decipher = createDecipheriv(algorithm, key, iv);
let decrypted = '';
decipher.on('readable', () => {
let chunk;
while (null !== (chunk = decipher.read())) {
decrypted += chunk.toString('utf8');
}
});
decipher.on('end', () => {
console.log(decrypted);
// Prints: some clear text data
});
// Encrypted with same algorithm, key and iv.
const encrypted =
'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
decipher.write(encrypted, 'hex');
decipher.end();
Example: Using Decipher and piped streams:
import {
createReadStream,
createWriteStream,
} from 'node:fs';
import { Buffer } from 'node:buffer';
const {
scryptSync,
createDecipheriv,
} = await import('node:crypto');
const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.
const decipher = createDecipheriv(algorithm, key, iv);
const input = createReadStream('test.enc');
const output = createWriteStream('test.js');
input.pipe(decipher).pipe(output);
const {
createReadStream,
createWriteStream,
} = require('node:fs');
const {
scryptSync,
createDecipheriv,
} = require('node:crypto');
const { Buffer } = require('node:buffer');
const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.
const decipher = createDecipheriv(algorithm, key, iv);
const input = createReadStream('test.enc');
const output = createWriteStream('test.js');
input.pipe(decipher).pipe(output);
Example: Using the decipher.update() and decipher.final() methods:
import { Buffer } from 'node:buffer';
const {
scryptSync,
createDecipheriv,
} = await import('node:crypto');
const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.
const decipher = createDecipheriv(algorithm, key, iv);
// Encrypted using same algorithm, key and iv.
const encrypted =
'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
let decrypted = decipher.update(encrypted, 'hex', 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted);
// Prints: some clear text data
const {
scryptSync,
createDecipheriv,
} = require('node:crypto');
const { Buffer } = require('node:buffer');
const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.
const decipher = createDecipheriv(algorithm, key, iv);
// Encrypted using same algorithm, key and iv.
const encrypted =
'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
let decrypted = decipher.update(encrypted, 'hex', 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted);
// Prints: some clear text data
decipher.final([outputEncoding])
outputEncoding {string} The encoding of the return value.
Returns: {Buffer | string} Any remaining deciphered contents. If
outputEncoding is specified, a string is returned. If an
outputEncoding is not provided, a Buffer is returned.
Once the decipher.final() method has been called, the Decipher
object can no longer be used to decrypt data. Attempts to call
decipher.final() more than once will result in an error being thrown.
decipher.setAAD(buffer[, options])
buffer {string|ArrayBuffer|Buffer|TypedArray|DataView}
options {Object} stream.transform options
plaintextLength {number}
encoding {string} String encoding to use when buffer is a
string.
Returns: {Decipher} for method chaining.
When using an authenticated encryption mode (GCM, CCM, OCB, and
chacha20-poly1305 are currently supported), the decipher.setAAD()
method sets the value used for the additional authenticated data
(AAD) input parameter.
The options argument is optional for GCM. When using CCM, the
plaintextLength option must be specified and its value must match
the length of the ciphertext in bytes. See CCM mode.
The decipher.setAAD() method must be called before
decipher.update().
When passing a string as the buffer, please consider caveats when
using strings as inputs to cryptographic APIs.
decipher.setAuthTag(buffer[, encoding])
buffer {string|Buffer|ArrayBuffer|TypedArray|DataView}
encoding {string} String encoding to use when buffer is a string.
Returns: {Decipher} for method chaining.
When using an authenticated encryption mode (GCM, CCM, OCB, and
chacha20-poly1305 are currently supported), the decipher.setAuthTag()
method is used to pass in the received authentication tag. If no tag is
provided, or if the cipher text has been tampered with,
decipher.final() will throw, indicating that the cipher text should be
discarded due to failed authentication. If the tag length is invalid
according to NIST SP 800-38D or does not match the value of the
authTagLength option, decipher.setAuthTag() will throw an error.
The decipher.setAuthTag() method must be called before
decipher.update() for CCM mode or before decipher.final() for GCM and
OCB modes and chacha20-poly1305. decipher.setAuthTag() can only be
called once.
When passing a string as the authentication tag, please consider
caveats when using strings as inputs to cryptographic APIs.
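For illustration, a minimal GCM round trip in which the tag produced by the cipher is handed to the decipher before final():
import { Buffer } from 'node:buffer';
const { createCipheriv, createDecipheriv, randomBytes } = await import('node:crypto');
const key = randomBytes(32);
const iv = randomBytes(12);
const cipher = createCipheriv('aes-256-gcm', key, iv);
const encrypted = Buffer.concat([cipher.update('some clear text data'), cipher.final()]);
const tag = cipher.getAuthTag();
const decipher = createDecipheriv('aes-256-gcm', key, iv);
decipher.setAuthTag(tag); // for GCM, any time before decipher.final()
const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]);
console.log(decrypted.toString('utf8')); // Prints: some clear text data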
decipher.setAutoPadding([autoPadding])
autoPadding {boolean} Default: true
Returns: {Decipher} for method chaining.
When data has been encrypted without standard block padding,
calling decipher.setAutoPadding(false) will disable automatic padding
to prevent decipher.final() from checking for and removing padding.
Turning auto padding off will only work if the input data’s length is a
multiple of the cipher's block size.
The decipher.setAutoPadding() method must be called before
decipher.final().
decipher.update(data[, inputEncoding][,
outputEncoding])
data {string|Buffer|TypedArray|DataView}
inputEncoding {string} The encoding of the data string.
outputEncoding {string} The encoding of the return value.
Returns: {Buffer | string}
Updates the decipher with data. If the inputEncoding argument is
given, the data argument is a string using the specified encoding. If
the inputEncoding argument is not given, data must be a Buffer. If data
is a Buffer then inputEncoding is ignored.
The outputEncoding specifies the output format of the deciphered
data. If the outputEncoding is specified, a string using the specified
encoding is returned. If no outputEncoding is provided, a Buffer is
returned.
The decipher.update() method can be called multiple times with new
data until decipher.final() is called. Calling decipher.update() after
decipher.final() will result in an error being thrown.
Class: DiffieHellman
The DiffieHellman class is a utility for creating Diffie-Hellman key
exchanges.
Instances of the DiffieHellman class can be created using the
crypto.createDiffieHellman() function.
import assert from 'node:assert';
const {
createDiffieHellman,
} = await import('node:crypto');
// Generate Alice's keys...
const alice = createDiffieHellman(2048);
const aliceKey = alice.generateKeys();
// Generate Bob's keys...
const bob = createDiffieHellman(alice.getPrime(), alice.getGenerator());
const bobKey = bob.generateKeys();
// Exchange and generate the secret...
const aliceSecret = alice.computeSecret(bobKey);
const bobSecret = bob.computeSecret(aliceKey);
// OK
assert.strictEqual(aliceSecret.toString('hex'), bobSecret.toString('hex'));
const assert = require('node:assert');
const {
createDiffieHellman,
} = require('node:crypto');
// Generate Alice's keys...
const alice = createDiffieHellman(2048);
const aliceKey = alice.generateKeys();
// Generate Bob's keys...
const bob = createDiffieHellman(alice.getPrime(), alice.getGenerator());
const bobKey = bob.generateKeys();
// Exchange and generate the secret...
const aliceSecret = alice.computeSecret(bobKey);
const bobSecret = bob.computeSecret(aliceKey);
// OK
assert.strictEqual(aliceSecret.toString('hex'), bobSecret.toString('hex'));
diffieHellman.computeSecret(otherPublicKey[
, inputEncoding][, outputEncoding])
otherPublicKey
{string|ArrayBuffer|Buffer|TypedArray|DataView}
inputEncoding {string} The encoding of an otherPublicKey string.
outputEncoding {string} The encoding of the return value.
Returns: {Buffer | string}
Computes the shared secret using otherPublicKey as the other party’s
public key and returns the computed shared secret. The supplied key
is interpreted using the specified inputEncoding, and secret is encoded
using specified outputEncoding. If the inputEncoding is not provided,
otherPublicKey is expected to be a Buffer, TypedArray, or DataView.
If outputEncoding is given a string is returned; otherwise, a Buffer is
returned.
diffieHellman.generateKeys([encoding])
encoding {string} The encoding of the return value.
Returns: {Buffer | string}
Generates private and public Diffie-Hellman key values unless they
have been generated or computed already, and returns the public key
in the specified encoding. This key should be transferred to the other
party. If encoding is provided a string is returned; otherwise a Buffer
is returned.
This function is a thin wrapper around DH_generate_key(). In
particular, once a private key has been generated or set, calling this
function only updates the public key but does not generate a new
private key.
diffieHellman.getGenerator([encoding])
encoding {string} The encoding of the return value.
Returns: {Buffer | string}
Returns the Diffie-Hellman generator in the specified encoding. If
encoding is provided a string is returned; otherwise a Buffer is
returned.
diffieHellman.getPrime([encoding])
encoding {string} The encoding of the return value.
Returns: {Buffer | string}
Returns the Diffie-Hellman prime in the specified encoding. If
encoding is provided a string is returned; otherwise a Buffer is
returned.
diffieHellman.getPrivateKey([encoding])
encoding {string} The encoding of the return value.
Returns: {Buffer | string}
Returns the Diffie-Hellman private key in the specified encoding. If
encoding is provided a string is returned; otherwise a Buffer is
returned.
diffieHellman.getPublicKey([encoding])
encoding {string} The encoding of the return value.
Returns: {Buffer | string}
Returns the Diffie-Hellman public key in the specified encoding. If
encoding is provided a string is returned; otherwise a Buffer is
returned.
diffieHellman.setPrivateKey(privateKey[,
encoding])
privateKey {string|ArrayBuffer|Buffer|TypedArray|DataView}
encoding {string} The encoding of the privateKey string.
Sets the Diffie-Hellman private key. If the encoding argument is
provided, privateKey is expected to be a string. If no encoding is
provided, privateKey is expected to be a Buffer, TypedArray, or
DataView.
This function does not automatically compute the associated public
key. Either diffieHellman.setPublicKey() or
diffieHellman.generateKeys() can be used to manually provide the
public key or to automatically derive it.
diffieHellman.setPublicKey(publicKey[,
encoding])
publicKey {string|ArrayBuffer|Buffer|TypedArray|DataView}
encoding {string} The encoding of the publicKey string.
Sets the Diffie-Hellman public key. If the encoding argument is
provided, publicKey is expected to be a string. If no encoding is
provided, publicKey is expected to be a Buffer, TypedArray, or DataView.
diffieHellman.verifyError
A bit field containing any warnings and/or errors resulting from a
check performed during initialization of the DiffieHellman object.
The following values are valid for this property (as defined in
node:constants module):
DH_CHECK_P_NOT_SAFE_PRIME
DH_CHECK_P_NOT_PRIME
DH_UNABLE_TO_CHECK_GENERATOR
DH_NOT_SUITABLE_GENERATOR
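For illustration, a minimal sketch checking the bit field (a small prime size is used only to keep generation fast):
const { createDiffieHellman, constants } = await import('node:crypto');
const dh = createDiffieHellman(512);
if (dh.verifyError & constants.DH_CHECK_P_NOT_SAFE_PRIME) {
  console.log('the prime is not a safe prime');
}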
Class: DiffieHellmanGroup
The DiffieHellmanGroup class takes a well-known modp group as its
argument. It works the same as DiffieHellman, except that it does not
allow changing its keys after creation. In other words, it does not
implement setPublicKey() or setPrivateKey() methods.
const { createDiffieHellmanGroup } = await import('node:crypto');
const dh = createDiffieHellmanGroup('modp16');
const { createDiffieHellmanGroup } = require('node:crypto');
const dh = createDiffieHellmanGroup('modp16');
The following groups are supported:
'modp14' (2048 bits, RFC 3526 Section 3)
'modp15' (3072 bits, RFC 3526 Section 4)
'modp16' (4096 bits, RFC 3526 Section 5)
'modp17' (6144 bits, RFC 3526 Section 6)
'modp18' (8192 bits, RFC 3526 Section 7)
The following groups are still supported but deprecated (see
Caveats):
'modp1' (768 bits, RFC 2409 Section 6.1)
'modp2' (1024 bits, RFC 2409 Section 6.2)
'modp5' (1536 bits, RFC 3526 Section 2)
These deprecated groups might be removed in future versions of
Node.js.
Class: ECDH
The ECDH class is a utility for creating Elliptic Curve Diffie-Hellman
(ECDH) key exchanges.
Instances of the ECDH class can be created using the
crypto.createECDH() function.
import assert from 'node:assert';
const {
createECDH,
} = await import('node:crypto');
// Generate Alice's keys...
const alice = createECDH('secp521r1');
const aliceKey = alice.generateKeys();
// Generate Bob's keys...
const bob = createECDH('secp521r1');
const bobKey = bob.generateKeys();
// Exchange and generate the secret...
const aliceSecret = alice.computeSecret(bobKey);
const bobSecret = bob.computeSecret(aliceKey);
assert.strictEqual(aliceSecret.toString('hex'), bobSecret.toString('hex'));
// OK
const assert = require('node:assert');
const {
createECDH,
} = require('node:crypto');
// Generate Alice's keys...
const alice = createECDH('secp521r1');
const aliceKey = alice.generateKeys();
// Generate Bob's keys...
const bob = createECDH('secp521r1');
const bobKey = bob.generateKeys();
// Exchange and generate the secret...
const aliceSecret = alice.computeSecret(bobKey);
const bobSecret = bob.computeSecret(aliceKey);
assert.strictEqual(aliceSecret.toString('hex'), bobSecret.toString('hex'));
// OK
Static method: ECDH.convertKey(key,
curve[, inputEncoding[, outputEncoding[,
format]]])
key {string|ArrayBuffer|Buffer|TypedArray|DataView}
curve {string}
inputEncoding {string} The encoding of the key string.
outputEncoding {string} The encoding of the return value.
format {string} Default: 'uncompressed'
Returns: {Buffer | string}
Converts the EC Diffie-Hellman public key specified by key and curve
to the format specified by format. The format argument specifies point
encoding and can be 'compressed', 'uncompressed' or 'hybrid'. The
supplied key is interpreted using the specified inputEncoding, and the
returned key is encoded using the specified outputEncoding.
Use crypto.getCurves() to obtain a list of available curve names. On
recent OpenSSL releases, openssl ecparam -list_curves will also
display the name and description of each available elliptic curve.
If format is not specified the point will be returned in 'uncompressed'
format.
If the inputEncoding is not provided, key is expected to be a Buffer,
TypedArray, or DataView.
Example (uncompressing a key):
const {
createECDH,
ECDH,
} = await import('node:crypto');
const ecdh = createECDH('secp256k1');
ecdh.generateKeys();
const compressedKey = ecdh.getPublicKey('hex', 'compressed');
const uncompressedKey = ECDH.convertKey(compressedKey,
'secp256k1',
'hex',
'hex',
'uncompressed');
// The converted key and the uncompressed public key should be the same.
console.log(uncompressedKey === ecdh.getPublicKey('hex'));
const {
createECDH,
ECDH,
} = require('node:crypto');
const ecdh = createECDH('secp256k1');
ecdh.generateKeys();
const compressedKey = ecdh.getPublicKey('hex', 'compressed');
const uncompressedKey = ECDH.convertKey(compressedKey,
'secp256k1',
'hex',
'hex',
'uncompressed');
// The converted key and the uncompressed public key should be the same.
console.log(uncompressedKey === ecdh.getPublicKey('hex'));
ecdh.computeSecret(otherPublicKey[,
inputEncoding][, outputEncoding])
otherPublicKey
{string|ArrayBuffer|Buffer|TypedArray|DataView}
inputEncoding {string} The encoding of the otherPublicKey string.
outputEncoding {string} The encoding of the return value.
Returns: {Buffer | string}
Computes the shared secret using otherPublicKey as the other party’s
public key and returns the computed shared secret. The supplied key
is interpreted using specified inputEncoding, and the returned secret
is encoded using the specified outputEncoding. If the inputEncoding is
not provided, otherPublicKey is expected to be a Buffer, TypedArray, or
DataView.
If outputEncoding is given a string will be returned; otherwise a Buffer
is returned.
ecdh.computeSecret will throw an ERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY
error when otherPublicKey lies outside of the elliptic curve. Since
otherPublicKey is usually supplied from a remote user over an
insecure network, be sure to handle this exception accordingly.
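For illustration, a minimal sketch (the deriveSecret helper and remotePublicKey input are hypothetical) that rejects an invalid remote key instead of crashing:
const { createECDH } = await import('node:crypto');
const ecdh = createECDH('prime256v1');
ecdh.generateKeys();
function deriveSecret(remotePublicKey) {
  try {
    return ecdh.computeSecret(remotePublicKey);
  } catch (err) {
    if (err.code === 'ERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY') {
      return null; // reject the handshake
    }
    throw err;
  }
}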
ecdh.generateKeys([encoding[, format]])
encoding {string} The encoding of the return value.
format {string} Default: 'uncompressed'
Returns: {Buffer | string}
Generates private and public EC Diffie-Hellman key values, and
returns the public key in the specified format and encoding. This key
should be transferred to the other party.
The format argument specifies point encoding and can be
'compressed' or 'uncompressed'. If format is not specified, the point
will be returned in 'uncompressed' format.
If encoding is provided a string is returned; otherwise a Buffer is
returned.
ecdh.getPrivateKey([encoding])
encoding {string} The encoding of the return value.
Returns: {Buffer | string} The EC Diffie-Hellman private key in the specified
encoding.
If encoding is specified, a string is returned; otherwise a Buffer is
returned.
ecdh.getPublicKey([encoding][, format])
encoding {string} The encoding of the return value.
format {string} Default: 'uncompressed'
Returns: {Buffer | string} The EC Diffie-Hellman public key in
the specified encoding and format.
The format argument specifies point encoding and can be
'compressed' or 'uncompressed'. If format is not specified the point will
be returned in 'uncompressed' format.
If encoding is specified, a string is returned; otherwise a Buffer is
returned.
ecdh.setPrivateKey(privateKey[, encoding])
privateKey {string|ArrayBuffer|Buffer|TypedArray|DataView}
encoding {string} The encoding of the privateKey string.
Sets the EC Diffie-Hellman private key. If encoding is provided,
privateKey is expected to be a string; otherwise privateKey is expected
to be a Buffer, TypedArray, or DataView.
If privateKey is not valid for the curve specified when the ECDH object
was created, an error is thrown. Upon setting the private key, the
associated public point (key) is also generated and set in the ECDH
object.
ecdh.setPublicKey(publicKey[, encoding])
Stability: 0 - Deprecated
publicKey {string|ArrayBuffer|Buffer|TypedArray|DataView}
encoding {string} The encoding of the publicKey string.
Sets the EC Diffie-Hellman public key. If encoding is provided
publicKey is expected to be a string; otherwise a Buffer, TypedArray, or
DataView is expected.
There is not normally a reason to call this method because ECDH only
requires a private key and the other party’s public key to compute the
shared secret. Typically either ecdh.generateKeys() or
ecdh.setPrivateKey() will be called. The ecdh.setPrivateKey() method
attempts to generate the public point/key associated with the private
key being set.
Example (obtaining a shared secret):
const {
createECDH,
createHash,
} = await import('node:crypto');
const alice = createECDH('secp256k1');
const bob = createECDH('secp256k1');
// This is a shortcut way of specifying one of Alice's previous private
// keys. It would be unwise to use such a predictable private key in a real
// application.
alice.setPrivateKey(
createHash('sha256').update('alice', 'utf8').digest(),
);
// Bob uses a newly generated cryptographically strong
// pseudorandom key pair
bob.generateKeys();
const aliceSecret = alice.computeSecret(bob.getPublicKey(), null, 'hex');
const bobSecret = bob.computeSecret(alice.getPublicKey(), null, 'hex');
// aliceSecret and bobSecret should be the same shared secret value
console.log(aliceSecret === bobSecret);
const {
createECDH,
createHash,
} = require('node:crypto');
const alice = createECDH('secp256k1');
const bob = createECDH('secp256k1');
// This is a shortcut way of specifying one of Alice's previous private
// keys. It would be unwise to use such a predictable private key in a real
// application.
alice.setPrivateKey(
createHash('sha256').update('alice', 'utf8').digest(),
);
// Bob uses a newly generated cryptographically strong
// pseudorandom key pair
bob.generateKeys();
const aliceSecret = alice.computeSecret(bob.getPublicKey(), null, 'hex');
const bobSecret = bob.computeSecret(alice.getPublicKey(), null, 'hex');
// aliceSecret and bobSecret should be the same shared secret value
console.log(aliceSecret === bobSecret);
Class: Hash
Extends: {stream.Transform}
The Hash class is a utility for creating hash digests of data. It can be
used in one of two ways:
As a stream that is both readable and writable, where data is
written to produce a computed hash digest on the readable side,
or
Using the hash.update() and hash.digest() methods to produce
the computed hash.
The crypto.createHash() method is used to create Hash instances. Hash
objects are not to be created directly using the new keyword.
Example: Using Hash objects as streams:
const {
createHash,
} = await import('node:crypto');
const hash = createHash('sha256');
hash.on('readable', () => {
// Only one element is going to be produced by the
// hash stream.
const data = hash.read();
if (data) {
console.log(data.toString('hex'));
// Prints:
// 6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50
}
});
hash.write('some data to hash');
hash.end();
const {
createHash,
} = require('node:crypto');
const hash = createHash('sha256');
hash.on('readable', () => {
// Only one element is going to be produced by the
// hash stream.
const data = hash.read();
if (data) {
console.log(data.toString('hex'));
// Prints:
// 6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50
}
});
hash.write('some data to hash');
hash.end();
Example: Using Hash and piped streams:
import { createReadStream } from 'node:fs';
import { stdout } from 'node:process';
const { createHash } = await import('node:crypto');
const hash = createHash('sha256');
const input = createReadStream('test.js');
input.pipe(hash).setEncoding('hex').pipe(stdout);
const { createReadStream } = require('node:fs');
const { createHash } = require('node:crypto');
const { stdout } = require('node:process');
const hash = createHash('sha256');
const input = createReadStream('test.js');
input.pipe(hash).setEncoding('hex').pipe(stdout);
Example: Using the hash.update() and hash.digest() methods:
const {
createHash,
} = await import('node:crypto');
const hash = createHash('sha256');
hash.update('some data to hash');
console.log(hash.digest('hex'));
// Prints:
// 6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50
const {
createHash,
} = require('node:crypto');
const hash = createHash('sha256');
hash.update('some data to hash');
console.log(hash.digest('hex'));
// Prints:
// 6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50
hash.copy([options])
options {Object} stream.transform options
Returns: {Hash}
Creates a new Hash object that contains a deep copy of the internal
state of the current Hash object.
The optional options argument controls stream behavior. For XOF
hash functions such as 'shake256', the outputLength option can be
used to specify the desired output length in bytes.
An error is thrown when an attempt is made to copy the Hash object
after its hash.digest() method has been called.
// Calculate a rolling hash.
const {
createHash,
} = await import('node:crypto');
const hash = createHash('sha256');
hash.update('one');
console.log(hash.copy().digest('hex'));
hash.update('two');
console.log(hash.copy().digest('hex'));
hash.update('three');
console.log(hash.copy().digest('hex'));
// Etc.
// Calculate a rolling hash.
const {
createHash,
} = require('node:crypto');
const hash = createHash('sha256');
hash.update('one');
console.log(hash.copy().digest('hex'));
hash.update('two');
console.log(hash.copy().digest('hex'));
hash.update('three');
console.log(hash.copy().digest('hex'));
// Etc.
hash.digest([encoding])
encoding {string} The encoding of the return value.
Returns: {Buffer | string}
Calculates the digest of all of the data passed to be hashed (using the
hash.update() method). If encoding is provided a string will be
returned; otherwise a Buffer is returned.
The Hash object cannot be used again after the hash.digest() method
has been called. Multiple calls will cause an error to be thrown.
hash.update(data[, inputEncoding])
data {string|Buffer|TypedArray|DataView}
inputEncoding {string} The encoding of the data string.
Updates the hash content with the given data, the encoding of which
is given in inputEncoding. If inputEncoding is not provided, and the data
is a string, an encoding of 'utf8' is enforced. If data is a Buffer,
TypedArray, or DataView, then inputEncoding is ignored.
This can be called many times with new data as it is streamed.
Class: Hmac
Extends: {stream.Transform}
The Hmac class is a utility for creating cryptographic HMAC digests. It
can be used in one of two ways:
As a stream that is both readable and writable, where data is
written to produce a computed HMAC digest on the readable
side, or
Using the hmac.update() and hmac.digest() methods to produce
the computed HMAC digest.
The crypto.createHmac() method is used to create Hmac instances. Hmac
objects are not to be created directly using the new keyword.
Example: Using Hmac objects as streams:
const {
createHmac,
} = await import('node:crypto');
const hmac = createHmac('sha256', 'a secret');
hmac.on('readable', () => {
// Only one element is going to be produced by the
// hash stream.
const data = hmac.read();
if (data) {
console.log(data.toString('hex'));
// Prints:
// 7fd04df92f636fd450bc841c9418e5825c17f33ad9c87c518115a45971f7f779
}
});
hmac.write('some data to hash');
hmac.end();
const {
createHmac,
} = require('node:crypto');
const hmac = createHmac('sha256', 'a secret');
hmac.on('readable', () => {
// Only one element is going to be produced by the
// hash stream.
const data = hmac.read();
if (data) {
console.log(data.toString('hex'));
// Prints:
// 7fd04df92f636fd450bc841c9418e5825c17f33ad9c87c518115a45971f7f779
}
});
hmac.write('some data to hash');
hmac.end();
Example: Using Hmac and piped streams:
import { createReadStream } from 'node:fs';
import { stdout } from 'node:process';
const {
createHmac,
} = await import('node:crypto');
const hmac = createHmac('sha256', 'a secret');
const input = createReadStream('test.js');
input.pipe(hmac).pipe(stdout);
const {
createReadStream,
} = require('node:fs');
const {
createHmac,
} = require('node:crypto');
const { stdout } = require('node:process');
const hmac = createHmac('sha256', 'a secret');
const input = createReadStream('test.js');
input.pipe(hmac).pipe(stdout);
Example: Using the hmac.update() and hmac.digest() methods:
const {
createHmac,
} = await import('node:crypto');
const hmac = createHmac('sha256', 'a secret');
hmac.update('some data to hash');
console.log(hmac.digest('hex'));
// Prints:
// 7fd04df92f636fd450bc841c9418e5825c17f33ad9c87c518115a45971f7f779
const {
createHmac,
} = require('node:crypto');
const hmac = createHmac('sha256', 'a secret');
hmac.update('some data to hash');
console.log(hmac.digest('hex'));
// Prints:
// 7fd04df92f636fd450bc841c9418e5825c17f33ad9c87c518115a45971f7f779
hmac.digest([encoding])
encoding {string} The encoding of the return value.
Returns: {Buffer | string}
Calculates the HMAC digest of all of the data passed using
hmac.update(). If encoding is provided a string is returned; otherwise a
Buffer is returned.
The Hmac object cannot be used again after hmac.digest() has been
called. Multiple calls to hmac.digest() will result in an error being
thrown.
hmac.update(data[, inputEncoding])
data {string|Buffer|TypedArray|DataView}
inputEncoding {string} The encoding of the data string.
Updates the Hmac content with the given data, the encoding of which
is given in inputEncoding. If inputEncoding is not provided, and the data
is a string, an encoding of 'utf8' is enforced. If data is a Buffer,
TypedArray, or DataView, then inputEncoding is ignored.
This can be called many times with new data as it is streamed.
Class: KeyObject
Node.js uses a KeyObject class to represent a symmetric or
asymmetric key, and each kind of key exposes different functions.
The crypto.createSecretKey(), crypto.createPublicKey() and
crypto.createPrivateKey() methods are used to create KeyObject
instances. KeyObject objects are not to be created directly using the
new keyword.
Most applications should consider using the new KeyObject API
instead of passing keys as strings or Buffers due to improved security
features.
KeyObject instances can be passed to other threads via postMessage().
The receiver obtains a cloned KeyObject, and the KeyObject does not
need to be listed in the transferList argument.
Static method: KeyObject.from(key)
key {CryptoKey}
Returns: {KeyObject}
Example: Converting a CryptoKey instance to a KeyObject:
const { KeyObject } = await import('node:crypto');
const { subtle } = globalThis.crypto;
const key = await subtle.generateKey({
name: 'HMAC',
hash: 'SHA-256',
length: 256,
}, true, ['sign', 'verify']);
const keyObject = KeyObject.from(key);
console.log(keyObject.symmetricKeySize);
// Prints: 32 (symmetric key size in bytes)
const { KeyObject } = require('node:crypto');
const { subtle } = globalThis.crypto;
(async function() {
const key = await subtle.generateKey({
name: 'HMAC',
hash: 'SHA-256',
length: 256,
}, true, ['sign', 'verify']);
const keyObject = KeyObject.from(key);
console.log(keyObject.symmetricKeySize);
// Prints: 32 (symmetric key size in bytes)
})();
keyObject.asymmetricKeyDetails
{Object}
modulusLength: {number} Key size in bits (RSA, DSA).
publicExponent: {bigint} Public exponent (RSA).
hashAlgorithm: {string} Name of the message digest (RSA-
PSS).
mgf1HashAlgorithm: {string} Name of the message digest used
by MGF1 (RSA-PSS).
saltLength: {number} Minimal salt length in bytes (RSA-
PSS).
divisorLength: {number} Size of q in bits (DSA).
namedCurve: {string} Name of the curve (EC).
This property exists only on asymmetric keys. Depending on the type
of the key, this object contains information about the key. None of
the information obtained through this property can be used to
uniquely identify a key or to compromise the security of the key.
For RSA-PSS keys, if the key material contains an RSASSA-PSS-params
sequence, the hashAlgorithm, mgf1HashAlgorithm, and saltLength
properties will be set.
Other key details might be exposed via this API using additional
attributes.
keyObject.asymmetricKeyType
{string}
For asymmetric keys, this property represents the type of the key.
Supported key types are:
'rsa' (OID 1.2.840.113549.1.1.1)
'rsa-pss' (OID 1.2.840.113549.1.1.10)
'dsa' (OID 1.2.840.10040.4.1)
'ec' (OID 1.2.840.10045.2.1)
'x25519' (OID 1.3.101.110)
'x448' (OID 1.3.101.111)
'ed25519' (OID 1.3.101.112)
'ed448' (OID 1.3.101.113)
'dh' (OID 1.2.840.113549.1.3.1)
This property is undefined for unrecognized KeyObject types and
symmetric keys.
keyObject.export([options])
options: {Object}
Returns: {string | Buffer | Object}
For symmetric keys, the following encoding options can be used:
format: {string} Must be 'buffer' (default) or 'jwk'.
For public keys, the following encoding options can be used:
type: {string} Must be one of 'pkcs1' (RSA only) or 'spki'.
format: {string} Must be 'pem', 'der', or 'jwk'.
For private keys, the following encoding options can be used:
type: {string} Must be one of 'pkcs1' (RSA only), 'pkcs8' or
'sec1' (EC only).
format: {string} Must be 'pem', 'der', or 'jwk'.
cipher: {string} If specified, the private key will be encrypted with
the given cipher and passphrase using PKCS#5 v2.0 password
based encryption.
passphrase: {string | Buffer} The passphrase to use for
encryption, see cipher.
The result type depends on the selected encoding format: when PEM,
the result is a string; when DER, it will be a buffer containing the data
encoded as DER; when JWK, it will be an object.
When the JWK encoding format is selected, all other encoding options
are ignored.
PKCS#1, SEC1, and PKCS#8 type keys can be encrypted by using a
combination of the cipher and format options. The PKCS#8 type can
be used with any format to encrypt any key algorithm (RSA, EC, or
DH) by specifying a cipher. PKCS#1 and SEC1 can only be encrypted
by specifying a cipher when the PEM format is used. For maximum
compatibility, use PKCS#8 for encrypted private keys. Since PKCS#8
defines its own encryption mechanism, PEM-level encryption is not
supported when encrypting a PKCS#8 key. See RFC 5208 for
PKCS#8 encryption and RFC 1421 for PKCS#1 and SEC1 encryption.
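Example (a minimal sketch; the curve and passphrase are illustrative):
exporting the same private key as an encrypted PKCS#8 PEM string and
as a JWK object:
const { generateKeyPairSync } = require('node:crypto');
const { privateKey } = generateKeyPairSync('ec', { namedCurve: 'prime256v1' });
// PEM string, encrypted with the given cipher and passphrase.
const pem = privateKey.export({
  type: 'pkcs8',
  format: 'pem',
  cipher: 'aes-256-cbc',
  passphrase: 'top secret',
});
// JWK object; all other encoding options are ignored.
const jwk = privateKey.export({ format: 'jwk' });
console.log(typeof pem, typeof jwk); // string object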
keyObject.equals(otherKeyObject)
otherKeyObject: {KeyObject} A KeyObject with which to compare
keyObject.
Returns: {boolean}
Returns true or false depending on whether the keys have exactly the
same type, value, and parameters. This method is not constant time.
keyObject.symmetricKeySize
{number}
For secret keys, this property represents the size of the key in bytes.
This property is undefined for asymmetric keys.
keyObject.type
{string}
Depending on the type of this KeyObject, this property is either
'secret' for secret (symmetric) keys, 'public' for public
(asymmetric) keys or 'private' for private (asymmetric) keys.
Class: Sign
Extends: {stream.Writable}
The Sign class is a utility for generating signatures. It can be used in
one of two ways:
As a writable stream, where data to be signed is written and the
sign.sign() method is used to generate and return the signature,
or
Using the sign.update() and sign.sign() methods to produce the
signature.
The crypto.createSign() method is used to create Sign instances. The
argument is the string name of the hash function to use. Sign objects
are not to be created directly using the new keyword.
Example: Using Sign and Verify objects as streams:
const {
generateKeyPairSync,
createSign,
createVerify,
} = await import('node:crypto');
const { privateKey, publicKey } = generateKeyPairSync('ec', {
namedCurve: 'sect239k1',
});
const sign = createSign('SHA256');
sign.write('some data to sign');
sign.end();
const signature = sign.sign(privateKey, 'hex');
const verify = createVerify('SHA256');
verify.write('some data to sign');
verify.end();
console.log(verify.verify(publicKey, signature, 'hex'));
// Prints: true
const {
generateKeyPairSync,
createSign,
createVerify,
} = require('node:crypto');
const { privateKey, publicKey } = generateKeyPairSync('ec', {
namedCurve: 'sect239k1',
});
const sign = createSign('SHA256');
sign.write('some data to sign');
sign.end();
const signature = sign.sign(privateKey, 'hex');
const verify = createVerify('SHA256');
verify.write('some data to sign');
verify.end();
console.log(verify.verify(publicKey, signature, 'hex'));
// Prints: true
Example: Using the sign.update() and verify.update() methods:
const {
generateKeyPairSync,
createSign,
createVerify,
} = await import('node:crypto');
const { privateKey, publicKey } = generateKeyPairSync('rsa', {
modulusLength: 2048,
});
const sign = createSign('SHA256');
sign.update('some data to sign');
sign.end();
const signature = sign.sign(privateKey);
const verify = createVerify('SHA256');
verify.update('some data to sign');
verify.end();
console.log(verify.verify(publicKey, signature));
// Prints: true
const {
generateKeyPairSync,
createSign,
createVerify,
} = require('node:crypto');
const { privateKey, publicKey } = generateKeyPairSync('rsa', {
modulusLength: 2048,
});
const sign = createSign('SHA256');
sign.update('some data to sign');
sign.end();
const signature = sign.sign(privateKey);
const verify = createVerify('SHA256');
verify.update('some data to sign');
verify.end();
console.log(verify.verify(publicKey, signature));
// Prints: true
sign.sign(privateKey[, outputEncoding])
privateKey {Object|string|ArrayBuffer|Buffer|TypedArray|DataView|KeyObject|CryptoKey}
dsaEncoding {string}
padding {integer}
saltLength {integer}
outputEncoding {string} The encoding of the return value.
Returns: {Buffer | string}
Calculates the signature on all the data passed through using either
sign.update() or sign.write().
If privateKey is not a KeyObject, this function behaves as if privateKey
had been passed to crypto.createPrivateKey(). If it is an object, the
following additional properties can be passed:
dsaEncoding {string} For DSA and ECDSA, this option specifies
the format of the generated signature. It can be one of the
following:
'der' (default): DER-encoded ASN.1 signature structure
encoding (r, s).
'ieee-p1363': Signature format r || s as proposed in IEEE-
P1363.
padding {integer} Optional padding value for RSA, one of the
following:
crypto.constants.RSA_PKCS1_PADDING (default)
crypto.constants.RSA_PKCS1_PSS_PADDING
RSA_PKCS1_PSS_PADDING will use MGF1 with the same hash function
used to sign the message as specified in section 3.1 of RFC 4055,
unless an MGF1 hash function has been specified as part of the
key in compliance with section 3.3 of RFC 4055.
saltLength {integer} Salt length for when padding is
RSA_PKCS1_PSS_PADDING. The special value
crypto.constants.RSA_PSS_SALTLEN_DIGEST sets the salt length to the
digest size, crypto.constants.RSA_PSS_SALTLEN_MAX_SIGN (default)
sets it to the maximum permissible value.
If outputEncoding is provided a string is returned; otherwise a Buffer
is returned.
The Sign object cannot be used again after the sign.sign() method has
been called. Multiple calls to sign.sign() will result in an error being
thrown.
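Example (a sketch; the key and data are illustrative): signing and
verifying with RSA-PSS padding and a digest-sized salt:
const {
  generateKeyPairSync,
  createSign,
  createVerify,
  constants,
} = require('node:crypto');
const { privateKey, publicKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048,
});
const sign = createSign('SHA256');
sign.update('some data to sign');
const signature = sign.sign({
  key: privateKey,
  padding: constants.RSA_PKCS1_PSS_PADDING,
  saltLength: constants.RSA_PSS_SALTLEN_DIGEST,
});
const verify = createVerify('SHA256');
verify.update('some data to sign');
console.log(verify.verify({
  key: publicKey,
  padding: constants.RSA_PKCS1_PSS_PADDING,
  saltLength: constants.RSA_PSS_SALTLEN_DIGEST,
}, signature));
// Prints: true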
sign.update(data[, inputEncoding])
data {string|Buffer|TypedArray|DataView}
inputEncoding {string} The encoding of the data string.
Updates the Sign content with the given data, the encoding of which
is given in inputEncoding. If inputEncoding is not provided, and the data
is a string, an encoding of 'utf8' is enforced. If data is a Buffer,
TypedArray, or DataView, then inputEncoding is ignored.
This can be called many times with new data as it is streamed.
Class: Verify
Extends: {stream.Writable}
The Verify class is a utility for verifying signatures. It can be used in
one of two ways:
As a writable stream where written data is used to validate
against the supplied signature, or
Using the verify.update() and verify.verify() methods to verify
the signature.
The crypto.createVerify() method is used to create Verify instances.
Verify objects are not to be created directly using the new keyword.
See Sign for examples.
verify.update(data[, inputEncoding])
data {string|Buffer|TypedArray|DataView}
inputEncoding {string} The encoding of the data string.
Updates the Verify content with the given data, the encoding of
which is given in inputEncoding. If inputEncoding is not provided, and
the data is a string, an encoding of 'utf8' is enforced. If data is a
Buffer, TypedArray, or DataView, then inputEncoding is ignored.
This can be called many times with new data as it is streamed.
verify.verify(object, signature[,
signatureEncoding])
object {Object|string|ArrayBuffer|Buffer|TypedArray|DataView|KeyObject|CryptoKey}
dsaEncoding {string}
padding {integer}
saltLength {integer}
signature {string|ArrayBuffer|Buffer|TypedArray|DataView}
signatureEncoding {string} The encoding of the signature string.
Returns: {boolean} true or false depending on the validity of the
signature for the data and public key.
Verifies the provided data using the given object and signature.
If object is not a KeyObject, this function behaves as if object had
been passed to crypto.createPublicKey(). If it is an object, the
following additional properties can be passed:
dsaEncoding {string} For DSA and ECDSA, this option specifies
the format of the signature. It can be one of the following:
'der' (default): DER-encoded ASN.1 signature structure
encoding (r, s).
'ieee-p1363': Signature format r || s as proposed in IEEE-
P1363.
padding {integer} Optional padding value for RSA, one of the
following:
crypto.constants.RSA_PKCS1_PADDING (default)
crypto.constants.RSA_PKCS1_PSS_PADDING
RSA_PKCS1_PSS_PADDING will use MGF1 with the same hash function
used to verify the message as specified in section 3.1 of RFC
4055, unless an MGF1 hash function has been specified as part of
the key in compliance with section 3.3 of RFC 4055.
saltLength {integer} Salt length for when padding is
RSA_PKCS1_PSS_PADDING. The special value
crypto.constants.RSA_PSS_SALTLEN_DIGEST sets the salt length to the
digest size, crypto.constants.RSA_PSS_SALTLEN_AUTO (default)
causes it to be determined automatically.
The signature argument is the previously calculated signature for the
data, in the signatureEncoding. If a signatureEncoding is specified, the
signature is expected to be a string; otherwise signature is expected
to be a Buffer, TypedArray, or DataView.
The Verify object cannot be used again after verify.verify() has
been called. Multiple calls to verify.verify() will result in an error
being thrown.
Because public keys can be derived from private keys, a private key
may be passed instead of a public key.
Class: X509Certificate
Encapsulates an X509 certificate and provides read-only access to its
information.
const { X509Certificate } = await import('node:crypto');
const x509 = new X509Certificate('{... pem encoded cert ...}');
console.log(x509.subject);
const { X509Certificate } = require('node:crypto');
const x509 = new X509Certificate('{... pem encoded cert ...}');
console.log(x509.subject);
new X509Certificate(buffer)
buffer {string|TypedArray|Buffer|DataView} A PEM or DER
encoded X509 Certificate.
x509.ca
Type: {boolean} Will be true if this is a Certificate Authority (CA)
certificate.
x509.checkEmail(email[, options])
email {string}
options {Object}
subject {string} 'default', 'always', or 'never'. Default:
'default'.
Returns: {string|undefined} Returns email if the certificate
matches, undefined if it does not.
Checks whether the certificate matches the given email address.
If the 'subject' option is undefined or set to 'default', the certificate
subject is only considered if the subject alternative name extension
either does not exist or does not contain any email addresses.
If the 'subject' option is set to 'always' and if the subject alternative
name extension either does not exist or does not contain a matching
email address, the certificate subject is considered.
If the 'subject' option is set to 'never', the certificate subject is never
considered, even if the certificate contains no subject alternative
names.
x509.checkHost(name[, options])
name {string}
options {Object}
subject {string} 'default', 'always', or 'never'. Default:
'default'.
wildcards {boolean} Default: true.
partialWildcards {boolean} Default: true.
multiLabelWildcards {boolean} Default: false.
singleLabelSubdomains {boolean} Default: false.
Returns: {string|undefined} Returns a subject name that
matches name, or undefined if no subject name matches name.
Checks whether the certificate matches the given host name.
If the certificate matches the given host name, the matching subject
name is returned. The returned name might be an exact match (e.g.,
foo.example.com) or it might contain wildcards (e.g., *.example.com).
Because host name comparisons are case-insensitive, the returned
subject name might also differ from the given name in capitalization.
If the 'subject' option is undefined or set to 'default', the certificate
subject is only considered if the subject alternative name extension
either does not exist or does not contain any DNS names. This
behavior is consistent with RFC 2818 (“HTTP Over TLS”).
If the 'subject' option is set to 'always' and if the subject alternative
name extension either does not exist or does not contain a matching
DNS name, the certificate subject is considered.
If the 'subject' option is set to 'never', the certificate subject is never
considered, even if the certificate contains no subject alternative
names.
x509.checkIP(ip)
ip {string}
Returns: {string|undefined} Returns ip if the certificate matches,
undefined if it does not.
Checks whether the certificate matches the given IP address (IPv4 or
IPv6).
Only RFC 5280 iPAddress subject alternative names are considered,
and they must match the given ip address exactly. Other subject
alternative names as well as the subject field of the certificate are
ignored.
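Example (a sketch; 'cert.pem' is an illustrative path to a PEM-encoded
certificate, e.g. one exported from a TLS peer):
const { readFileSync } = require('node:fs');
const { X509Certificate } = require('node:crypto');
const x509 = new X509Certificate(readFileSync('cert.pem'));
// Each check returns the matching name or undefined.
console.log(x509.checkHost('example.com'));
console.log(x509.checkEmail('user@example.com'));
console.log(x509.checkIP('192.0.2.1'));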
x509.checkIssued(otherCert)
otherCert {X509Certificate}
Returns: {boolean}
Checks whether this certificate was issued by the given otherCert.
x509.checkPrivateKey(privateKey)
privateKey {KeyObject} A private key.
Returns: {boolean}
Checks whether the public key for this certificate is consistent with
the given private key.
x509.fingerprint
Type: {string}
The SHA-1 fingerprint of this certificate.
Because SHA-1 is cryptographically broken and because the security
of SHA-1 is significantly worse than that of algorithms that are
commonly used to sign certificates, consider using
x509.fingerprint256 instead.
x509.fingerprint256
Type: {string}
The SHA-256 fingerprint of this certificate.
x509.fingerprint512
Type: {string}
The SHA-512 fingerprint of this certificate.
Because computing the SHA-256 fingerprint is usually faster and
because it is only half the size of the SHA-512 fingerprint,
x509.fingerprint256 may be a better choice. While SHA-512
presumably provides a higher level of security in general, the security
of SHA-256 matches that of most algorithms that are commonly
used to sign certificates.
x509.infoAccess
Type: {string}
A textual representation of the certificate’s authority information
access extension.
This is a line feed separated list of access descriptions. Each line
begins with the access method and the kind of the access location,
followed by a colon and the value associated with the access location.
After the prefix denoting the access method and the kind of the
access location, the remainder of each line might be enclosed in
quotes to indicate that the value is a JSON string literal. For
backward compatibility, Node.js only uses JSON string literals
within this property when necessary to avoid ambiguity. Third-party
code should be prepared to handle both possible entry formats.
x509.issuer
Type: {string}
The issuer identification included in this certificate.
x509.issuerCertificate
Type: {X509Certificate}
The issuer certificate or undefined if the issuer certificate is not
available.
x509.extKeyUsage
Type: {string[]}
An array detailing the key extended usages for this certificate.
x509.publicKey
Type: {KeyObject}
The public key {KeyObject} for this certificate.
x509.raw
Type: {Buffer}
A Buffer containing the DER encoding of this certificate.
x509.serialNumber
Type: {string}
The serial number of this certificate.
Serial numbers are assigned by certificate authorities and do not
uniquely identify certificates. Consider using x509.fingerprint256 as a
unique identifier instead.
x509.subject
Type: {string}
The complete subject of this certificate.
x509.subjectAltName
Type: {string}
The subject alternative name specified for this certificate.
This is a comma-separated list of subject alternative names. Each
entry begins with a string identifying the kind of the subject
alternative name followed by a colon and the value associated with
the entry.
Earlier versions of Node.js incorrectly assumed that it is safe to split
this property at the two-character sequence ', ' (see CVE-2021-
44532). However, both malicious and legitimate certificates can
contain subject alternative names that include this sequence when
represented as a string.
After the prefix denoting the type of the entry, the remainder of each
entry might be enclosed in quotes to indicate that the value is a
JSON string literal. For backward compatibility, Node.js only uses
JSON string literals within this property when necessary to avoid
ambiguity. Third-party code should be prepared to handle both
possible entry formats.
x509.toJSON()
Type: {string}
There is no standard JSON encoding for X509 certificates. The
toJSON() method returns a string containing the PEM encoded
certificate.
x509.toLegacyObject()
Type: {Object}
Returns information about this certificate using the legacy certificate
object encoding.
x509.toString()
Type: {string}
Returns the PEM-encoded certificate.
x509.validFrom
Type: {string}
The date/time from which this certificate is considered valid.
x509.validTo
Type: {string}
The date/time until which this certificate is considered valid.
x509.verify(publicKey)
publicKey {KeyObject} A public key.
Returns: {boolean}
Verifies that this certificate was signed by the given public key. Does
not perform any other validation checks on the certificate.
node:crypto module methods and
properties
crypto.constants
{Object}
An object containing commonly used constants for crypto and
security related operations. The specific constants currently defined
are described in Crypto constants.
crypto.fips
Stability: 0 - Deprecated
Property for checking and controlling whether a FIPS compliant
crypto provider is currently in use. Setting to true requires a FIPS
build of Node.js.
This property is deprecated. Please use crypto.setFips() and
crypto.getFips() instead.
crypto.checkPrime(candidate[, options],
callback)
candidate {ArrayBuffer|SharedArrayBuffer|TypedArray|Buffer|DataView|bigint}
A possible prime encoded as a sequence of big endian octets of
arbitrary length.
options {Object}
checks {number} The number of Miller-Rabin probabilistic
primality iterations to perform. When the value is 0 (zero), a
number of checks is used that yields a false positive rate of at
most 2**-64 for random input. Care must be used when
selecting a number of checks. Refer to the OpenSSL
documentation for the BN_is_prime_ex function nchecks
options for more details. Default: 0.
callback {Function}
err {Error} Set to an {Error} object if an error occurred
during check.
result {boolean} true if the candidate is a prime with an error
probability less than 0.25 ** options.checks.
Checks the primality of the candidate.
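Example (a minimal sketch):
const { checkPrime } = require('node:crypto');
// 2**89 - 1 is a known Mersenne prime.
checkPrime(2n ** 89n - 1n, (err, result) => {
  if (err) throw err;
  console.log(result); // Prints: true
});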
crypto.checkPrimeSync(candidate[, options])
candidate {ArrayBuffer|SharedArrayBuffer|TypedArray|Buffer|DataView|bigint}
A possible prime encoded as a sequence of big endian octets of
arbitrary length.
options {Object}
checks {number} The number of Miller-Rabin probabilistic
primality iterations to perform. When the value is 0 (zero), a
number of checks is used that yields a false positive rate of at
most 2**-64 for random input. Care must be used when
selecting a number of checks. Refer to the OpenSSL
documentation for the BN_is_prime_ex function nchecks
options for more details. Default: 0.
Returns: {boolean} true if the candidate is a prime with an error
probability less than 0.25 ** options.checks.
Checks the primality of the candidate.
crypto.createCipheriv(algorithm, key, iv[,
options])
algorithm {string}
key {string|ArrayBuffer|Buffer|TypedArray|DataView|KeyObject|CryptoKey}
iv {string|ArrayBuffer|Buffer|TypedArray|DataView|null}
options {Object} stream.transform options
Returns: {Cipher}
Creates and returns a Cipher object, with the given algorithm, key and
initialization vector (iv).
The options argument controls stream behavior and is optional
except when a cipher in CCM or OCB mode (e.g. 'aes-128-ccm') is
used. In that case, the authTagLength option is required and specifies
the length of the authentication tag in bytes, see CCM mode. In GCM
mode, the authTagLength option is not required but can be used to set
the length of the authentication tag that will be returned by
getAuthTag() and defaults to 16 bytes. For chacha20-poly1305, the
authTagLength option defaults to 16 bytes.
The algorithm is dependent on OpenSSL; examples are 'aes192', etc.
On recent OpenSSL releases, openssl list -cipher-algorithms will
display the available cipher algorithms.
The key is the raw key used by the algorithm and iv is an initialization
vector. Both arguments must be 'utf8' encoded strings, Buffers,
TypedArray, or DataViews. The key may optionally be a KeyObject of type
secret. If the cipher does not need an initialization vector, iv may be
null.
When passing strings for key or iv, please consider caveats when
using strings as inputs to cryptographic APIs.
Initialization vectors should be unpredictable and unique; ideally,
they will be cryptographically random. They do not have to be secret:
IVs are typically just added to ciphertext messages unencrypted. It
may sound contradictory that something has to be unpredictable and
unique, but does not have to be secret; remember that an attacker
must not be able to predict ahead of time what a given IV will be.
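Example (a minimal sketch using AES-256-GCM; the key, IV, and
plaintext are illustrative):
const { createCipheriv, randomBytes } = require('node:crypto');
const key = randomBytes(32); // 256-bit key
const iv = randomBytes(12);  // 96-bit IV, the usual choice for GCM
const cipher = createCipheriv('aes-256-gcm', key, iv);
const ciphertext = Buffer.concat([
  cipher.update('some clear text data', 'utf8'),
  cipher.final(),
]);
const authTag = cipher.getAuthTag(); // 16 bytes by default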
crypto.createDecipheriv(algorithm, key,
iv[, options])
algorithm {string}
key {string|ArrayBuffer|Buffer|TypedArray|DataView|KeyObject|CryptoKey}
iv {string|ArrayBuffer|Buffer|TypedArray|DataView|null}
options {Object} stream.transform options
Returns: {Decipher}
Creates and returns a Decipher object that uses the given algorithm,
key and initialization vector (iv).
The options argument controls stream behavior and is optional
except when a cipher in CCM or OCB mode (e.g. 'aes-128-ccm') is
used. In that case, the authTagLength option is required and specifies
the length of the authentication tag in bytes, see CCM mode. In GCM
mode, the authTagLength option is not required but can be used to
restrict accepted authentication tags to those with the specified
length. For chacha20-poly1305, the authTagLength option defaults to 16
bytes.
The algorithm is dependent on OpenSSL; examples are 'aes192', etc.
On recent OpenSSL releases, openssl list -cipher-algorithms will
display the available cipher algorithms.
The key is the raw key used by the algorithm and iv is an initialization
vector. Both arguments must be 'utf8' encoded strings, Buffers,
TypedArray, or DataViews. The key may optionally be a KeyObject of type
secret. If the cipher does not need an initialization vector, iv may be
null.
When passing strings for key or iv, please consider caveats when
using strings as inputs to cryptographic APIs.
Initialization vectors should be unpredictable and unique; ideally,
they will be cryptographically random. They do not have to be secret:
IVs are typically just added to ciphertext messages unencrypted. It
may sound contradictory that something has to be unpredictable and
unique, but does not have to be secret; remember that an attacker
must not be able to predict ahead of time what a given IV will be.
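Continuing the GCM sketch from crypto.createCipheriv() above (key, iv,
ciphertext, and authTag are assumed to come from the encryption step):
const { createDecipheriv } = require('node:crypto');
const decipher = createDecipheriv('aes-256-gcm', key, iv);
decipher.setAuthTag(authTag);
const plaintext = Buffer.concat([
  decipher.update(ciphertext),
  decipher.final(), // Throws if the authentication tag does not match.
]).toString('utf8');
console.log(plaintext); // Prints: some clear text data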
crypto.createDiffieHellman(prime[,
primeEncoding][, generator][,
generatorEncoding])
prime {string|ArrayBuffer|Buffer|TypedArray|DataView}
primeEncoding {string} The encoding of the prime string.
generator
{number|string|ArrayBuffer|Buffer|TypedArray|DataView}
Default: 2
generatorEncoding {string} The encoding of the generator string.
Returns: {DiffieHellman}
Creates a DiffieHellman key exchange object using the supplied prime
and an optional specific generator.
The generator argument can be a number, string, or Buffer. If
generator is not specified, the value 2 is used.
If primeEncoding is specified, prime is expected to be a string;
otherwise a Buffer, TypedArray, or DataView is expected.
If generatorEncoding is specified, generator is expected to be a string;
otherwise a number, Buffer, TypedArray, or DataView is expected.
crypto.createDiffieHellman(primeLength[,
generator])
primeLength {number}
generator {number} Default: 2
Returns: {DiffieHellman}
Creates a DiffieHellman key exchange object and generates a prime of
primeLength bits using an optional specific numeric generator. If
generator is not specified, the value 2 is used.
crypto.createDiffieHellmanGroup(name)
name {string}
Returns: {DiffieHellmanGroup}
An alias for crypto.getDiffieHellman().
crypto.createECDH(curveName)
curveName {string}
Returns: {ECDH}
Creates an Elliptic Curve Diffie-Hellman (ECDH) key exchange object
using a predefined curve specified by the curveName string. Use
crypto.getCurves() to obtain a list of available curve names. On
recent OpenSSL releases, openssl ecparam -list_curves will also
display the name and description of each available elliptic curve.
crypto.createHash(algorithm[, options])
algorithm {string}
options {Object} stream.transform options
Returns: {Hash}
Creates and returns a Hash object that can be used to generate hash
digests using the given algorithm. Optional options argument controls
stream behavior. For XOF hash functions such as 'shake256', the
outputLength option can be used to specify the desired output length
in bytes.
The algorithm is dependent on the available algorithms supported by
the version of OpenSSL on the platform. Examples are 'sha256',
'sha512', etc. On recent releases of OpenSSL, openssl list -digest-
algorithms will display the available digest algorithms.
Example: generating the sha256 sum of a file
import {
createReadStream,
} from 'node:fs';
import { argv } from 'node:process';
const {
createHash,
} = await import('node:crypto');
const filename = argv[2];
const hash = createHash('sha256');
const input = createReadStream(filename);
input.on('readable', () => {
// Only one element is going to be produced by the
// hash stream.
const data = input.read();
if (data)
hash.update(data);
else {
console.log(`${hash.digest('hex')} ${filename}`);
}
});
const {
createReadStream,
} = require('node:fs');
const {
createHash,
} = require('node:crypto');
const { argv } = require('node:process');
const filename = argv[2];
const hash = createHash('sha256');
const input = createReadStream(filename);
input.on('readable', () => {
// Only one element is going to be produced by the
// hash stream.
const data = input.read();
if (data)
hash.update(data);
else {
console.log(`${hash.digest('hex')} ${filename}`);
}
});
crypto.createHmac(algorithm, key[,
options])
algorithm {string}
key {string|ArrayBuffer|Buffer|TypedArray|DataView|KeyObject|CryptoKey}
options {Object} stream.transform options
encoding {string} The string encoding to use when key is a
string.
Returns: {Hmac}
Creates and returns an Hmac object that uses the given algorithm and
key. Optional options argument controls stream behavior.
The algorithm is dependent on the available algorithms supported by
the version of OpenSSL on the platform. Examples are 'sha256',
'sha512', etc. On recent releases of OpenSSL, openssl list -digest-
algorithms will display the available digest algorithms.
The key is the HMAC key used to generate the cryptographic HMAC
hash. If it is a KeyObject, its type must be secret. If it is a string,
please consider caveats when using strings as inputs to cryptographic
APIs. If it was obtained from a cryptographically secure source of
entropy, such as crypto.randomBytes() or crypto.generateKey(), its
length should not exceed the block size of algorithm (e.g., 512 bits for
SHA-256).
Example: generating the sha256 HMAC of a file
import {
createReadStream,
} from 'node:fs';
import { argv } from 'node:process';
const {
createHmac,
} = await import('node:crypto');
const filename = argv[2];
const hmac = createHmac('sha256', 'a secret');
const input = createReadStream(filename);
input.on('readable', () => {
// Only one element is going to be produced by the
// hash stream.
const data = input.read();
if (data)
hmac.update(data);
else {
console.log(`${hmac.digest('hex')} ${filename}`);
}
});
const {
createReadStream,
} = require('node:fs');
const {
createHmac,
} = require('node:crypto');
const { argv } = require('node:process');
const filename = argv[2];
const hmac = createHmac('sha256', 'a secret');
const input = createReadStream(filename);
input.on('readable', () => {
// Only one element is going to be produced by the
// hash stream.
const data = input.read();
if (data)
hmac.update(data);
else {
console.log(`${hmac.digest('hex')} ${filename}`);
}
});
crypto.createPrivateKey(key)
key {Object|string|ArrayBuffer|Buffer|TypedArray|DataView}
key: {string|ArrayBuffer|Buffer|TypedArray|DataView|Object}
The key material, either in PEM, DER, or JWK format.
format: {string} Must be 'pem', 'der', or 'jwk'. Default:
'pem'.
type: {string} Must be 'pkcs1', 'pkcs8' or 'sec1'. This option
is required only if the format is 'der' and ignored otherwise.
passphrase: {string | Buffer} The passphrase to use for
decryption.
encoding: {string} The string encoding to use when key is a
string.
Returns: {KeyObject}
Creates and returns a new key object containing a private key. If key
is a string or Buffer, format is assumed to be 'pem'; otherwise, key
must be an object with the properties described above.
If the private key is encrypted, a passphrase must be specified. The
length of the passphrase is limited to 1024 bytes.
crypto.createPublicKey(key)
key {Object|string|ArrayBuffer|Buffer|TypedArray|DataView}
key: {string|ArrayBuffer|Buffer|TypedArray|DataView|Object}
The key material, either in PEM, DER, or JWK format.
format: {string} Must be 'pem', 'der', or 'jwk'. Default:
'pem'.
type: {string} Must be 'pkcs1' or 'spki'. This option is
required only if the format is 'der' and ignored otherwise.
encoding {string} The string encoding to use when key is a
string.
Returns: {KeyObject}
Creates and returns a new key object containing a public key. If key is
a string or Buffer, format is assumed to be 'pem'; if key is a KeyObject
with type 'private', the public key is derived from the given private
key; otherwise, key must be an object with the properties described
above.
If the format is 'pem', the key may also be an X.509 certificate.
Because public keys can be derived from private keys, a private key
may be passed instead of a public key. In that case, this function
behaves as if crypto.createPrivateKey() had been called, except that
the type of the returned KeyObject will be 'public' and that the
private key cannot be extracted from the returned KeyObject.
Similarly, if a KeyObject with type 'private' is given, a new KeyObject
with type 'public' will be returned and it will be impossible to extract
the private key from the returned object.
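Example (a sketch; 'private.pem' is an illustrative path to an
encrypted PKCS#8 PEM file): loading a private key and deriving its
public key:
const { readFileSync } = require('node:fs');
const { createPrivateKey, createPublicKey } = require('node:crypto');
const privateKey = createPrivateKey({
  key: readFileSync('private.pem'),
  passphrase: 'top secret',
});
const publicKey = createPublicKey(privateKey);
console.log(privateKey.type, publicKey.type); // Prints: private public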
crypto.createSecretKey(key[, encoding])
key {string|ArrayBuffer|Buffer|TypedArray|DataView}
encoding {string} The string encoding when key is a string.
Returns: {KeyObject}
Creates and returns a new key object containing a secret key for
symmetric encryption or Hmac.
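Example (a minimal sketch):
const { createSecretKey, randomBytes } = require('node:crypto');
const key = createSecretKey(randomBytes(32));
console.log(key.type, key.symmetricKeySize); // Prints: secret 32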
crypto.createSign(algorithm[, options])
algorithm {string}
options {Object} stream.Writable options
Returns: {Sign}
Creates and returns a Sign object that uses the given algorithm. Use
crypto.getHashes() to obtain the names of the available digest
algorithms. Optional options argument controls the stream.Writable
behavior.
In some cases, a Sign instance can be created using the name of a
signature algorithm, such as 'RSA-SHA256', instead of a digest
algorithm. This will use the corresponding digest algorithm. This
does not work for all signature algorithms, such as 'ecdsa-with-
SHA256', so it is best to always use digest algorithm names.
crypto.createVerify(algorithm[, options])
algorithm {string}
options {Object} stream.Writable options
Returns: {Verify}
Creates and returns a Verify object that uses the given algorithm. Use
crypto.getHashes() to obtain an array of names of the available
signing algorithms. Optional options argument controls the
stream.Writable behavior.
In some cases, a Verify instance can be created using the name of a
signature algorithm, such as 'RSA-SHA256', instead of a digest
algorithm. This will use the corresponding digest algorithm. This
does not work for all signature algorithms, such as 'ecdsa-with-
SHA256', so it is best to always use digest algorithm names.
crypto.diffieHellman(options)
options: {Object}
privateKey: {KeyObject}
publicKey: {KeyObject}
Returns: {Buffer}
Computes the Diffie-Hellman secret based on a privateKey and a
publicKey. Both keys must have the same asymmetricKeyType, which
must be one of 'dh' (for Diffie-Hellman), 'ec' (for ECDH), 'x448', or
'x25519' (for ECDH-ES).
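Example (a sketch using X25519 key pairs):
const { generateKeyPairSync, diffieHellman } = require('node:crypto');
const alice = generateKeyPairSync('x25519');
const bob = generateKeyPairSync('x25519');
const aliceSecret = diffieHellman({
  privateKey: alice.privateKey,
  publicKey: bob.publicKey,
});
const bobSecret = diffieHellman({
  privateKey: bob.privateKey,
  publicKey: alice.publicKey,
});
// Both parties derive the same shared secret.
console.log(aliceSecret.equals(bobSecret)); // Prints: true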
crypto.generateKey(type, options, callback)
type: {string} The intended use of the generated secret key.
Currently accepted values are 'hmac' and 'aes'.
options: {Object}
length: {number} The bit length of the key to generate. This
must be a value greater than 0.
If type is 'hmac', the minimum is 8, and the maximum
length is 2**31-1. If the value is not a multiple of 8, the
generated key will be truncated to Math.floor(length / 8).
If type is 'aes', the length must be one of 128, 192, or 256.
callback: {Function}
err: {Error}
key: {KeyObject}
Asynchronously generates a new random secret key of the given
length. The type will determine which validations will be performed
on the length.
const {
generateKey,
} = await import('node:crypto');
generateKey('hmac', { length: 512 }, (err, key) => {
if (err) throw err;
console.log(key.export().toString('hex')); // 46e..........620
});
const {
generateKey,
} = require('node:crypto');
generateKey('hmac', { length: 512 }, (err, key) => {
if (err) throw err;
console.log(key.export().toString('hex')); // 46e..........620
});
The size of a generated HMAC key should not exceed the block size
of the underlying hash function. See crypto.createHmac() for more
information.
crypto.generateKeyPair(type, options,
callback)
type: {string} Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519',
'ed448', 'x25519', 'x448', or 'dh'.
options: {Object}
modulusLength: {number} Key size in bits (RSA, DSA).
publicExponent: {number} Public exponent (RSA). Default:
0x10001.
hashAlgorithm: {string} Name of the message digest (RSA-
PSS).
mgf1HashAlgorithm: {string} Name of the message digest used
by MGF1 (RSA-PSS).
saltLength: {number} Minimal salt length in bytes (RSA-
PSS).
divisorLength: {number} Size of q in bits (DSA).
namedCurve: {string} Name of the curve to use (EC).
prime: {Buffer} The prime parameter (DH).
primeLength: {number} Prime length in bits (DH).
generator: {number} Custom generator (DH). Default: 2.
groupName: {string} Diffie-Hellman group name (DH). See
crypto.getDiffieHellman().
paramEncoding: {string} Must be 'named' or 'explicit' (EC).
Default: 'named'.
publicKeyEncoding: {Object} See keyObject.export().
privateKeyEncoding: {Object} See keyObject.export().
callback: {Function}
err: {Error}
publicKey: {string | Buffer | KeyObject}
privateKey: {string | Buffer | KeyObject}
Generates a new asymmetric key pair of the given type. RSA, RSA-
PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently
supported.
If a publicKeyEncoding or privateKeyEncoding was specified, this
function behaves as if keyObject.export() had been called on its
result. Otherwise, the respective part of the key is returned as a
KeyObject.
It is recommended to encode public keys as 'spki' and private keys
as 'pkcs8' with encryption for long-term storage:
const {
generateKeyPair,
} = await import('node:crypto');
generateKeyPair('rsa', {
modulusLength: 4096,
publicKeyEncoding: {
type: 'spki',
format: 'pem',
},
privateKeyEncoding: {
type: 'pkcs8',
format: 'pem',
cipher: 'aes-256-cbc',
passphrase: 'top secret',
},
}, (err, publicKey, privateKey) => {
// Handle errors and use the generated key pair.
});
const {
generateKeyPair,
} = require('node:crypto');
generateKeyPair('rsa', {
modulusLength: 4096,
publicKeyEncoding: {
type: 'spki',
format: 'pem',
},
privateKeyEncoding: {
type: 'pkcs8',
format: 'pem',
cipher: 'aes-256-cbc',
passphrase: 'top secret',
},
}, (err, publicKey, privateKey) => {
// Handle errors and use the generated key pair.
});
On completion, callback will be called with err set to undefined and
publicKey / privateKey representing the generated key pair.
If this method is invoked as its util.promisify()ed version, it returns
a Promise for an Object with publicKey and privateKey properties.
crypto.generateKeyPairSync(type, options)
type: {string} Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519',
'ed448', 'x25519', 'x448', or 'dh'.
options: {Object}
modulusLength: {number} Key size in bits (RSA, DSA).
publicExponent: {number} Public exponent (RSA). Default:
0x10001.
hashAlgorithm: {string} Name of the message digest (RSA-
PSS).
mgf1HashAlgorithm: {string} Name of the message digest used
by MGF1 (RSA-PSS).
saltLength: {number} Minimal salt length in bytes (RSA-
PSS).
divisorLength: {number} Size of q in bits (DSA).
namedCurve: {string} Name of the curve to use (EC).
prime: {Buffer} The prime parameter (DH).
primeLength: {number} Prime length in bits (DH).
generator: {number} Custom generator (DH). Default: 2.
groupName: {string} Diffie-Hellman group name (DH). See
crypto.getDiffieHellman().
paramEncoding: {string} Must be 'named' or 'explicit' (EC).
Default: 'named'.
publicKeyEncoding: {Object} See keyObject.export().
privateKeyEncoding: {Object} See keyObject.export().
Returns: {Object}
publicKey: {string | Buffer | KeyObject}
privateKey: {string | Buffer | KeyObject}
Generates a new asymmetric key pair of the given type. RSA, RSA-
PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently
supported.
If a publicKeyEncoding or privateKeyEncoding was specified, this
function behaves as if keyObject.export() had been called on its
result. Otherwise, the respective part of the key is returned as a
KeyObject.
When encoding public keys, it is recommended to use 'spki'. When
encoding private keys, it is recommended to use 'pkcs8' with a
strong passphrase, and to keep the passphrase confidential.
const {
generateKeyPairSync,
} = await import('node:crypto');
const {
publicKey,
privateKey,
} = generateKeyPairSync('rsa', {
modulusLength: 4096,
publicKeyEncoding: {
type: 'spki',
format: 'pem',
},
privateKeyEncoding: {
type: 'pkcs8',
format: 'pem',
cipher: 'aes-256-cbc',
passphrase: 'top secret',
},
});
const {
generateKeyPairSync,
} = require('node:crypto');
const {
publicKey,
privateKey,
} = generateKeyPairSync('rsa', {
modulusLength: 4096,
publicKeyEncoding: {
type: 'spki',
format: 'pem',
},
privateKeyEncoding: {
type: 'pkcs8',
format: 'pem',
cipher: 'aes-256-cbc',
passphrase: 'top secret',
},
});
The return value { publicKey, privateKey } represents the generated
key pair. When PEM encoding was selected, the respective key will
be a string, otherwise it will be a buffer containing the data encoded
as DER.
crypto.generateKeySync(type, options)
type: {string} The intended use of the generated secret key.
Currently accepted values are 'hmac' and 'aes'.
options: {Object}
length: {number} The bit length of the key to generate.
If type is 'hmac', the minimum is 8, and the maximum
length is 2**31-1. If the value is not a multiple of 8, the
generated key will be truncated to Math.floor(length / 8).
If type is 'aes', the length must be one of 128, 192, or 256.
Returns: {KeyObject}
Synchronously generates a new random secret key of the given
length. The type will determine which validations will be performed
on the length.
const {
generateKeySync,
} = await import('node:crypto');
const key = generateKeySync('hmac', { length: 512 });
console.log(key.export().toString('hex')); // e89..........41e
const {
generateKeySync,
} = require('node:crypto');
const key = generateKeySync('hmac', { length: 512 });
console.log(key.export().toString('hex')); // e89..........41e
The size of a generated HMAC key should not exceed the block size
of the underlying hash function. See crypto.createHmac() for more
information.
crypto.generatePrime(size[, options[,
callback]])
size {number} The size (in bits) of the prime to generate.
options {Object}
add {ArrayBuffer|SharedArrayBuffer|TypedArray|Buffer|DataView|bigint}
rem {ArrayBuffer|SharedArrayBuffer|TypedArray|Buffer|DataView|bigint}
safe {boolean} Default: false.
bigint {boolean} When true, the generated prime is returned
as a bigint.
callback {Function}
err {Error}
prime {ArrayBuffer|bigint}
Generates a pseudorandom prime of size bits.
If options.safe is true, the prime will be a safe prime – that is, (prime
- 1) / 2 will also be a prime.
The options.add and options.rem parameters can be used to enforce
additional requirements, e.g., for Diffie-Hellman:
If options.add and options.rem are both set, the prime will satisfy
the condition that prime % add = rem.
If only options.add is set and options.safe is not true, the prime
will satisfy the condition that prime % add = 1.
If only options.add is set and options.safe is set to true, the prime
will instead satisfy the condition that prime % add = 3. This is
necessary because prime % add = 1 for options.add > 2 would
contradict the condition enforced by options.safe.
options.rem is ignored if options.add is not given.
Both options.add and options.rem must be encoded as big-endian
sequences if given as an ArrayBuffer, SharedArrayBuffer, TypedArray,
Buffer, or DataView.
By default, the prime is encoded as a big-endian sequence of octets in
an {ArrayBuffer}. If the bigint option is true, then a {bigint} is
provided.
crypto.generatePrimeSync(size[, options])
size {number} The size (in bits) of the prime to generate.
options {Object}
add {ArrayBuffer|SharedArrayBuffer|TypedArray|Buffer|DataView|bigint}
rem {ArrayBuffer|SharedArrayBuffer|TypedArray|Buffer|DataView|bigint}
safe {boolean} Default: false.
bigint {boolean} When true, the generated prime is returned
as a bigint.
Returns: {ArrayBuffer|bigint}
Generates a pseudorandom prime of size bits.
If options.safe is true, the prime will be a safe prime – that is, (prime
- 1) / 2 will also be a prime.
The options.add and options.rem parameters can be used to enforce
additional requirements, e.g., for Diffie-Hellman:
If options.add and options.rem are both set, the prime will satisfy
the condition that prime % add = rem.
If only options.add is set and options.safe is not true, the prime
will satisfy the condition that prime % add = 1.
If only options.add is set and options.safe is set to true, the prime
will instead satisfy the condition that prime % add = 3. This is
necessary because prime % add = 1 for options.add > 2 would
contradict the condition enforced by options.safe.
options.rem is ignored if options.add is not given.
Both options.add and options.rem must be encoded as big-endian
sequences if given as an ArrayBuffer, SharedArrayBuffer, TypedArray,
Buffer, or DataView.
By default, the prime is encoded as a big-endian sequence of octets in
an {ArrayBuffer}. If the bigint option is true, then a {bigint} is
provided.
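Example (a minimal sketch): generating a safe prime and confirming
the safe-prime property:
const { generatePrimeSync, checkPrimeSync } = require('node:crypto');
const prime = generatePrimeSync(512, { safe: true, bigint: true });
console.log(checkPrimeSync(prime)); // Prints: true
console.log(checkPrimeSync((prime - 1n) / 2n)); // Prints: true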
crypto.getCipherInfo(nameOrNid[, options])
nameOrNid: {string|number} The name or nid of the cipher to
query.
options: {Object}
keyLength: {number} A test key length.
ivLength: {number} A test IV length.
Returns: {Object}
name {string} The name of the cipher
nid {number} The nid of the cipher
blockSize {number} The block size of the cipher in bytes. This
property is omitted when mode is 'stream'.
ivLength {number} The expected or default initialization
vector length in bytes. This property is omitted if the cipher
does not use an initialization vector.
keyLength {number} The expected or default key length in
bytes.
mode {string} The cipher mode. One of 'cbc', 'ccm', 'cfb',
'ctr', 'ecb', 'gcm', 'ocb', 'ofb', 'stream', 'wrap', 'xts'.
Returns information about a given cipher.
Some ciphers accept variable length keys and initialization vectors.
By default, the crypto.getCipherInfo() method will return the default
values for these ciphers. To test if a given key length or iv length is
acceptable for a given cipher, use the keyLength and ivLength options. If
the given values are unacceptable, undefined will be returned.
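Example (a minimal sketch; the printed values depend on the OpenSSL
build):
const { getCipherInfo } = require('node:crypto');
console.log(getCipherInfo('aes-256-gcm'));
// Prints something like:
// { name: 'aes-256-gcm', nid: ..., blockSize: ..., ivLength: 12,
//   keyLength: 32, mode: 'gcm' }
// A 16-byte key is not acceptable for aes-256-gcm:
console.log(getCipherInfo('aes-256-gcm', { keyLength: 16 })); // undefined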
crypto.getCiphers()
Returns: {string[]} An array with the names of the supported
cipher algorithms.
const {
getCiphers,
} = await import('node:crypto');
console.log(getCiphers()); // ['aes-128-cbc', 'aes-128-ccm', ...]
const {
getCiphers,
} = require('node:crypto');
console.log(getCiphers()); // ['aes-128-cbc', 'aes-128-ccm', ...]
crypto.getCurves()
Returns: {string[]} An array with the names of the supported
elliptic curves.
const {
getCurves,
} = await import('node:crypto');
console.log(getCurves()); // ['Oakley-EC2N-3', 'Oakley-EC2N-4', ...]
const {
getCurves,
} = require('node:crypto');
console.log(getCurves()); // ['Oakley-EC2N-3', 'Oakley-EC2N-4', ...]
crypto.getDiffieHellman(groupName)
groupName {string}
Returns: {DiffieHellmanGroup}
Creates a predefined DiffieHellmanGroup key exchange object. The
supported groups are listed in the documentation for
DiffieHellmanGroup.
The returned object mimics the interface of objects created by
crypto.createDiffieHellman(), but will not allow changing the keys
(with diffieHellman.setPublicKey(), for example). The advantage of
using this method is that the parties do not have to generate nor
exchange a group modulus beforehand, saving both processor and
communication time.
Example (obtaining a shared secret):
const {
getDiffieHellman,
} = await import('node:crypto');
const alice = getDiffieHellman('modp14');
const bob = getDiffieHellman('modp14');
alice.generateKeys();
bob.generateKeys();
const aliceSecret = alice.computeSecret(bob.getPublicKey(), null, 'hex');
const bobSecret = bob.computeSecret(alice.getPublicKey(), null, 'hex');
/* aliceSecret and bobSecret should be the same */
console.log(aliceSecret === bobSecret);
const {
getDiffieHellman,
} = require('node:crypto');
const alice = getDiffieHellman('modp14');
const bob = getDiffieHellman('modp14');
alice.generateKeys();
bob.generateKeys();
const aliceSecret = alice.computeSecret(bob.getPublicKey(), null, 'hex');
const bobSecret = bob.computeSecret(alice.getPublicKey(), null, 'hex');
/* aliceSecret and bobSecret should be the same */
console.log(aliceSecret === bobSecret);
crypto.getFips()
Returns: {number} 1 if and only if a FIPS compliant crypto
provider is currently in use, 0 otherwise. A future semver-major
release may change the return type of this API to a {boolean}.
crypto.getHashes()
Returns: {string[]} An array of the names of the supported hash
algorithms, such as 'RSA-SHA256'. Hash algorithms are also called
“digest” algorithms.
const {
getHashes,
} = await import('node:crypto');
console.log(getHashes()); // ['DSA', 'DSA-SHA', 'DSA-SHA1', ...]
const {
getHashes,
} = require('node:crypto');
console.log(getHashes()); // ['DSA', 'DSA-SHA', 'DSA-SHA1', ...]
crypto.getRandomValues(typedArray)
typedArray {Buffer|TypedArray|DataView|ArrayBuffer}
Returns: {Buffer|TypedArray|DataView|ArrayBuffer} Returns
typedArray.
A convenient alias for crypto.webcrypto.getRandomValues(). This
implementation is not compliant with the Web Crypto spec; to write
web-compatible code, use crypto.webcrypto.getRandomValues() instead.
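A minimal sketch filling a typed array in place (the 12-byte size is
illustrative):
const { Buffer } = require('node:buffer');
const { getRandomValues } = require('node:crypto');
// Fills the given typed array in place and returns the same object.
const nonce = getRandomValues(new Uint8Array(12));
console.log(Buffer.from(nonce).toString('hex'));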
crypto.hkdf(digest, ikm, salt, info,
keylen, callback)
digest {string} The digest algorithm to use.
ikm
{string|ArrayBuffer|Buffer|TypedArray|DataView|KeyObject}
The input keying material. Must be provided but can be zero-
length.
salt {string|ArrayBuffer|Buffer|TypedArray|DataView} The salt
value. Must be provided but can be zero-length.
info {string|ArrayBuffer|Buffer|TypedArray|DataView}
Additional info value. Must be provided but can be zero-length,
and cannot be more than 1024 bytes.
keylen {number} The length of the key to generate. Must be
greater than 0. The maximum allowable value is 255 times the
number of bytes produced by the selected digest function
(e.g. sha512 generates 64-byte hashes, making the maximum
HKDF output 16320 bytes).
callback {Function}
err {Error}
derivedKey {ArrayBuffer}
HKDF is a simple key derivation function defined in RFC 5869. The
given ikm, salt and info are used with the digest to derive a key of
keylen bytes.
The supplied callback function is called with two arguments: err and
derivedKey. If an error occurs while deriving the key, err will be set;
otherwise err will be null. The successfully generated derivedKey will
be passed to the callback as an {ArrayBuffer}. An error will be
thrown if any of the input arguments specify invalid values or types.
import { Buffer } from 'node:buffer';
const {
hkdf,
} = await import('node:crypto');
hkdf('sha512', 'key', 'salt', 'info', 64, (err, derivedKey) => {
if (err) throw err;
console.log(Buffer.from(derivedKey).toString('hex')); // '24156e2...'
});
const {
hkdf,
} = require('node:crypto');
const { Buffer } = require('node:buffer');
hkdf('sha512', 'key', 'salt', 'info', 64, (err, derivedKey) => {
if (err) throw err;
console.log(Buffer.from(derivedKey).toString('hex')); // '24156e2...'
});
crypto.hkdfSync(digest, ikm, salt, info,
keylen)
digest {string} The digest algorithm to use.
ikm
{string|ArrayBuffer|Buffer|TypedArray|DataView|KeyObject}
The input keying material. Must be provided but can be zero-
length.
salt {string|ArrayBuffer|Buffer|TypedArray|DataView} The salt
value. Must be provided but can be zero-length.
info {string|ArrayBuffer|Buffer|TypedArray|DataView}
Additional info value. Must be provided but can be zero-length,
and cannot be more than 1024 bytes.
keylen {number} The length of the key to generate. Must be
greater than 0. The maximum allowable value is 255 times the
number of bytes produced by the selected digest function
(e.g. sha512 generates 64-byte hashes, making the maximum
HKDF output 16320 bytes).
Returns: {ArrayBuffer}
Provides a synchronous HKDF key derivation function as defined in
RFC 5869. The given ikm, salt and info are used with the digest to
derive a key of keylen bytes.
The successfully generated derivedKey will be returned as an
{ArrayBuffer}.
An error will be thrown if any of the input arguments specify invalid
values or types, or if the derived key cannot be generated.
import { Buffer } from 'node:buffer';
const {
hkdfSync,
} = await import('node:crypto');
const derivedKey = hkdfSync('sha512', 'key', 'salt', 'info', 64);
console.log(Buffer.from(derivedKey).toString('hex')); // '24156e2...'
const {
hkdfSync,
} = require('node:crypto');
const { Buffer } = require('node:buffer');
const derivedKey = hkdfSync('sha512', 'key', 'salt', 'info', 64);
console.log(Buffer.from(derivedKey).toString('hex')); // '24156e2...'
crypto.pbkdf2(password, salt, iterations,
keylen, digest, callback)
password {string|ArrayBuffer|Buffer|TypedArray|DataView}
salt {string|ArrayBuffer|Buffer|TypedArray|DataView}
iterations {number}
keylen {number}
digest {string}
callback {Function}
err {Error}
derivedKey {Buffer}
Provides an asynchronous Password-Based Key Derivation Function
2 (PBKDF2) implementation. A selected HMAC digest algorithm
specified by digest is applied to derive a key of the requested byte
length (keylen) from the password, salt and iterations.
The supplied callback function is called with two arguments: err and
derivedKey. If an error occurs while deriving the key, err will be set;
otherwise err will be null. By default, the successfully generated
derivedKey will be passed to the callback as a Buffer. An error will be
thrown if any of the input arguments specify invalid values or types.
The iterations argument must be a number set as high as possible.
The higher the number of iterations, the more secure the derived key
will be, but the derivation will take longer to complete.
The salt should be as unique as possible. It is recommended that a
salt is random and at least 16 bytes long. See NIST SP 800-132 for
details.
When passing strings for password or salt, please consider caveats
when using strings as inputs to cryptographic APIs.
const {
pbkdf2,
} = await import('node:crypto');
pbkdf2('secret', 'salt', 100000, 64, 'sha512', (err, derivedKey) => {
if (err) throw err;
console.log(derivedKey.toString('hex')); // '3745e48...08d59ae'
});
const {
pbkdf2,
} = require('node:crypto');
pbkdf2('secret', 'salt', 100000, 64, 'sha512', (err, derivedKey) => {
if (err) throw err;
console.log(derivedKey.toString('hex')); // '3745e48...08d59ae'
});
An array of supported digest functions can be retrieved using
crypto.getHashes().
This API uses libuv’s threadpool, which can have surprising and
negative performance implications for some applications; see the
UV_THREADPOOL_SIZE documentation for more information.
crypto.pbkdf2Sync(password, salt,
iterations, keylen, digest)
password {string|Buffer|TypedArray|DataView}
salt {string|Buffer|TypedArray|DataView}
iterations {number}
keylen {number}
digest {string}
Returns: {Buffer}
Provides a synchronous Password-Based Key Derivation Function 2
(PBKDF2) implementation. A selected HMAC digest algorithm
specified by digest is applied to derive a key of the requested byte
length (keylen) from the password, salt and iterations.
If an error occurs an Error will be thrown, otherwise the derived key
will be returned as a Buffer.
The iterations argument must be a number set as high as possible.
The higher the number of iterations, the more secure the derived key
will be, but the derivation will take longer to complete.
The salt should be as unique as possible. It is recommended that a
salt is random and at least 16 bytes long. See NIST SP 800-132 for
details.
When passing strings for password or salt, please consider caveats
when using strings as inputs to cryptographic APIs.
const {
pbkdf2Sync,
} = await import('node:crypto');
const key = pbkdf2Sync('secret', 'salt', 100000, 64, 'sha512');
console.log(key.toString('hex')); // '3745e48...08d59ae'
const {
pbkdf2Sync,
} = require('node:crypto');
const key = pbkdf2Sync('secret', 'salt', 100000, 64, 'sha512');
console.log(key.toString('hex')); // '3745e48...08d59ae'
An array of supported digest functions can be retrieved using
crypto.getHashes().
crypto.privateDecrypt(privateKey, buffer)
privateKey
{Object|string|ArrayBuffer|Buffer|TypedArray|DataView|KeyOb
ject|CryptoKey}
oaepHash {string} The hash function to use for OAEP padding
and MGF1. Default: 'sha1'
oaepLabel {string|ArrayBuffer|Buffer|TypedArray|DataView}
The label to use for OAEP padding. If not specified, no label
is used.
padding {crypto.constants} An optional padding value defined
in crypto.constants, which may be:
crypto.constants.RSA_NO_PADDING,
crypto.constants.RSA_PKCS1_PADDING, or
crypto.constants.RSA_PKCS1_OAEP_PADDING.
buffer {string|ArrayBuffer|Buffer|TypedArray|DataView}
Returns: {Buffer} A new Buffer with the decrypted content.
Decrypts buffer with privateKey. buffer was previously encrypted
using the corresponding public key, for example using
crypto.publicEncrypt().
If privateKey is not a KeyObject, this function behaves as if privateKey
had been passed to crypto.createPrivateKey(). If it is an object, the
padding property can be passed. Otherwise, this function uses
RSA_PKCS1_OAEP_PADDING.
crypto.privateEncrypt(privateKey, buffer)
privateKey
{Object|string|ArrayBuffer|Buffer|TypedArray|DataView|KeyOb
ject|CryptoKey}
key
{string|ArrayBuffer|Buffer|TypedArray|DataView|KeyObject
|CryptoKey} A PEM encoded private key.
passphrase
{string|ArrayBuffer|Buffer|TypedArray|DataView} An
optional passphrase for the private key.
padding {crypto.constants} An optional padding value defined
in crypto.constants, which may be:
crypto.constants.RSA_NO_PADDING or
crypto.constants.RSA_PKCS1_PADDING.
encoding {string} The string encoding to use when buffer, key,
or passphrase are strings.
buffer {string|ArrayBuffer|Buffer|TypedArray|DataView}
Returns: {Buffer} A new Buffer with the encrypted content.
Encrypts buffer with privateKey. The returned data can be decrypted
using the corresponding public key, for example using
crypto.publicDecrypt().
If privateKey is not a KeyObject, this function behaves as if privateKey
had been passed to crypto.createPrivateKey(). If it is an object, the
padding property can be passed. Otherwise, this function uses
RSA_PKCS1_PADDING.
crypto.publicDecrypt(key, buffer)
key
{Object|string|ArrayBuffer|Buffer|TypedArray|DataView|KeyOb
ject|CryptoKey}
passphrase
{string|ArrayBuffer|Buffer|TypedArray|DataView} An
optional passphrase for the private key.
padding {crypto.constants} An optional padding value defined
in crypto.constants, which may be:
crypto.constants.RSA_NO_PADDING or
crypto.constants.RSA_PKCS1_PADDING.
encoding {string} The string encoding to use when buffer, key,
or passphrase are strings.
buffer {string|ArrayBuffer|Buffer|TypedArray|DataView}
Returns: {Buffer} A new Buffer with the decrypted content.
Decrypts buffer with key. buffer was previously encrypted using the
corresponding private key, for example using
crypto.privateEncrypt().
If key is not a KeyObject, this function behaves as if key had been
passed to crypto.createPublicKey(). If it is an object, the padding
property can be passed. Otherwise, this function uses
RSA_PKCS1_PADDING.
Because RSA public keys can be derived from private keys, a private
key may be passed instead of a public key.
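A minimal round-trip sketch pairing crypto.privateEncrypt() with
crypto.publicDecrypt() (the key pair is generated on the fly purely for
illustration):
const { Buffer } = require('node:buffer');
const {
  generateKeyPairSync,
  privateEncrypt,
  publicDecrypt,
} = require('node:crypto');
const { publicKey, privateKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048,
});
// privateEncrypt() applies RSA_PKCS1_PADDING by default.
const encrypted = privateEncrypt(privateKey, Buffer.from('attested data'));
// Anyone holding the public key can recover the original bytes.
console.log(publicDecrypt(publicKey, encrypted).toString()); // 'attested data'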
crypto.publicEncrypt(key, buffer)
key
{Object|string|ArrayBuffer|Buffer|TypedArray|DataView|KeyOb
ject|CryptoKey}
key
{string|ArrayBuffer|Buffer|TypedArray|DataView|KeyObject
|CryptoKey} A PEM encoded public or private key,
{KeyObject}, or {CryptoKey}.
oaepHash {string} The hash function to use for OAEP padding
and MGF1. Default: 'sha1'
oaepLabel {string|ArrayBuffer|Buffer|TypedArray|DataView}
The label to use for OAEP padding. If not specified, no label
is used.
passphrase
{string|ArrayBuffer|Buffer|TypedArray|DataView} An
optional passphrase for the private key.
padding {crypto.constants} An optional padding value defined
in crypto.constants, which may be:
crypto.constants.RSA_NO_PADDING,
crypto.constants.RSA_PKCS1_PADDING, or
crypto.constants.RSA_PKCS1_OAEP_PADDING.
encoding {string} The string encoding to use when buffer, key,
oaepLabel, or passphrase are strings.
buffer {string|ArrayBuffer|Buffer|TypedArray|DataView}
Returns: {Buffer} A new Buffer with the encrypted content.
Encrypts the content of buffer with key and returns a new Buffer with
encrypted content. The returned data can be decrypted using the
corresponding private key, for example using
crypto.privateDecrypt().
If key is not a KeyObject, this function behaves as if key had been
passed to crypto.createPublicKey(). If it is an object, the padding
property can be passed. Otherwise, this function uses
RSA_PKCS1_OAEP_PADDING.
Because RSA public keys can be derived from private keys, a private
key may be passed instead of a public key.
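A minimal round-trip sketch using an ephemeral key pair (illustrative
only; real applications would typically load persistent keys):
const { Buffer } = require('node:buffer');
const {
  generateKeyPairSync,
  publicEncrypt,
  privateDecrypt,
} = require('node:crypto');
const { publicKey, privateKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048,
});
// RSA_PKCS1_OAEP_PADDING is applied by default.
const ciphertext = publicEncrypt(publicKey, Buffer.from('secret message'));
console.log(privateDecrypt(privateKey, ciphertext).toString()); // 'secret message'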
crypto.randomBytes(size[, callback])
size {number} The number of bytes to generate. The size must
not be larger than 2**31 - 1.
callback {Function}
err {Error}
buf {Buffer}
Returns: {Buffer} if the callback function is not provided.
Generates cryptographically strong pseudorandom data. The size
argument is a number indicating the number of bytes to generate.
If a callback function is provided, the bytes are generated
asynchronously and the callback function is invoked with two
arguments: err and buf. If an error occurs, err will be an Error object;
otherwise it is null. The buf argument is a Buffer containing the
generated bytes.
// Asynchronous
const {
randomBytes,
} = await import('node:crypto');
randomBytes(256, (err, buf) => {
if (err) throw err;
console.log(`${buf.length} bytes of random data: ${buf.toString('hex')}`);
});
// Asynchronous
const {
randomBytes,
} = require('node:crypto');
randomBytes(256, (err, buf) => {
if (err) throw err;
console.log(`${buf.length} bytes of random data: ${buf.toString('hex')}`);
});
If the callback function is not provided, the random bytes are
generated synchronously and returned as a Buffer. An error will be
thrown if there is a problem generating the bytes.
// Synchronous
const {
randomBytes,
} = await import('node:crypto');
const buf = randomBytes(256);
console.log(
`${buf.length} bytes of random data: ${buf.toString('hex')}`);
// Synchronous
const {
randomBytes,
} = require('node:crypto');
const buf = randomBytes(256);
console.log(
`${buf.length} bytes of random data: ${buf.toString('hex')}`);
The crypto.randomBytes() method will not complete until there is
sufficient entropy available. This should normally never take longer
than a few milliseconds. The only time when generating the random
bytes may conceivably block for a longer period of time is right after
boot, when the whole system is still low on entropy.
This API uses libuv’s threadpool, which can have surprising and
negative performance implications for some applications; see the
UV_THREADPOOL_SIZE documentation for more information.
The asynchronous version of crypto.randomBytes() is carried out in a
single threadpool request. To minimize threadpool task length
variation, partition large randomBytes requests when doing so as part
of fulfilling a client request.
crypto.randomFillSync(buffer[, offset][,
size])
buffer {ArrayBuffer|Buffer|TypedArray|DataView} Must be
supplied. The size of the provided buffer must not be larger than
2**31 - 1.
offset {number} Default: 0
size {number} Default: buffer.length - offset. The size must
not be larger than 2**31 - 1.
Returns: {ArrayBuffer|Buffer|TypedArray|DataView} The object
passed as buffer argument.
Synchronous version of crypto.randomFill().
import { Buffer } from 'node:buffer';
const { randomFillSync } = await import('node:crypto');
const buf = Buffer.alloc(10);
console.log(randomFillSync(buf).toString('hex'));
randomFillSync(buf, 5);
console.log(buf.toString('hex'));
// The above is equivalent to the following:
randomFillSync(buf, 5, 5);
console.log(buf.toString('hex'));
const { randomFillSync } = require('node:crypto');
const { Buffer } = require('node:buffer');
const buf = Buffer.alloc(10);
console.log(randomFillSync(buf).toString('hex'));
randomFillSync(buf, 5);
console.log(buf.toString('hex'));
// The above is equivalent to the following:
randomFillSync(buf, 5, 5);
console.log(buf.toString('hex'));
Any ArrayBuffer, TypedArray or DataView instance may be passed as
buffer.
import { Buffer } from 'node:buffer';
const { randomFillSync } = await import('node:crypto');
const a = new Uint32Array(10);
console.log(Buffer.from(randomFillSync(a).buffer,
a.byteOffset, a.byteLength).toString('hex'))
const b = new DataView(new ArrayBuffer(10));
console.log(Buffer.from(randomFillSync(b).buffer,
b.byteOffset, b.byteLength).toString('hex'))
const c = new ArrayBuffer(10);
console.log(Buffer.from(randomFillSync(c)).toString('hex'));
const { randomFillSync } = require('node:crypto');
const { Buffer } = require('node:buffer');
const a = new Uint32Array(10);
console.log(Buffer.from(randomFillSync(a).buffer,
a.byteOffset, a.byteLength).toString('hex'))
const b = new DataView(new ArrayBuffer(10));
console.log(Buffer.from(randomFillSync(b).buffer,
b.byteOffset, b.byteLength).toString('hex'))
const c = new ArrayBuffer(10);
console.log(Buffer.from(randomFillSync(c)).toString('hex'));
crypto.randomFill(buffer[, offset][, size],
callback)
buffer {ArrayBuffer|Buffer|TypedArray|DataView} Must be
supplied. The size of the provided buffer must not be larger than
2**31 - 1.
offset {number} Default: 0
size {number} Default: buffer.length - offset. The size must
not be larger than 2**31 - 1.
callback {Function} function(err, buf) {}.
This function is similar to crypto.randomBytes() but requires the first
argument to be a Buffer that will be filled. It also requires that a
callback is passed in.
If the callback function is not provided, an error will be thrown.
import { Buffer } from 'node:buffer';
const { randomFill } = await import('node:crypto');
const buf = Buffer.alloc(10);
randomFill(buf, (err, buf) => {
if (err) throw err;
console.log(buf.toString('hex'));
});
randomFill(buf, 5, (err, buf) => {
if (err) throw err;
console.log(buf.toString('hex'));
});
// The above is equivalent to the following:
randomFill(buf, 5, 5, (err, buf) => {
if (err) throw err;
console.log(buf.toString('hex'));
});
const { randomFill } = require('node:crypto');
const { Buffer } = require('node:buffer');
const buf = Buffer.alloc(10);
randomFill(buf, (err, buf) => {
if (err) throw err;
console.log(buf.toString('hex'));
});
randomFill(buf, 5, (err, buf) => {
if (err) throw err;
console.log(buf.toString('hex'));
});
// The above is equivalent to the following:
randomFill(buf, 5, 5, (err, buf) => {
if (err) throw err;
console.log(buf.toString('hex'));
});
Any ArrayBuffer, TypedArray, or DataView instance may be passed as
buffer.
While this includes instances of Float32Array and Float64Array, this
function should not be used to generate random floating-point
numbers. The result may contain +Infinity, -Infinity, and NaN, and
even if the array contains finite numbers only, they are not drawn
from a uniform random distribution and have no meaningful lower
or upper bounds.
import { Buffer } from 'node:buffer';
const { randomFill } = await import('node:crypto');
const a = new Uint32Array(10);
randomFill(a, (err, buf) => {
if (err) throw err;
console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength
.toString('hex'));
});
const b = new DataView(new ArrayBuffer(10));
randomFill(b, (err, buf) => {
if (err) throw err;
console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength
.toString('hex'));
});
const c = new ArrayBuffer(10);
randomFill(c, (err, buf) => {
if (err) throw err;
console.log(Buffer.from(buf).toString('hex'));
});
const { randomFill } = require('node:crypto');
const { Buffer } = require('node:buffer');
const a = new Uint32Array(10);
randomFill(a, (err, buf) => {
if (err) throw err;
console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength
.toString('hex'));
});
const b = new DataView(new ArrayBuffer(10));
randomFill(b, (err, buf) => {
if (err) throw err;
console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength
.toString('hex'));
});
const c = new ArrayBuffer(10);
randomFill(c, (err, buf) => {
if (err) throw err;
console.log(Buffer.from(buf).toString('hex'));
});
This API uses libuv’s threadpool, which can have surprising and
negative performance implications for some applications; see the
UV_THREADPOOL_SIZE documentation for more information.
The asynchronous version of crypto.randomFill() is carried out in a
single threadpool request. To minimize threadpool task length
variation, partition large randomFill requests when doing so as part of
fulfilling a client request.
crypto.randomInt([min, ]max[, callback])
min {integer} Start of random range (inclusive). Default: 0.
max {integer} End of random range (exclusive).
callback {Function} function(err, n) {}.
Return a random integer n such that min <= n < max. This
implementation avoids modulo bias.
The range (max - min) must be less than 2**48. min and max must be safe
integers.
If the callback function is not provided, the random integer is
generated synchronously.
// Asynchronous
const {
randomInt,
} = await import('node:crypto');
randomInt(3, (err, n) => {
if (err) throw err;
console.log(`Random number chosen from (0, 1, 2): ${n}`);
});
// Asynchronous
const {
randomInt,
} = require('node:crypto');
randomInt(3, (err, n) => {
if (err) throw err;
console.log(`Random number chosen from (0, 1, 2): ${n}`);
});
// Synchronous
const {
randomInt,
} = await import('node:crypto');
const n = randomInt(3);
console.log(`Random number chosen from (0, 1, 2): ${n}`);
// Synchronous
const {
randomInt,
} = require('node:crypto');
const n = randomInt(3);
console.log(`Random number chosen from (0, 1, 2): ${n}`);
// With `min` argument
const {
randomInt,
} = await import('node:crypto');
const n = randomInt(1, 7);
console.log(`The dice rolled: ${n}`);
// With `min` argument
const {
randomInt,
} = require('node:crypto');
const n = randomInt(1, 7);
console.log(`The dice rolled: ${n}`);
crypto.randomUUID([options])
options {Object}
disableEntropyCache {boolean} By default, to improve
performance, Node.js generates and caches enough random
data to generate up to 128 random UUIDs. To generate a
UUID without using the cache, set disableEntropyCache to
true. Default: false.
Returns: {string}
Generates a random RFC 4122 version 4 UUID. The UUID is
generated using a cryptographic pseudorandom number generator.
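For example (the printed UUID will differ on every call):
const { randomUUID } = require('node:crypto');
console.log(randomUUID());
// A string such as '36b8f84d-df4e-4d49-b662-bcde71a8764f'
// Bypass the internal entropy cache for this call.
console.log(randomUUID({ disableEntropyCache: true }));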
crypto.scrypt(password, salt, keylen[,
options], callback)
password {string|ArrayBuffer|Buffer|TypedArray|DataView}
salt {string|ArrayBuffer|Buffer|TypedArray|DataView}
keylen {number}
options {Object}
cost {number} CPU/memory cost parameter. Must be a
power of two greater than one. Default: 16384.
blockSize {number} Block size parameter. Default: 8.
parallelization {number} Parallelization parameter.
Default: 1.
N {number} Alias for cost. Only one of the two may be specified.
r {number} Alias for blockSize. Only one of the two may be
specified.
p {number} Alias for parallelization. Only one of the two may
be specified.
maxmem {number} Memory upper bound. It is an error when
(approximately) 128 * N * r > maxmem. Default: 32 * 1024 *
1024.
callback {Function}
err {Error}
derivedKey {Buffer}
Provides an asynchronous scrypt implementation. Scrypt is a
password-based key derivation function that is designed to be
expensive computationally and memory-wise in order to make brute-
force attacks unrewarding.
The salt should be as unique as possible. It is recommended that a
salt is random and at least 16 bytes long. See NIST SP 800-132 for
details.
When passing strings for password or salt, please consider caveats
when using strings as inputs to cryptographic APIs.
The callback function is called with two arguments: err and
derivedKey. err is an exception object when key derivation fails,
otherwise err is null. derivedKey is passed to the callback as a Buffer.
An exception is thrown when any of the input arguments specify
invalid values or types.
const {
scrypt,
} = await import('node:crypto');
// Using the factory defaults.
scrypt('password', 'salt', 64, (err, derivedKey) => {
if (err) throw err;
console.log(derivedKey.toString('hex')); // '3745e48...08d59ae'
});
// Using a custom N parameter. Must be a power of two.
scrypt('password', 'salt', 64, { N: 1024 }, (err, derivedKey) => {
if (err) throw err;
console.log(derivedKey.toString('hex')); // '3745e48...aa39b34'
});
const {
scrypt,
} = require('node:crypto');
// Using the factory defaults.
scrypt('password', 'salt', 64, (err, derivedKey) => {
if (err) throw err;
console.log(derivedKey.toString('hex')); // '3745e48...08d59ae'
});
// Using a custom N parameter. Must be a power of two.
scrypt('password', 'salt', 64, { N: 1024 }, (err, derivedKey) => {
if (err) throw err;
console.log(derivedKey.toString('hex')); // '3745e48...aa39b34'
});
crypto.scryptSync(password, salt, keylen[,
options])
password {string|Buffer|TypedArray|DataView}
salt {string|Buffer|TypedArray|DataView}
keylen {number}
options {Object}
cost {number} CPU/memory cost parameter. Must be a
power of two greater than one. Default: 16384.
blockSize {number} Block size parameter. Default: 8.
parallelization {number} Parallelization parameter.
Default: 1.
N {number} Alias for cost. Only one of the two may be specified.
r {number} Alias for blockSize. Only one of the two may be
specified.
p {number} Alias for parallelization. Only one of the two may
be specified.
maxmem {number} Memory upper bound. It is an error when
(approximately) 128 * N * r > maxmem. Default: 32 * 1024 *
1024.
Returns: {Buffer}
Provides a synchronous scrypt implementation. Scrypt is a
password-based key derivation function that is designed to be
expensive computationally and memory-wise in order to make brute-
force attacks unrewarding.
The salt should be as unique as possible. It is recommended that a
salt is random and at least 16 bytes long. See NIST SP 800-132 for
details.
When passing strings for password or salt, please consider caveats
when using strings as inputs to cryptographic APIs.
An exception is thrown when key derivation fails, otherwise the
derived key is returned as a Buffer.
An exception is thrown when any of the input arguments specify
invalid values or types.
const {
scryptSync,
} = await import('node:crypto');
// Using the factory defaults.
const key1 = scryptSync('password', 'salt', 64);
console.log(key1.toString('hex')); // '3745e48...08d59ae'
// Using a custom N parameter. Must be a power of two.
const key2 = scryptSync('password', 'salt', 64, { N: 1024 });
console.log(key2.toString('hex')); // '3745e48...aa39b34'
const {
scryptSync,
} = require('node:crypto');
// Using the factory defaults.
const key1 = scryptSync('password', 'salt', 64);
console.log(key1.toString('hex')); // '3745e48...08d59ae'
// Using a custom N parameter. Must be a power of two.
const key2 = scryptSync('password', 'salt', 64, { N: 1024 });
console.log(key2.toString('hex')); // '3745e48...aa39b34'
crypto.secureHeapUsed()
Returns: {Object}
total {number} The total allocated secure heap size as
specified using the --secure-heap=n command-line flag.
min {number} The minimum allocation from the secure heap
as specified using the --secure-heap-min command-line flag.
used {number} The total number of bytes currently allocated
from the secure heap.
utilization {number} The calculated ratio of used to total
allocated bytes.
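A minimal sketch reading these statistics (all values are zero unless
the process was started with the --secure-heap flag; the flag value in
the comment is illustrative):
// Run with: node --secure-heap=65536 app.js
const { secureHeapUsed } = require('node:crypto');
const { total, min, used, utilization } = secureHeapUsed();
console.log({ total, min, used, utilization });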
crypto.setEngine(engine[, flags])
engine {string}
flags {crypto.constants} Default:
crypto.constants.ENGINE_METHOD_ALL
Load and set the engine for some or all OpenSSL functions (selected
by flags).
engine could be either an id or a path to the engine’s shared library.
The optional flags argument uses ENGINE_METHOD_ALL by default. The
It is a bit field taking one of, or a mix of, the following flags
(defined in crypto.constants):
crypto.constants.ENGINE_METHOD_RSA
crypto.constants.ENGINE_METHOD_DSA
crypto.constants.ENGINE_METHOD_DH
crypto.constants.ENGINE_METHOD_RAND
crypto.constants.ENGINE_METHOD_EC
crypto.constants.ENGINE_METHOD_CIPHERS
crypto.constants.ENGINE_METHOD_DIGESTS
crypto.constants.ENGINE_METHOD_PKEY_METHS
crypto.constants.ENGINE_METHOD_PKEY_ASN1_METHS
crypto.constants.ENGINE_METHOD_ALL
crypto.constants.ENGINE_METHOD_NONE
crypto.setFips(bool)
bool {boolean} true to enable FIPS mode.
Enables the FIPS compliant crypto provider in a FIPS-enabled
Node.js build. Throws an error if FIPS mode is not available.
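A minimal sketch of toggling and checking FIPS mode (assumes a
FIPS-capable build; otherwise setFips() throws):
const crypto = require('node:crypto');
try {
  crypto.setFips(true);
  console.log(crypto.getFips()); // 1
} catch (err) {
  // Thrown when no FIPS provider is available to this build.
  console.error('FIPS mode unavailable:', err.message);
}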
crypto.sign(algorithm, data, key[,
callback])
algorithm {string | null | undefined}
data {ArrayBuffer|Buffer|TypedArray|DataView}
key
{Object|string|ArrayBuffer|Buffer|TypedArray|DataView|KeyOb
ject|CryptoKey}
callback {Function}
err {Error}
signature {Buffer}
Returns: {Buffer} if the callback function is not provided.
Calculates and returns the signature for data using the given private
key and algorithm. If algorithm is null or undefined, then the
algorithm is dependent upon the key type (especially Ed25519 and
Ed448).
If key is not a KeyObject, this function behaves as if key had been
passed to crypto.createPrivateKey(). If it is an object, the following
additional properties can be passed:
dsaEncoding {string} For DSA and ECDSA, this option specifies
the format of the generated signature. It can be one of the
following:
'der' (default): DER-encoded ASN.1 signature structure
encoding (r, s).
'ieee-p1363': Signature format r || s as proposed in IEEE-
P1363.
padding {integer} Optional padding value for RSA, one of the
following:
crypto.constants.RSA_PKCS1_PADDING (default)
crypto.constants.RSA_PKCS1_PSS_PADDING
RSA_PKCS1_PSS_PADDING will use MGF1 with the same hash function
used to sign the message as specified in section 3.1 of RFC 4055.
saltLength {integer} Salt length for when padding is
RSA_PKCS1_PSS_PADDING. The special value
crypto.constants.RSA_PSS_SALTLEN_DIGEST sets the salt length to the
digest size, crypto.constants.RSA_PSS_SALTLEN_MAX_SIGN (default)
sets it to the maximum permissible value.
If the callback function is provided this function uses libuv’s
threadpool.
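A minimal synchronous sketch using an Ed25519 key, for which algorithm
must be null or undefined (the key pair is generated on the fly purely
for illustration):
const { Buffer } = require('node:buffer');
const { generateKeyPairSync, sign } = require('node:crypto');
const { privateKey } = generateKeyPairSync('ed25519');
// For Ed25519, the digest is fixed by the key type, so algorithm is null.
const signature = sign(null, Buffer.from('some data'), privateKey);
console.log(signature.toString('base64'));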
crypto.subtle
Type: {SubtleCrypto}
A convenient alias for crypto.webcrypto.subtle.
crypto.timingSafeEqual(a, b)
a {ArrayBuffer|Buffer|TypedArray|DataView}
b {ArrayBuffer|Buffer|TypedArray|DataView}
Returns: {boolean}
This function compares the underlying bytes that represent the given
ArrayBuffer, TypedArray, or DataView instances using a constant-time
algorithm.
This function does not leak timing information that would allow an
attacker to guess one of the values. This is suitable for comparing
HMAC digests or secret values like authentication cookies or
capability URLs.
a and b must both be Buffers, TypedArrays, or DataViews, and they must
have the same byte length. An error is thrown if a and b have
different byte lengths.
If at least one of a and b is a TypedArray with more than one byte per
entry, such as Uint16Array, the result will be computed using the
platform byte order.
When both of the inputs are Float32Arrays or Float64Arrays,
this function might return unexpected results due to IEEE
754 encoding of floating-point numbers. In particular,
neither x === y nor Object.is(x, y) implies that the byte
representations of two floating-point numbers x and y are
equal.
Use of crypto.timingSafeEqual does not guarantee that the
surrounding code is timing-safe. Care should be taken to ensure that
the surrounding code does not introduce timing vulnerabilities.
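A minimal sketch comparing two HMAC digests (the key and message are
illustrative):
const { createHmac, timingSafeEqual } = require('node:crypto');
const hmac = (message) =>
  createHmac('sha256', 'a-secret-key').update(message).digest();
const expected = hmac('payload');
const received = hmac('payload');
// Both digests have the same byte length, as timingSafeEqual() requires.
console.log(timingSafeEqual(expected, received)); // true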
crypto.verify(algorithm, data, key,
signature[, callback])
algorithm {string|null|undefined}
data {ArrayBuffer|Buffer|TypedArray|DataView}
key
{Object|string|ArrayBuffer|Buffer|TypedArray|DataView|KeyOb
ject|CryptoKey}
signature {ArrayBuffer|Buffer|TypedArray|DataView}
callback {Function}
err {Error}
result {boolean}
Returns: {boolean} true or false depending on the validity of the
signature for the data and public key if the callback function is
not provided.
Verifies the given signature for data using the given key and
algorithm. If algorithm is null or undefined, then the algorithm is
dependent upon the key type (especially Ed25519 and Ed448).
If key is not a KeyObject, this function behaves as if key had been
passed to crypto.createPublicKey(). If it is an object, the following
additional properties can be passed:
dsaEncoding {string} For DSA and ECDSA, this option specifies
the format of the signature. It can be one of the following:
'der' (default): DER-encoded ASN.1 signature structure
encoding (r, s).
'ieee-p1363': Signature format r || s as proposed in IEEE-
P1363.
padding {integer} Optional padding value for RSA, one of the
following:
crypto.constants.RSA_PKCS1_PADDING (default)
crypto.constants.RSA_PKCS1_PSS_PADDING
RSA_PKCS1_PSS_PADDING will use MGF1 with the same hash function
used to sign the message as specified in section 3.1 of RFC 4055.
saltLength {integer} Salt length for when padding is
RSA_PKCS1_PSS_PADDING. The special value
crypto.constants.RSA_PSS_SALTLEN_DIGEST sets the salt length to the
digest size, crypto.constants.RSA_PSS_SALTLEN_MAX_SIGN (default)
sets it to the maximum permissible value.
The signature argument is the previously calculated signature for the
data.
Because public keys can be derived from private keys, a private key
or a public key may be passed for key.
If the callback function is provided this function uses libuv’s
threadpool.
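A minimal sketch pairing crypto.sign() with crypto.verify() using an
ephemeral Ed25519 key (generated on the fly purely for illustration):
const { Buffer } = require('node:buffer');
const { generateKeyPairSync, sign, verify } = require('node:crypto');
const { publicKey, privateKey } = generateKeyPairSync('ed25519');
const data = Buffer.from('some data');
const signature = sign(null, data, privateKey);
// As with crypto.sign(), the algorithm is inferred from the key type.
console.log(verify(null, data, publicKey, signature)); // true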
crypto.webcrypto
Type: {Crypto} An implementation of the Web Crypto API standard.
See the Web Crypto API documentation for details.
Notes
Using strings as inputs to cryptographic
APIs
For historical reasons, many cryptographic APIs provided by Node.js
accept strings as inputs where the underlying cryptographic
algorithm works on byte sequences. These instances include
plaintexts, ciphertexts, symmetric keys, initialization vectors,
passphrases, salts, authentication tags, and additional authenticated
data.
When passing strings to cryptographic APIs, consider the following
factors.
Not all byte sequences are valid UTF-8 strings. Therefore, when a
byte sequence of length n is derived from a string, its entropy is
generally lower than the entropy of a random or pseudorandom n
byte sequence. For example, no UTF-8 string will result in the
byte sequence c0 af. Secret keys should almost exclusively be
random or pseudorandom byte sequences.
Similarly, when converting random or pseudorandom byte
sequences to UTF-8 strings, subsequences that do not represent
valid code points may be replaced by the Unicode replacement
character (U+FFFD). The byte representation of the resulting
Unicode string may, therefore, not be equal to the byte sequence
that the string was created from.
const original = [0xc0, 0xaf];
const bytesAsString = Buffer.from(original).toString('utf8');
const stringAsBytes = Buffer.from(bytesAsString, 'utf8');
console.log(stringAsBytes);
// Prints '<Buffer ef bf bd ef bf bd>'.
The outputs of ciphers, hash functions, signature algorithms, and
key derivation functions are pseudorandom byte sequences and
should not be used as Unicode strings.
When strings are obtained from user input, some Unicode
characters can be represented in multiple equivalent ways that
result in different byte sequences. For example, when passing a
user passphrase to a key derivation function, such as PBKDF2 or
scrypt, the result of the key derivation function depends on
whether the string uses composed or decomposed characters.
Node.js does not normalize character representations.
Developers should consider using String.prototype.normalize()
on user inputs before passing them to cryptographic APIs.
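A minimal sketch of the effect, using scrypt and NFC normalization (the
passphrase spellings are illustrative):
const { scryptSync } = require('node:crypto');
// 'é' written as one composed code point vs. 'e' + combining accent.
const composed = 'caf\u00e9';
const decomposed = 'cafe\u0301';
console.log(composed === decomposed); // false
// Normalizing both spellings to NFC makes the derived keys match.
const a = scryptSync(composed.normalize('NFC'), 'salt', 32);
const b = scryptSync(decomposed.normalize('NFC'), 'salt', 32);
console.log(a.equals(b)); // true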
Legacy streams API (prior to Node.js
0.10)
The Crypto module was added to Node.js before there was the
concept of a unified Stream API, and before there were Buffer objects
for handling binary data. As such, many crypto classes have methods
not typically found on other Node.js classes that implement the
streams API (e.g. update(), final(), or digest()). Also, many methods
accepted and returned 'latin1' encoded strings by default rather
than Buffers. This default was changed after Node.js v0.8 to use
Buffer objects by default instead.
Support for weak or compromised
algorithms
The node:crypto module still supports some algorithms which are
already compromised and are not recommended for use. The API
also allows the use of ciphers and hashes with a small key size that
are too weak for safe use.
Users should take full responsibility for selecting the crypto
algorithm and key size according to their security requirements.
Based on the recommendations of NIST SP 800-131A:
MD5 and SHA-1 are no longer acceptable where collision
resistance is required, such as digital signatures.
The key used with RSA, DSA, and DH algorithms is
recommended to have at least 2048 bits and that of the curve of
ECDSA and ECDH at least 224 bits, to be safe to use for several
years.
The DH groups of modp1, modp2 and modp5 have a key size smaller
than 2048 bits and are not recommended.
See the reference for other recommendations and details.
Some algorithms that have known weaknesses and are of little
relevance in practice are only available through the legacy provider,
which is not enabled by default.
CCM mode
CCM is one of the supported AEAD algorithms. Applications which
use this mode must adhere to certain restrictions when using the
cipher API:
The authentication tag length must be specified during cipher
creation by setting the authTagLength option and must be one of 4,
6, 8, 10, 12, 14 or 16 bytes.
The length of the initialization vector (nonce) N must be between
7 and 13 bytes (7 ≤ N ≤ 13).
The length of the plaintext is limited to 2 ** (8 * (15 - N)) bytes.
When decrypting, the authentication tag must be set via
setAuthTag() before calling update(). Otherwise, decryption will
fail and final() will throw an error in compliance with section
2.6 of RFC 3610.
Using stream methods such as write(data), end(data) or pipe() in
CCM mode might fail as CCM cannot handle more than one
chunk of data per instance.
When passing additional authenticated data (AAD), the length of
the actual message in bytes must be passed to setAAD() via the
plaintextLength option. Many crypto libraries include the
authentication tag in the ciphertext, which means that they
produce ciphertexts of the length plaintextLength +
authTagLength. Node.js does not include the authentication tag, so
the ciphertext length is always plaintextLength. This is not
necessary if no AAD is used.
As CCM processes the whole message at once, update() must be
called exactly once.
Even though calling update() is sufficient to encrypt/decrypt the
message, applications must call final() to compute or verify the
authentication tag.
import { Buffer } from 'node:buffer';
const {
createCipheriv,
createDecipheriv,
randomBytes,
} = await import('node:crypto');
const key = 'keykeykeykeykeykeykeykey';
const nonce = randomBytes(12);
const aad = Buffer.from('0123456789', 'hex');
const cipher = createCipheriv('aes-192-ccm', key, nonce, {
authTagLength: 16,
});
const plaintext = 'Hello world';
cipher.setAAD(aad, {
plaintextLength: Buffer.byteLength(plaintext),
});
const ciphertext = cipher.update(plaintext, 'utf8');
cipher.final();
const tag = cipher.getAuthTag();
// Now transmit { ciphertext, nonce, tag }.
const decipher = createDecipheriv('aes-192-ccm', key, nonce, {
authTagLength: 16,
});
decipher.setAuthTag(tag);
decipher.setAAD(aad, {
plaintextLength: ciphertext.length,
});
const receivedPlaintext = decipher.update(ciphertext, null, 'utf8');
try {
decipher.final();
} catch (err) {
throw new Error('Authentication failed!', { cause: err });
}
console.log(receivedPlaintext);
const { Buffer } = require('node:buffer');
const {
createCipheriv,
createDecipheriv,
randomBytes,
} = require('node:crypto');
const key = 'keykeykeykeykeykeykeykey';
const nonce = randomBytes(12);
const aad = Buffer.from('0123456789', 'hex');
const cipher = createCipheriv('aes-192-ccm', key, nonce, {
authTagLength: 16,
});
const plaintext = 'Hello world';
cipher.setAAD(aad, {
plaintextLength: Buffer.byteLength(plaintext),
});
const ciphertext = cipher.update(plaintext, 'utf8');
cipher.final();
const tag = cipher.getAuthTag();
// Now transmit { ciphertext, nonce, tag }.
const decipher = createDecipheriv('aes-192-ccm', key, nonce, {
authTagLength: 16,
});
decipher.setAuthTag(tag);
decipher.setAAD(aad, {
plaintextLength: ciphertext.length,
});
const receivedPlaintext = decipher.update(ciphertext, null, 'utf8');
try {
decipher.final();
} catch (err) {
throw new Error('Authentication failed!', { cause: err });
}
console.log(receivedPlaintext);
FIPS mode
When using OpenSSL 3, Node.js supports FIPS 140-2 when used
with an appropriate OpenSSL 3 provider, such as the FIPS provider
from OpenSSL 3 which can be installed by following the instructions
in OpenSSL’s FIPS README file.
For FIPS support in Node.js you will need:
A correctly installed OpenSSL 3 FIPS provider.
An OpenSSL 3 FIPS module configuration file.
An OpenSSL 3 configuration file that references the FIPS module
configuration file.
Node.js will need to be configured with an OpenSSL configuration
file that points to the FIPS provider. An example configuration file
looks like this:
nodejs_conf = nodejs_init
.include /<absolute path>/fipsmodule.cnf
[nodejs_init]
providers = provider_sect
[provider_sect]
default = default_sect
# The fips section name should match the section name inside the
# included fipsmodule.cnf.
fips = fips_sect
[default_sect]
activate = 1
where fipsmodule.cnf is the FIPS module configuration file generated
from the FIPS provider installation step:
openssl fipsinstall
Set the OPENSSL_CONF environment variable to point to your
configuration file and OPENSSL_MODULES to the location of the FIPS
provider dynamic library. e.g.
export OPENSSL_CONF=/<path to configuration file>/nodejs.cnf
export OPENSSL_MODULES=/<path to openssl lib>/ossl-modules
FIPS mode can then be enabled in Node.js either by:
Starting Node.js with --enable-fips or --force-fips command
line flags.
Programmatically calling crypto.setFips(true).
Optionally FIPS mode can be enabled in Node.js via the OpenSSL
configuration file. e.g.
nodejs_conf = nodejs_init
.include /<absolute path>/fipsmodule.cnf
[nodejs_init]
providers = provider_sect
alg_section = algorithm_sect
[provider_sect]
default = default_sect
# The fips section name should match the section name inside the
# included fipsmodule.cnf.
fips = fips_sect
[default_sect]
activate = 1
[algorithm_sect]
default_properties = fips=yes
Crypto constants
The following constants exported by crypto.constants apply to
various uses of the node:crypto, node:tls, and node:https modules and
are generally specific to OpenSSL.
OpenSSL options
See the list of SSL OP Flags for details.
Constant Description
SSL_OP_ALL Applies multiple bug workarounds within OpenSSL. See
https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_options.html
for detail.
SSL_OP_ALLOW_NO_DHE_KEX Instructs OpenSSL to allow a non-[EC]DHE-based key exchange mode for
TLS v1.3.
SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION Allows legacy insecure renegotiation between OpenSSL and unpatched
clients or servers. See
https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_options.html.
SSL_OP_CIPHER_SERVER_PREFERENCE Attempts to use the server's preferences instead of the client's when
selecting a cipher. Behavior depends on protocol version. See
https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_options.html.
SSL_OP_CISCO_ANYCONNECT Instructs OpenSSL to use Cisco's version of DTLS_BAD_VER.
SSL_OP_COOKIE_EXCHANGE Instructs OpenSSL to turn on cookie exchange.
SSL_OP_CRYPTOPRO_TLSEXT_BUG Instructs OpenSSL to add a server-hello extension from an early version of
the cryptopro draft.
SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS Instructs OpenSSL to disable a SSL 3.0/TLS 1.0 vulnerability workaround
added in OpenSSL 0.9.6d.
SSL_OP_LEGACY_SERVER_CONNECT Allows initial connection to servers that do not support RI.
SSL_OP_NO_COMPRESSION Instructs OpenSSL to disable support for SSL/TLS compression.
SSL_OP_NO_ENCRYPT_THEN_MAC Instructs OpenSSL to disable encrypt-then-MAC.
SSL_OP_NO_QUERY_MTU
SSL_OP_NO_RENEGOTIATION Instructs OpenSSL to disable renegotiation.
SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION Instructs OpenSSL to always start a new session when performing
renegotiation.
SSL_OP_NO_SSLv2 Instructs OpenSSL to turn off SSL v2.
SSL_OP_NO_SSLv3 Instructs OpenSSL to turn off SSL v3.
SSL_OP_NO_TICKET Instructs OpenSSL to disable use of RFC4507bis tickets.
SSL_OP_NO_TLSv1 Instructs OpenSSL to turn off TLS v1.
SSL_OP_NO_TLSv1_1 Instructs OpenSSL to turn off TLS v1.1.
SSL_OP_NO_TLSv1_2 Instructs OpenSSL to turn off TLS v1.2.
SSL_OP_NO_TLSv1_3 Instructs OpenSSL to turn off TLS v1.3.
SSL_OP_PRIORITIZE_CHACHA Instructs OpenSSL server to prioritize ChaCha20-Poly1305 when the client
does. This option has no effect if SSL_OP_CIPHER_SERVER_PREFERENCE is not
enabled.
SSL_OP_TLS_ROLLBACK_BUG Instructs OpenSSL to disable version rollback attack detection.
OpenSSL engine constants
Constant Description
ENGINE_METHOD_RSA Limit engine usage to RSA
ENGINE_METHOD_DSA Limit engine usage to DSA
ENGINE_METHOD_DH Limit engine usage to DH
ENGINE_METHOD_RAND Limit engine usage to RAND
ENGINE_METHOD_EC Limit engine usage to EC
ENGINE_METHOD_CIPHERS Limit engine usage to CIPHERS
ENGINE_METHOD_DIGESTS Limit engine usage to DIGESTS
ENGINE_METHOD_PKEY_METHS Limit engine usage to
PKEY_METHS
ENGINE_METHOD_PKEY_ASN1_METHS Limit engine usage to
PKEY_ASN1_METHS
ENGINE_METHOD_ALL
ENGINE_METHOD_NONE
Other OpenSSL constants
Constant Description
DH_CHECK_P_NOT_SAFE_PRIME
DH_CHECK_P_NOT_PRIME
DH_UNABLE_TO_CHECK_GENERATOR
DH_NOT_SUITABLE_GENERATOR
RSA_PKCS1_PADDING
RSA_SSLV23_PADDING
RSA_NO_PADDING
RSA_PKCS1_OAEP_PADDING
RSA_X931_PADDING
RSA_PKCS1_PSS_PADDING
RSA_PSS_SALTLEN_DIGEST Sets the salt length for
RSA_PKCS1_PSS_PADDING to the digest
size when signing or verifying.
RSA_PSS_SALTLEN_MAX_SIGN Sets the salt length for
RSA_PKCS1_PSS_PADDING to the
maximum permissible value when
signing data.
RSA_PSS_SALTLEN_AUTO Causes the salt length for
RSA_PKCS1_PSS_PADDING to be
determined automatically when
verifying a signature.
POINT_CONVERSION_COMPRESSED
POINT_CONVERSION_UNCOMPRESSED
POINT_CONVERSION_HYBRID
Node.js crypto constants
Constant Description
defaultCoreCipherList Specifies the built-in default cipher list
used by Node.js.
defaultCipherList Specifies the active default cipher list used
by the current Node.js process.
Debugger
Stability: 2 - Stable
Node.js includes a command-line debugging utility. The Node.js
debugger client is not a full-featured debugger, but simple stepping
and inspection are possible.
To use it, start Node.js with the inspect argument followed by the
path to the script to debug.
$ node inspect myscript.js
< Debugger listening on ws://127.0.0.1:9229/621111f9-ffcb-4e82-
b718-48a145fa5db8
< For help, see: https://nodejs.org/en/docs/inspector
<
connecting to 127.0.0.1:9229 ... ok
< Debugger attached.
<
ok
Break on start in myscript.js:2
1 // myscript.js
> 2 global.x = 5;
3 setTimeout(() => {
4 debugger;
debug>
The debugger automatically breaks on the first executable line. To
instead run until the first breakpoint (specified by a debugger
statement), set the NODE_INSPECT_RESUME_ON_START environment
variable to 1.
$ cat myscript.js
// myscript.js
global.x = 5;
setTimeout(() => {
debugger;
console.log('world');
}, 1000);
console.log('hello');
$ NODE_INSPECT_RESUME_ON_START=1 node inspect myscript.js
< Debugger listening on ws://127.0.0.1:9229/f1ed133e-7876-495b-
83ae-c32c6fc319c2
< For help, see: https://nodejs.org/en/docs/inspector
<
connecting to 127.0.0.1:9229 ... ok
< Debugger attached.
<
< hello
<
break in myscript.js:4
2 global.x = 5;
3 setTimeout(() => {
> 4 debugger;
5 console.log('world');
6 }, 1000);
debug> next
break in myscript.js:5
3 setTimeout(() => {
4 debugger;
> 5 console.log('world');
6 }, 1000);
7 console.log('hello');
debug> repl
Press Ctrl+C to leave debug repl
> x
5
> 2 + 2
4
debug> next
< world
<
break in myscript.js:6
4 debugger;
5 console.log('world');
> 6 }, 1000);
7 console.log('hello');
8
debug> .exit
$
The repl command allows code to be evaluated remotely. The next
command steps to the next line. Type help to see what other
commands are available.
Pressing enter without typing a command will repeat the previous
debugger command.
Watchers
It is possible to watch expression and variable values while
debugging. On every breakpoint, each expression from the watchers
list will be evaluated in the current context and displayed
immediately before the breakpoint’s source code listing.
To begin watching an expression, type watch('my_expression'). The
command watchers will print the active watchers. To remove a
watcher, type unwatch('my_expression').
Command reference
Stepping
cont, c: Continue execution
next, n: Step next
step, s: Step in
out, o: Step out
pause: Pause running code (like pause button in Developer Tools)
Breakpoints
setBreakpoint(), sb(): Set breakpoint on current line
setBreakpoint(line), sb(line): Set breakpoint on specific line
setBreakpoint('fn()'), sb(...): Set breakpoint on a first
statement in function’s body
setBreakpoint('script.js', 1), sb(...): Set breakpoint on first
line of script.js
setBreakpoint('script.js', 1, 'num < 4'), sb(...): Set
conditional breakpoint on first line of script.js that only breaks
when num < 4 evaluates to true
clearBreakpoint('script.js', 1), cb(...): Clear breakpoint in
script.js on line 1
It is also possible to set a breakpoint in a file (module) that is not
loaded yet:
$ node inspect main.js
< Debugger listening on ws://127.0.0.1:9229/48a5b28a-550c-471b-
b5e1-d13dd7165df9
< For help, see: https://nodejs.org/en/docs/inspector
<
connecting to 127.0.0.1:9229 ... ok
< Debugger attached.
<
Break on start in main.js:1
> 1 const mod = require('./mod.js');
2 mod.hello();
3 mod.hello();
debug> setBreakpoint('mod.js', 22)
Warning: script 'mod.js' was not loaded yet.
debug> c
break in mod.js:22
20 // USE OR OTHER DEALINGS IN THE SOFTWARE.
21
>22 exports.hello = function() {
23 return 'hello from module';
24 };
debug>
It is also possible to set a conditional breakpoint that only breaks
when a given expression evaluates to true:
$ node inspect main.js
< Debugger listening on ws://127.0.0.1:9229/ce24daa8-3816-44d4-
b8ab-8273c8a66d35
< For help, see: https://nodejs.org/en/docs/inspector
<
connecting to 127.0.0.1:9229 ... ok
< Debugger attached.
Break on start in main.js:7
5 }
6
> 7 addOne(10);
8 addOne(-1);
9
debug> setBreakpoint('main.js', 4, 'num < 0')
1 'use strict';
2
3 function addOne(num) {
> 4 return num + 1;
5 }
6
7 addOne(10);
8 addOne(-1);
9
debug> cont
break in main.js:4
2
3 function addOne(num) {
> 4 return num + 1;
5 }
6
debug> exec('num')
-1
debug>
Information
backtrace, bt: Print backtrace of current execution frame
list(5): List scripts source code with 5 line context (5 lines
before and after)
watch(expr): Add expression to watch list
unwatch(expr): Remove expression from watch list
unwatch(index): Remove expression at specific index from watch
list
watchers: List all watchers and their values (automatically listed
on each breakpoint)
repl: Open debugger’s repl for evaluation in debugging script’s
context
exec expr, p expr: Execute an expression in debugging script’s
context and print its value
profile: Start CPU profiling session
profileEnd: Stop current CPU profiling session
profiles: List all completed CPU profiling sessions
profiles[n].save(filepath = 'node.cpuprofile'): Save CPU
profiling session to disk as JSON
takeHeapSnapshot(filepath = 'node.heapsnapshot'): Take a heap
snapshot and save to disk as JSON
Execution control
run: Run script (automatically runs on debugger’s start)
restart: Restart script
kill: Kill script
Various
scripts: List all loaded scripts
version: Display V8’s version
Advanced usage
V8 inspector integration for Node.js
V8 Inspector integration allows attaching Chrome DevTools to
Node.js instances for debugging and profiling. It uses the Chrome
DevTools Protocol.
V8 Inspector can be enabled by passing the --inspect flag when
starting a Node.js application. It is also possible to supply a custom
port with that flag, e.g. --inspect=9222 will accept DevTools
connections on port 9222.
To break on the first line of the application code, pass the --inspect-
brk flag instead of --inspect.
$ node --inspect index.js
Debugger listening on ws://127.0.0.1:9229/dc9010dd-f8b8-4ac5-a510-
c1a114ec7d29
For help, see: https://nodejs.org/en/docs/inspector
(In the example above, the UUID dc9010dd-f8b8-4ac5-a510-
c1a114ec7d29 at the end of the URL is generated on the fly; it varies
between debugging sessions.)
If the Chrome browser is older than 66.0.3345.0, use inspector.html
instead of js_app.html in the above URL.
Chrome DevTools doesn’t support debugging worker threads yet.
ndb can be used to debug them.
DNS
Stability: 2 - Stable
The node:dns module enables name resolution. For example, use it to
look up IP addresses of host names.
Although named for the Domain Name System (DNS), it does not
always use the DNS protocol for lookups. dns.lookup() uses the
operating system facilities to perform name resolution. It may not
need to perform any network communication. To perform name
resolution the way other applications on the same system do, use
dns.lookup().
const dns = require('node:dns');
dns.lookup('example.org', (err, address, family) => {
console.log('address: %j family: IPv%s', address, family);
});
// address: "93.184.216.34" family: IPv4
All other functions in the node:dns module connect to an actual DNS
server to perform name resolution. They will always use the network
to perform DNS queries. These functions do not use the same set of
configuration files used by dns.lookup() (e.g. /etc/hosts). Use these
functions to always perform DNS queries, bypassing other name-
resolution facilities.
const dns = require('node:dns');
dns.resolve4('archive.org', (err, addresses) => {
if (err) throw err;
console.log(`addresses: ${JSON.stringify(addresses)}`);
addresses.forEach((a) => {
dns.reverse(a, (err, hostnames) => {
if (err) {
throw err;
}
console.log(`reverse for ${a}: ${JSON.stringify(hostnames)}`);
});
});
});
See the Implementation considerations section for more
information.
Class: dns.Resolver
An independent resolver for DNS requests.
Creating a new resolver uses the default server settings. Setting the
servers used for a resolver using resolver.setServers() does not affect
other resolvers:
const { Resolver } = require('node:dns');
const resolver = new Resolver();
resolver.setServers(['4.4.4.4']);
// This request will use the server at 4.4.4.4, independent of global settings.
resolver.resolve4('example.org', (err, addresses) => {
// ...
});
The following methods from the node:dns module are available:
resolver.getServers()
resolver.resolve()
resolver.resolve4()
resolver.resolve6()
resolver.resolveAny()
resolver.resolveCaa()
resolver.resolveCname()
resolver.resolveMx()
resolver.resolveNaptr()
resolver.resolveNs()
resolver.resolvePtr()
resolver.resolveSoa()
resolver.resolveSrv()
resolver.resolveTxt()
resolver.reverse()
resolver.setServers()
Resolver([options])
Create a new resolver.
options {Object}
timeout {integer} Query timeout in milliseconds, or -1 to use
the default timeout.
tries {integer} The number of times the resolver will try
contacting each name server before giving up. Default: 4
resolver.cancel()
Cancel all outstanding DNS queries made by this resolver. The
corresponding callbacks will be called with an error with code
ECANCELLED.
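A minimal sketch combining constructor options with cancellation (the
timeout and tries values are illustrative):
const { Resolver } = require('node:dns');
// 500 ms query timeout, two tries per name server.
const resolver = new Resolver({ timeout: 500, tries: 2 });
resolver.resolve4('example.org', (err, addresses) => {
  if (err) {
    // Queries aborted by cancel() fail with code 'ECANCELLED'.
    console.error(err.code);
    return;
  }
  console.log(addresses);
});
// Abort every query still in flight on this resolver.
resolver.cancel();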
resolver.setLocalAddress([ipv4][, ipv6])
ipv4 {string} A string representation of an IPv4 address.
Default: '0.0.0.0'
ipv6 {string} A string representation of an IPv6 address.
Default: '::0'
The resolver instance will send its requests from the specified IP
address. This allows programs to specify outbound interfaces when
used on multi-homed systems.
If a v4 or v6 address is not specified, it is set to the default and the
operating system will choose a local address automatically.
The resolver will use the v4 local address when making requests to
IPv4 DNS servers, and the v6 local address when making requests to
IPv6 DNS servers. The rrtype of resolution requests has no impact on
the local address used.
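A minimal sketch (the local addresses are illustrative and must be
assigned to an interface on the host):
const { Resolver } = require('node:dns');
const resolver = new Resolver();
// Send queries from specific local IPv4 and IPv6 addresses.
resolver.setLocalAddress('192.0.2.10', '2001:db8::10');
resolver.resolve4('example.org', (err, addresses) => {
  if (err) throw err;
  console.log(addresses);
});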
dns.getServers()
Returns: {string[]}
Returns an array of IP address strings, formatted according to RFC
5952, that are currently configured for DNS resolution. A string will
include a port section if a custom port is used.
[
'4.4.4.4',
'2001:4860:4860::8888',
'4.4.4.4:1053',
'[2001:4860:4860::8888]:1053',
]
dns.lookup(hostname[, options],
callback)
hostname {string}
options {integer | Object}
family {integer|string} The record family. Must be 4, 6, or 0.
For backward compatibility reasons, 'IPv4' and 'IPv6' are
interpreted as 4 and 6 respectively. The value 0 indicates that
IPv4 and IPv6 addresses are both returned. Default: 0.
hints {number} One or more supported getaddrinfo flags.
Multiple flags may be passed by bitwise ORing their values.
all {boolean} When true, the callback returns all resolved
addresses in an array. Otherwise, returns a single address.
Default: false.
verbatim {boolean} When true, the callback receives IPv4 and
IPv6 addresses in the order the DNS resolver returned them.
When false, IPv4 addresses are placed before IPv6
addresses. Default: true (addresses are not reordered).
Default value is configurable using
dns.setDefaultResultOrder() or --dns-result-order.
callback {Function}
err {Error}
address {string} A string representation of an IPv4 or IPv6
address.
family {integer} 4 or 6, denoting the family of address, or 0 if
the address is not an IPv4 or IPv6 address. 0 is a likely
indicator of a bug in the name resolution service used by the
operating system.
Resolves a host name (e.g. 'nodejs.org') into the first found A (IPv4)
or AAAA (IPv6) record. All option properties are optional. If options
is an integer, then it must be 4 or 6 – if options is 0 or not provided,
then IPv4 and IPv6 addresses are both returned if found.
With the all option set to true, the arguments for callback change to
(err, addresses), with addresses being an array of objects with the
properties address and family.
On error, err is an Error object, where err.code is the error code.
Keep in mind that err.code will be set to 'ENOTFOUND' not only when
the host name does not exist but also when the lookup fails in other
ways such as no available file descriptors.
dns.lookup() does not necessarily have anything to do with the DNS
protocol. The implementation uses an operating system facility that
can associate names with addresses and vice versa. This
implementation can have subtle but important consequences on the
behavior of any Node.js program. Please take some time to consult
the Implementation considerations section before using
dns.lookup().
Example usage:
const dns = require('node:dns');
const options = {
family: 6,
hints: dns.ADDRCONFIG | dns.V4MAPPED,
};
dns.lookup('example.com', options, (err, address, family) =>
console.log('address: %j family: IPv%s', address, family));
// address: "2606:2800:220:1:248:1893:25c8:1946" family: IPv6
// When options.all is true, the result will be an Array.
options.all = true;
dns.lookup('example.com', options, (err, addresses) =>
console.log('addresses: %j', addresses));
// addresses: [{"address":"2606:2800:220:1:248:1893:25c8:1946","family":6}]
If this method is invoked as its util.promisify()ed version, and all is
not set to true, it returns a Promise for an Object with address and
family properties.
Supported getaddrinfo flags
The following flags can be passed as hints to dns.lookup().
dns.ADDRCONFIG: Limits returned address types to the types of
non-loopback addresses configured on the system. For example,
IPv4 addresses are only returned if the current system has at
least one IPv4 address configured.
dns.V4MAPPED: If the IPv6 family was specified, but no IPv6
addresses were found, then return IPv4 mapped IPv6 addresses.
It is not supported on some operating systems (e.g. FreeBSD
10.1).
dns.ALL: If dns.V4MAPPED is specified, return resolved IPv6
addresses as well as IPv4 mapped IPv6 addresses.
dns.lookupService(address, port,
callback)
address {string}
port {number}
callback {Function}
err {Error}
hostname {string} e.g. example.com
service {string} e.g. http
Resolves the given address and port into a host name and service
using the operating system’s underlying getnameinfo implementation.
If address is not a valid IP address, a TypeError will be thrown. The
port will be coerced to a number. If it is not a legal port, a TypeError
will be thrown.
On an error, err is an Error object, where err.code is the error code.
const dns = require('node:dns');
dns.lookupService('127.0.0.1', 22, (err, hostname, service) => {
console.log(hostname, service);
// Prints: localhost ssh
});
If this method is invoked as its util.promisify()ed version, it returns
a Promise for an Object with hostname and service properties.
dns.resolve(hostname[, rrtype],
callback)
hostname {string} Host name to resolve.
rrtype {string} Resource record type. Default: 'A'.
callback {Function}
err {Error}
records {string[] | Object[] | Object}
Uses the DNS protocol to resolve a host name (e.g. 'nodejs.org') into
an array of the resource records. The callback function has
arguments (err, records). When successful, records will be an array
of resource records. The type and structure of individual results
varies based on rrtype:
rrtype    records contains                Result type   Shorthand method
'A'       IPv4 addresses (default)        {string}      dns.resolve4()
'AAAA'    IPv6 addresses                  {string}      dns.resolve6()
'ANY'     any records                     {Object}      dns.resolveAny()
'CAA'     CA authorization records        {Object}      dns.resolveCaa()
'CNAME'   canonical name records          {string}      dns.resolveCname()
'MX'      mail exchange records           {Object}      dns.resolveMx()
'NAPTR'   name authority pointer records  {Object}      dns.resolveNaptr()
'NS'      name server records             {string}      dns.resolveNs()
'PTR'     pointer records                 {string}      dns.resolvePtr()
'SOA'     start of authority records      {Object}      dns.resolveSoa()
'SRV'     service records                 {Object}      dns.resolveSrv()
'TXT'     text records                    {string[]}    dns.resolveTxt()
On error, err is an Error object, where err.code is one of the DNS
error codes.
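For example, passing an rrtype of 'MX' yields objects rather than
strings (a minimal, illustrative sketch):
const dns = require('node:dns');
dns.resolve('example.com', 'MX', (err, records) => {
  if (err) throw err;
  // records is an array of { priority, exchange } objects.
  console.log(records);
});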
dns.resolve4(hostname[, options],
callback)
hostname {string} Host name to resolve.
options {Object}
ttl {boolean} Retrieves the Time-To-Live value (TTL) of each
record. When true, the callback receives an array of {
address: '1.2.3.4', ttl: 60 } objects rather than an array of
strings, with the TTL expressed in seconds.
callback {Function}
err {Error}
addresses {string[] | Object[]}
Uses the DNS protocol to resolve IPv4 addresses (A records) for the
hostname. The addresses argument passed to the callback function will
contain an array of IPv4 addresses (e.g. ['74.125.79.104',
'74.125.79.105', '74.125.79.106']).
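For example, with the ttl option enabled (the output shown is
illustrative; actual values will vary):
const dns = require('node:dns');
dns.resolve4('example.com', { ttl: true }, (err, addresses) => {
  if (err) throw err;
  // e.g. [ { address: '93.184.216.34', ttl: 300 } ]
  console.log(addresses);
});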
dns.resolve6(hostname[, options],
callback)
hostname {string} Host name to resolve.
options {Object}
ttl {boolean} Retrieve the Time-To-Live value (TTL) of each
record. When true, the callback receives an array of {
address: '0:1:2:3:4:5:6:7', ttl: 60 } objects rather than an
array of strings, with the TTL expressed in seconds.
callback {Function}
err {Error}
addresses {string[] | Object[]}
Uses the DNS protocol to resolve IPv6 addresses (AAAA records) for
the hostname. The addresses argument passed to the callback function
will contain an array of IPv6 addresses.
dns.resolveAny(hostname, callback)
hostname {string}
callback {Function}
err {Error}
ret {Object[]}
Uses the DNS protocol to resolve all records (also known as ANY or *
query). The ret argument passed to the callback function will be an
array containing various types of records. Each object has a property
type that indicates the type of the current record. And depending on
the type, additional properties will be present on the object:
Type     Properties
'A'      address/ttl
'AAAA'   address/ttl
'CNAME'  value
'MX'     Refer to dns.resolveMx()
'NAPTR'  Refer to dns.resolveNaptr()
'NS'     value
'PTR'    value
'SOA'    Refer to dns.resolveSoa()
'SRV'    Refer to dns.resolveSrv()
'TXT'    This type of record contains an array property called
         entries which refers to dns.resolveTxt(), e.g. { entries:
         ['...'], type: 'TXT' }
Here is an example of the ret object passed to the callback:
[ { type: 'A', address: '127.0.0.1', ttl: 299 },
  { type: 'CNAME', value: 'example.com' },
  { type: 'MX', exchange: 'alt4.aspmx.l.example.com', priority: 50 },
  { type: 'NS', value: 'ns1.example.com' },
  { type: 'TXT', entries: [ 'v=spf1 include:_spf.example.com ~all' ] },
  { type: 'SOA',
    nsname: 'ns1.example.com',
    hostmaster: 'admin.example.com',
    serial: 156696742,
    refresh: 900,
    retry: 900,
    expire: 1800,
    minttl: 60 } ]
DNS server operators may choose not to respond to ANY queries. It
may be better to call individual methods like dns.resolve4(),
dns.resolveMx(), and so on. For more details, see RFC 8482.
dns.resolveCname(hostname,
callback)
hostname {string}
callback {Function}
err {Error}
addresses {string[]}
Uses the DNS protocol to resolve CNAME records for the hostname. The
addresses argument passed to the callback function will contain an
array of canonical name records available for the hostname
(e.g. ['bar.example.com']).
dns.resolveCaa(hostname, callback)
hostname {string}
callback {Function}
err {Error}
records {Object[]}
Uses the DNS protocol to resolve CAA records for the hostname. The
records argument passed to the callback function will contain an
array of certification authority authorization records available for the
hostname (e.g. [{critical: 0, iodef: 'mailto:pki@example.com'},
{critical: 128, issue: 'pki.example.com'}]).
dns.resolveMx(hostname, callback)
hostname {string}
callback {Function}
err {Error}
addresses {Object[]}
Uses the DNS protocol to resolve mail exchange records (MX records)
for the hostname. The addresses argument passed to the callback
function will contain an array of objects containing both a priority
and exchange property (e.g. [{priority: 10, exchange:
'mx.example.com'}, ...]).
dns.resolveNaptr(hostname,
callback)
hostname {string}
callback {Function}
err {Error}
addresses {Object[]}
Uses the DNS protocol to resolve regular expression-based records
(NAPTR records) for the hostname. The addresses argument passed to
the callback function will contain an array of objects with the
following properties:
flags
service
regexp
replacement
order
preference
{
flags: 's',
service: 'SIP+D2U',
regexp: '',
replacement: '_sip._udp.example.com',
order: 30,
preference: 100
}
dns.resolveNs(hostname, callback)
hostname {string}
callback {Function}
err {Error}
addresses {string[]}
Uses the DNS protocol to resolve name server records (NS records)
for the hostname. The addresses argument passed to the callback
function will contain an array of name server records available for
hostname (e.g. ['ns1.example.com', 'ns2.example.com']).
dns.resolvePtr(hostname, callback)
hostname {string}
callback {Function}
err {Error}
addresses {string[]}
Uses the DNS protocol to resolve pointer records (PTR records) for
the hostname. The addresses argument passed to the callback function
will be an array of strings containing the reply records.
dns.resolveSoa(hostname, callback)
hostname {string}
callback {Function}
err {Error}
address {Object}
Uses the DNS protocol to resolve a start of authority record (SOA
record) for the hostname. The address argument passed to the callback
function will be an object with the following properties:
nsname
hostmaster
serial
refresh
retry
expire
minttl
{
nsname: 'ns.example.com',
hostmaster: 'root.example.com',
serial: 2013101809,
refresh: 10000,
retry: 2400,
expire: 604800,
minttl: 3600
}
dns.resolveSrv(hostname, callback)
hostname {string}
callback {Function}
err {Error}
addresses {Object[]}
Uses the DNS protocol to resolve service records (SRV records) for the
hostname. The addresses argument passed to the callback function will
be an array of objects with the following properties:
priority
weight
port
name
{
priority: 10,
weight: 5,
port: 21223,
name: 'service.example.com'
}
dns.resolveTxt(hostname, callback)
hostname {string}
callback {Function}
err {Error}
records <string[][]>
Uses the DNS protocol to resolve text queries (TXT records) for the
hostname. The records argument passed to the callback function is a
two-dimensional array of the text records available for hostname (e.g.
[ ['v=spf1 ip4:0.0.0.0 ', '~all' ] ]). Each sub-array contains TXT
chunks of one record. Depending on the use case, these could be
either joined together or treated separately.
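For example, joining the chunks of each TXT record into a single
string (a minimal sketch):
const dns = require('node:dns');
dns.resolveTxt('example.com', (err, records) => {
  if (err) throw err;
  // Each sub-array holds the chunks of one record; join them back together.
  const txts = records.map((chunks) => chunks.join(''));
  console.log(txts);
});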
dns.reverse(ip, callback)
ip {string}
callback {Function}
err {Error}
hostnames {string[]}
Performs a reverse DNS query that resolves an IPv4 or IPv6 address
to an array of host names.
On error, err is an Error object, where err.code is one of the DNS
error codes.
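Example usage (the output shown is illustrative and depends on the
PTR records published for the address):
const dns = require('node:dns');
dns.reverse('8.8.8.8', (err, hostnames) => {
  if (err) throw err;
  console.log(hostnames); // e.g. [ 'dns.google' ]
});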
dns.setDefaultResultOrder(order)
order {string} must be 'ipv4first' or 'verbatim'.
Set the default value of verbatim in dns.lookup() and
dnsPromises.lookup(). The value could be:
ipv4first: sets default verbatim false.
verbatim: sets default verbatim true.
The default is verbatim, and dns.setDefaultResultOrder() has higher
priority than --dns-result-order. When using worker threads,
dns.setDefaultResultOrder() from the main thread won’t affect the
default DNS order in workers.
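For example, to place IPv4 addresses before IPv6 addresses in all
subsequent lookups (a minimal sketch):
const dns = require('node:dns');
// Subsequent dns.lookup()/dnsPromises.lookup() calls default to
// verbatim: false, i.e. IPv4 addresses come first.
dns.setDefaultResultOrder('ipv4first');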
dns.getDefaultResultOrder()
Get the default value for verbatim in dns.lookup() and
dnsPromises.lookup(). The value could be:
ipv4first: for verbatim defaulting to false.
verbatim: for verbatim defaulting to true.
dns.setServers(servers)
servers {string[]} array of RFC 5952 formatted addresses
Sets the IP address and port of servers to be used when performing
DNS resolution. The servers argument is an array of RFC 5952
formatted addresses. If the port is the IANA default DNS port (53) it
can be omitted.
dns.setServers([
'4.4.4.4',
'[2001:4860:4860::8888]',
'4.4.4.4:1053',
'[2001:4860:4860::8888]:1053',
]);
An error will be thrown if an invalid address is provided.
The dns.setServers() method must not be called while a DNS query is
in progress.
The dns.setServers() method affects only dns.resolve(), dns.resolve*
() and dns.reverse() (and specifically not dns.lookup()).
This method works much like resolv.conf. That is, if attempting to
resolve with the first server provided results in a NOTFOUND error, the
resolve() method will not attempt to resolve with subsequent servers
provided. Fallback DNS servers will only be used if the earlier ones
time out or result in some other error.
DNS promises API
The dns.promises API provides an alternative set of asynchronous
DNS methods that return Promise objects rather than using callbacks.
The API is accessible via require('node:dns').promises or
require('node:dns/promises').
Class: dnsPromises.Resolver
An independent resolver for DNS requests.
Creating a new resolver uses the default server settings. Setting the
servers used for a resolver using resolver.setServers() does not affect
other resolvers:
const { Resolver } = require('node:dns').promises;
const resolver = new Resolver();
resolver.setServers(['4.4.4.4']);
// This request will use the server at 4.4.4.4, independent of global settings.
resolver.resolve4('example.org').then((addresses) => {
  // ...
});
// Alternatively, the same code can be written using async-await style.
(async function() {
const addresses = await resolver.resolve4('example.org');
})();
The following methods from the dnsPromises API are available:
resolver.getServers()
resolver.resolve()
resolver.resolve4()
resolver.resolve6()
resolver.resolveAny()
resolver.resolveCaa()
resolver.resolveCname()
resolver.resolveMx()
resolver.resolveNaptr()
resolver.resolveNs()
resolver.resolvePtr()
resolver.resolveSoa()
resolver.resolveSrv()
resolver.resolveTxt()
resolver.reverse()
resolver.setServers()
resolver.cancel()
Cancel all outstanding DNS queries made by this resolver. The
corresponding promises will be rejected with an error with the code
ECANCELLED.
dnsPromises.getServers()
Returns: {string[]}
Returns an array of IP address strings, formatted according to RFC
5952, that are currently configured for DNS resolution. A string will
include a port section if a custom port is used.
[
'4.4.4.4',
'2001:4860:4860::8888',
'4.4.4.4:1053',
'[2001:4860:4860::8888]:1053',
]
dnsPromises.lookup(hostname[, options])
hostname {string}
options {integer | Object}
family {integer} The record family. Must be 4, 6, or 0. The
value 0 indicates that IPv4 and IPv6 addresses are both
returned. Default: 0.
hints {number} One or more supported getaddrinfo flags.
Multiple flags may be passed by bitwise ORing their values.
all {boolean} When true, the Promise is resolved with all
addresses in an array. Otherwise, returns a single address.
Default: false.
verbatim {boolean} When true, the Promise is resolved with
IPv4 and IPv6 addresses in the order the DNS resolver
returned them. When false, IPv4 addresses are placed before
IPv6 addresses. Default: true (addresses are not reordered).
Default value is configurable using
dns.setDefaultResultOrder() or --dns-result-order.
Resolves a host name (e.g. 'nodejs.org') into the first found A (IPv4)
or AAAA (IPv6) record. All option properties are optional. If options
is an integer, then it must be 4 or 6 – if options is 0 or not provided, then
IPv4 and IPv6 addresses are both returned if found.
With the all option set to true, the Promise is resolved with addresses
being an array of objects with the properties address and family.
On error, the Promise is rejected with an Error object, where err.code
is the error code. Keep in mind that err.code will be set to 'ENOTFOUND'
not only when the host name does not exist but also when the lookup
fails in other ways such as no available file descriptors.
dnsPromises.lookup() does not necessarily have anything to do with
the DNS protocol. The implementation uses an operating system
facility that can associate names with addresses and vice versa. This
implementation can have subtle but important consequences on the
behavior of any Node.js program. Please take some time to consult
the Implementation considerations section before using
dnsPromises.lookup().
Example usage:
const dns = require('node:dns');
const dnsPromises = dns.promises;
const options = {
family: 6,
hints: dns.ADDRCONFIG | dns.V4MAPPED,
};
dnsPromises.lookup('example.com', options).then((result) => {
console.log('address: %j family: IPv%s', result.address, result.family);
// address: "2606:2800:220:1:248:1893:25c8:1946" family: IPv6
});
// When options.all is true, the result will be an Array.
options.all = true;
dnsPromises.lookup('example.com', options).then((result) => {
console.log('addresses: %j', result);
// addresses: [{"address":"2606:2800:220:1:248:1893:25c8:1946","family":6}]
});
dnsPromises.lookupService(address, port)
address {string}
port {number}
Resolves the given address and port into a host name and service
using the operating system’s underlying getnameinfo implementation.
If address is not a valid IP address, a TypeError will be thrown. The
port will be coerced to a number. If it is not a legal port, a TypeError
will be thrown.
On error, the Promise is rejected with an Error object, where err.code
is the error code.
const dnsPromises = require('node:dns').promises;
dnsPromises.lookupService('127.0.0.1', 22).then((result) => {
console.log(result.hostname, result.service);
// Prints: localhost ssh
});
dnsPromises.resolve(hostname[, rrtype])
hostname {string} Host name to resolve.
rrtype {string} Resource record type. Default: 'A'.
Uses the DNS protocol to resolve a host name (e.g. 'nodejs.org') into
an array of the resource records. When successful, the Promise is
resolved with an array of resource records. The type and structure of
individual results vary based on rrtype:
rrtype    records contains                Result type   Shorthand method
'A'       IPv4 addresses (default)        {string}      dnsPromises.resolve4()
'AAAA'    IPv6 addresses                  {string}      dnsPromises.resolve6()
'ANY'     any records                     {Object}      dnsPromises.resolveAny()
'CAA'     CA authorization records        {Object}      dnsPromises.resolveCaa()
'CNAME'   canonical name records          {string}      dnsPromises.resolveCname()
'MX'      mail exchange records           {Object}      dnsPromises.resolveMx()
'NAPTR'   name authority pointer records  {Object}      dnsPromises.resolveNaptr()
'NS'      name server records             {string}      dnsPromises.resolveNs()
'PTR'     pointer records                 {string}      dnsPromises.resolvePtr()
'SOA'     start of authority records      {Object}      dnsPromises.resolveSoa()
'SRV'     service records                 {Object}      dnsPromises.resolveSrv()
'TXT'     text records                    {string[]}    dnsPromises.resolveTxt()
On error, the Promise is rejected with an Error object, where err.code
is one of the DNS error codes.
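For example (a minimal sketch using the promises API; the rrtype and
hostname are illustrative):
const dnsPromises = require('node:dns').promises;
dnsPromises.resolve('example.com', 'MX')
  .then((records) => console.log(records))
  .catch((err) => console.error(err.code));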
dnsPromises.resolve4(hostname[, options])
hostname {string} Host name to resolve.
options {Object}
ttl {boolean} Retrieve the Time-To-Live value (TTL) of each
record. When true, the Promise is resolved with an array of {
address: '1.2.3.4', ttl: 60 } objects rather than an array of
strings, with the TTL expressed in seconds.
Uses the DNS protocol to resolve IPv4 addresses (A records) for the
hostname. On success, the Promise is resolved with an array of IPv4
addresses (e.g. ['74.125.79.104', '74.125.79.105', '74.125.79.106']).
dnsPromises.resolve6(hostname[, options])
hostname {string} Host name to resolve.
options {Object}
ttl {boolean} Retrieve the Time-To-Live value (TTL) of each
record. When true, the Promise is resolved with an array of {
address: '0:1:2:3:4:5:6:7', ttl: 60 } objects rather than an
array of strings, with the TTL expressed in seconds.
Uses the DNS protocol to resolve IPv6 addresses (AAAA records) for
the hostname. On success, the Promise is resolved with an array of IPv6
addresses.
dnsPromises.resolveAny(hostname)
hostname {string}
Uses the DNS protocol to resolve all records (also known as ANY or *
query). On success, the Promise is resolved with an array containing
various types of records. Each object has a property type that
indicates the type of the current record. And depending on the type,
additional properties will be present on the object:
Type     Properties
'A'      address/ttl
'AAAA'   address/ttl
'CNAME'  value
'MX'     Refer to dnsPromises.resolveMx()
'NAPTR'  Refer to dnsPromises.resolveNaptr()
'NS'     value
'PTR'    value
'SOA'    Refer to dnsPromises.resolveSoa()
'SRV'    Refer to dnsPromises.resolveSrv()
'TXT'    This type of record contains an array property called
         entries which refers to dnsPromises.resolveTxt(), e.g.
         { entries: ['...'], type: 'TXT' }
Here is an example of the result object:
[ { type: 'A', address: '127.0.0.1', ttl: 299 },
  { type: 'CNAME', value: 'example.com' },
  { type: 'MX', exchange: 'alt4.aspmx.l.example.com', priority: 50 },
  { type: 'NS', value: 'ns1.example.com' },
  { type: 'TXT', entries: [ 'v=spf1 include:_spf.example.com ~all' ] },
  { type: 'SOA',
    nsname: 'ns1.example.com',
    hostmaster: 'admin.example.com',
    serial: 156696742,
    refresh: 900,
    retry: 900,
    expire: 1800,
    minttl: 60 } ]
dnsPromises.resolveCaa(hostname)
hostname {string}
Uses the DNS protocol to resolve CAA records for the hostname. On
success, the Promise is resolved with an array of certification
authority authorization records available for the hostname
(e.g. [{critical: 0, iodef: 'mailto:pki@example.com'},
{critical: 128, issue: 'pki.example.com'}]).
dnsPromises.resolveCname(hostname)
hostname {string}
Uses the DNS protocol to resolve CNAME records for the hostname. On
success, the Promise is resolved with an array of canonical name
records available for the hostname (e.g. ['bar.example.com']).
dnsPromises.resolveMx(hostname)
hostname {string}
Uses the DNS protocol to resolve mail exchange records (MX records)
for the hostname. On success, the Promise is resolved with an array of
objects containing both a priority and exchange property (e.g.
[{priority: 10, exchange: 'mx.example.com'}, ...]).
dnsPromises.resolveNaptr(hostname)
hostname {string}
Uses the DNS protocol to resolve regular expression-based records
(NAPTR records) for the hostname. On success, the Promise is resolved
with an array of objects with the following properties:
flags
service
regexp
replacement
order
preference
{
flags: 's',
service: 'SIP+D2U',
regexp: '',
replacement: '_sip._udp.example.com',
order: 30,
preference: 100
}
dnsPromises.resolveNs(hostname)
hostname {string}
Uses the DNS protocol to resolve name server records (NS records)
for the hostname. On success, the Promise is resolved with an array of
name server records available for hostname (e.g. ['ns1.example.com',
'ns2.example.com']).
dnsPromises.resolvePtr(hostname)
hostname {string}
Uses the DNS protocol to resolve pointer records (PTR records) for
the hostname. On success, the Promise is resolved with an array of
strings containing the reply records.
dnsPromises.resolveSoa(hostname)
hostname {string}
Uses the DNS protocol to resolve a start of authority record (SOA
record) for the hostname. On success, the Promise is resolved with an
object with the following properties:
nsname
hostmaster
serial
refresh
retry
expire
minttl
{
nsname: 'ns.example.com',
hostmaster: 'root.example.com',
serial: 2013101809,
refresh: 10000,
retry: 2400,
expire: 604800,
minttl: 3600
}
dnsPromises.resolveSrv(hostname)
hostname {string}
Uses the DNS protocol to resolve service records (SRV records) for the
hostname. On success, the Promise is resolved with an array of objects
with the following properties:
priority
weight
port
name
{
priority: 10,
weight: 5,
port: 21223,
name: 'service.example.com'
}
dnsPromises.resolveTxt(hostname)
hostname {string}
Uses the DNS protocol to resolve text queries (TXT records) for the
hostname. On success, the Promise is resolved with a two-dimensional
array of the text records available for hostname (e.g. [ ['v=spf1
ip4:0.0.0.0 ', '~all' ] ]). Each sub-array contains TXT chunks of
one record. Depending on the use case, these could be either joined
together or treated separately.
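For example, with async/await (a minimal sketch):
const dnsPromises = require('node:dns').promises;
(async () => {
  const records = await dnsPromises.resolveTxt('example.com');
  // Each sub-array holds the chunks of one TXT record.
  console.log(records.map((chunks) => chunks.join('')));
})();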
dnsPromises.reverse(ip)
ip {string}
Performs a reverse DNS query that resolves an IPv4 or IPv6 address
to an array of host names.
On error, the Promise is rejected with an Error object, where err.code
is one of the DNS error codes.
dnsPromises.setDefaultResultOrder(order)
order {string} must be 'ipv4first' or 'verbatim'.
Set the default value of verbatim in dns.lookup() and
dnsPromises.lookup(). The value could be:
ipv4first: sets default verbatim false.
verbatim: sets default verbatim true.
The default is verbatim, and dnsPromises.setDefaultResultOrder() has
higher priority than --dns-result-order. When using worker threads,
dnsPromises.setDefaultResultOrder() from the main thread won’t
affect the default DNS order in workers.
dnsPromises.getDefaultResultOrder()
Get the default value of verbatim in dns.lookup() and
dnsPromises.lookup(). The value could be 'ipv4first' or 'verbatim'.
dnsPromises.setServers(servers)
servers {string[]} array of RFC 5952 formatted addresses
Sets the IP address and port of servers to be used when performing
DNS resolution. The servers argument is an array of RFC 5952
formatted addresses. If the port is the IANA default DNS port (53) it
can be omitted.
dnsPromises.setServers([
'4.4.4.4',
'[2001:4860:4860::8888]',
'4.4.4.4:1053',
'[2001:4860:4860::8888]:1053',
]);
An error will be thrown if an invalid address is provided.
The dnsPromises.setServers() method must not be called while a DNS
query is in progress.
This method works much like resolv.conf. That is, if attempting to
resolve with the first server provided results in a NOTFOUND error, the
resolve() method will not attempt to resolve with subsequent servers
provided. Fallback DNS servers will only be used if the earlier ones
time out or result in some other error.
Error codes
Each DNS query can return one of the following error codes:
dns.NODATA: DNS server returned an answer with no data.
dns.FORMERR: DNS server claims query was misformatted.
dns.SERVFAIL: DNS server returned general failure.
dns.NOTFOUND: Domain name not found.
dns.NOTIMP: DNS server does not implement the requested
operation.
dns.REFUSED: DNS server refused query.
dns.BADQUERY: Misformatted DNS query.
dns.BADNAME: Misformatted host name.
dns.BADFAMILY: Unsupported address family.
dns.BADRESP: Misformatted DNS reply.
dns.CONNREFUSED: Could not contact DNS servers.
dns.TIMEOUT: Timeout while contacting DNS servers.
dns.EOF: End of file.
dns.FILE: Error reading file.
dns.NOMEM: Out of memory.
dns.DESTRUCTION: Channel is being destroyed.
dns.BADSTR: Misformatted string.
dns.BADFLAGS: Illegal flags specified.
dns.NONAME: Given host name is not numeric.
dns.BADHINTS: Illegal hints flags specified.
dns.NOTINITIALIZED: c-ares library initialization not yet
performed.
dns.LOADIPHLPAPI: Error loading iphlpapi.dll.
dns.ADDRGETNETWORKPARAMS: Could not find GetNetworkParams
function.
dns.CANCELLED: DNS query cancelled.
The dnsPromises API also exports the above error codes, e.g.,
dnsPromises.NODATA.
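These codes are plain string constants, so they can be compared
directly against err.code. A minimal sketch (the .invalid TLD is
reserved and will never resolve):
const dns = require('node:dns');
dns.resolve4('nonexistent.invalid', (err, addresses) => {
  if (err && err.code === dns.NOTFOUND) {
    console.log('domain name not found');
  }
});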
Implementation considerations
Although dns.lookup() and the various dns.resolve*()/dns.reverse()
functions have the same goal of associating a network name with a
network address (or vice versa), their behavior is quite different.
These differences can have subtle but significant consequences on
the behavior of Node.js programs.
dns.lookup()
Under the hood, dns.lookup() uses the same operating system
facilities as most other programs. For instance, dns.lookup() will
almost always resolve a given name the same way as the ping
command. On most POSIX-like operating systems, the behavior of
the dns.lookup() function can be modified by changing settings in
nsswitch.conf(5) and/or resolv.conf(5), but changing these files will
change the behavior of all other programs running on the same
operating system.
Though the call to dns.lookup() will be asynchronous from
JavaScript’s perspective, it is implemented as a synchronous call to
getaddrinfo(3) that runs on libuv’s threadpool. This can have
surprising negative performance implications for some applications,
see the UV_THREADPOOL_SIZE documentation for more information.
Various networking APIs will call dns.lookup() internally to resolve
host names. If that is an issue, consider resolving the host name to
an address using dns.resolve() and using the address instead of a
host name. Also, some networking APIs (such as socket.connect()
and dgram.createSocket()) allow the default resolver, dns.lookup(), to
be replaced.
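A minimal sketch of the resolve-first pattern described above (the
host and port are illustrative):
const dns = require('node:dns');
const net = require('node:net');
// Resolve via the DNS protocol, then connect by address so the
// connection itself never touches dns.lookup() or the threadpool.
dns.resolve4('example.com', (err, addresses) => {
  if (err) throw err;
  const socket = net.connect({ host: addresses[0], port: 80 }, () => {
    console.log('connected to', addresses[0]);
    socket.end();
  });
  socket.on('error', console.error);
});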
dns.resolve(), dns.resolve*(), and
dns.reverse()
These functions are implemented quite differently than dns.lookup().
They do not use getaddrinfo(3) and they always perform a DNS
query on the network. This network communication is always done
asynchronously and does not use libuv’s threadpool.
As a result, these functions cannot have the same negative impact on
other processing that happens on libuv’s threadpool that
dns.lookup() can have.
They do not use the same set of configuration files that dns.lookup()
uses. For instance, they do not use the configuration from /etc/hosts.
Domain
Stability: 0 - Deprecated
This module is pending deprecation. Once a replacement API
has been finalized, this module will be fully deprecated. Most
developers should not have cause to use this module. Users who
absolutely must have the functionality that domains provide may rely
on it for the time being but should expect to have to migrate to a
different solution in the future.
Domains provide a way to handle multiple different IO operations as
a single group. If any of the event emitters or callbacks registered to a
domain emit an 'error' event, or throw an error, then the domain
object will be notified, rather than losing the context of the error in
the process.on('uncaughtException') handler, or causing the program
to exit immediately with an error code.
Warning: Don’t ignore errors!
Domain error handlers are not a substitute for closing down a
process when an error occurs.
By the very nature of how throw works in JavaScript, there is almost
never any way to safely “pick up where it left off”, without leaking
references, or creating some other sort of undefined brittle state.
The safest way to respond to a thrown error is to shut down the
process. Of course, in a normal web server, there may be many open
connections, and it is not reasonable to abruptly shut those down
because an error was triggered by someone else.
The better approach is to send an error response to the request that
triggered the error, while letting the others finish in their normal
time, and stop listening for new requests in that worker.
In this way, domain usage goes hand-in-hand with the cluster module,
since the primary process can fork a new worker when a worker
encounters an error. For Node.js programs that scale to multiple
machines, the terminating proxy or service registry can take note of
the failure, and react accordingly.
For example, this is not a good idea:
// XXX WARNING! BAD IDEA!
const d = require('node:domain').create();
d.on('error', (er) => {
// The error won't crash the process, but what it does is worse!
// Though we've prevented abrupt process restarting, we are leaking
// a lot of resources if this ever happens.
// This is no better than process.on('uncaughtException')!
console.log(`error, but oh well ${er.message}`);
});
d.run(() => {
require('node:http').createServer((req, res) => {
handleRequest(req, res);
}).listen(PORT);
});
By using the context of a domain, and the resilience of separating our
program into multiple worker processes, we can react more
appropriately, and handle errors with much greater safety.
// Much better!
const cluster = require('node:cluster');
const PORT = +process.env.PORT || 1337;
if (cluster.isPrimary) {
// A more realistic scenario would have more than 2 workers,
// and perhaps not put the primary and worker in the same file.
//
// It is also possible to get a bit fancier about logging, and
// implement whatever custom logic is needed to prevent DoS
// attacks and other bad behavior.
//
// See the options in the cluster documentation.
//
// The important thing is that the primary does very little,
// increasing our resilience to unexpected errors.
cluster.fork();
cluster.fork();
cluster.on('disconnect', (worker) => {
console.error('disconnect!');
cluster.fork();
});
} else {
// the worker
//
// This is where we put our bugs!
const domain = require('node:domain');
// See the cluster documentation for more details about using
// worker processes to serve requests. How it works, caveats, etc.
const server = require('node:http').createServer((req, res) => {
const d = domain.create();
d.on('error', (er) => {
console.error(`error ${er.stack}`);
// We're in dangerous territory!
// By definition, something unexpected occurred,
// which we probably didn't want.
// Anything can happen now! Be very careful!
try {
// Make sure we close down within 30 seconds
const killtimer = setTimeout(() => {
process.exit(1);
}, 30000);
// But don't keep the process open just for that!
killtimer.unref();
// Stop taking new requests.
server.close();
// Let the primary know we're dead. This will trigger a
// 'disconnect' in the cluster primary, and then it will fork
// a new worker.
cluster.worker.disconnect();
// Try to send an error to the request that triggered the problem.
res.statusCode = 500;
res.setHeader('content-type', 'text/plain');
res.end('Oops, there was a problem!\n');
} catch (er2) {
// Oh well, not much we can do at this point.
console.error(`Error sending 500! ${er2.stack}`);
}
});
// Because req and res were created before this domain existed,
// we need to explicitly add them.
// See the explanation of implicit vs explicit binding below.
d.add(req);
d.add(res);
// Now run the handler function in the domain.
d.run(() => {
handleRequest(req, res);
});
});
server.listen(PORT);
}
// This part is not important. Just an example routing thing.
// Put fancy application logic here.
function handleRequest(req, res) {
switch (req.url) {
case '/error':
// We do some async stuff, and then...
setTimeout(() => {
// Whoops!
flerb.bark();
}, timeout);
break;
default:
res.end('ok');
}
}
Additions to Error objects
Any time an Error object is routed through a domain, a few extra
fields are added to it.
error.domain The domain that first handled the error.
error.domainEmitter The event emitter that emitted an 'error'
event with the error object.
error.domainBound The callback function which was bound to the
domain, and passed an error as its first argument.
error.domainThrown A boolean indicating whether the error was
thrown, emitted, or passed to a bound callback function.
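A minimal sketch that inspects these fields (illustrative):
const domain = require('node:domain');
const d = domain.create();
d.on('error', (er) => {
  console.log(er.domain === d);  // true: the domain that handled the error
  console.log(er.domainThrown);  // true: the error was thrown, not emitted
});
d.run(() => {
  setTimeout(() => { throw new Error('boom'); }, 10);
});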
Implicit binding
If domains are in use, then all new EventEmitter objects (including
Stream objects, requests, responses, etc.) will be implicitly bound to
the active domain at the time of their creation.
Additionally, callbacks passed to low-level event loop requests (such
as to fs.open(), or other callback-taking methods) will automatically
be bound to the active domain. If they throw, then the domain will
catch the error.
In order to prevent excessive memory usage, Domain objects
themselves are not implicitly added as children of the active domain.
If they were, then it would be too easy to prevent request and
response objects from being properly garbage collected.
To nest Domain objects as children of a parent Domain they must be
explicitly added.
Implicit binding routes thrown errors and 'error' events to the
Domain’s 'error' event, but does not register the EventEmitter on the
Domain. Implicit binding only takes care of thrown errors and 'error'
events.
Explicit binding
Sometimes, the domain in use is not the one that ought to be used for
a specific event emitter. Or, the event emitter could have been
created in the context of one domain, but ought to instead be bound
to some other domain.
For example, there could be one domain in use for an HTTP server,
but perhaps we would like to have a separate domain to use for each
request.
That is possible via explicit binding.
// Create a top-level domain for the server
const domain = require('node:domain');
const http = require('node:http');
const serverDomain = domain.create();
serverDomain.run(() => {
// Server is created in the scope of serverDomain
http.createServer((req, res) => {
// Req and res are also created in the scope of serverDomain
// however, we'd prefer to have a separate domain for each request.
// create it first thing, and add req and res to it.
const reqd = domain.create();
reqd.add(req);
reqd.add(res);
reqd.on('error', (er) => {
console.error('Error', er, req.url);
try {
res.writeHead(500);
res.end('Error occurred, sorry.');
} catch (er2) {
console.error('Error sending 500', er2, req.url);
}
});
}).listen(1337);
});
domain.create()
Returns: {Domain}
Class: Domain
Extends: {EventEmitter}
The Domain class encapsulates the functionality of routing errors and
uncaught exceptions to the active Domain object.
To handle the errors that it catches, listen to its 'error' event.
domain.members
{Array}
An array of timers and event emitters that have been explicitly added
to the domain.
domain.add(emitter)
emitter {EventEmitter|Timer} emitter or timer to be added to the
domain
Explicitly adds an emitter to the domain. If any event handlers called
by the emitter throw an error, or if the emitter emits an 'error'
event, it will be routed to the domain’s 'error' event, just like with
implicit binding.
This also works with timers that are returned from setInterval() and
setTimeout(). If their callback function throws, it will be caught by the
domain 'error' handler.
If the Timer or EventEmitter was already bound to a domain, it is
removed from that one, and bound to this one instead.
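For example, binding a timer that was created outside of any domain
(a minimal sketch):
const domain = require('node:domain');
const d = domain.create();
d.on('error', (er) => console.error('caught:', er.message));
// The timer is created outside the domain, then explicitly added.
const timer = setTimeout(() => { throw new Error('timer boom'); }, 10);
d.add(timer);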
domain.bind(callback)
callback {Function} The callback function
Returns: {Function} The bound function
The returned function will be a wrapper around the supplied callback
function. When the returned function is called, any errors that are
thrown will be routed to the domain’s 'error' event.
const domain = require('node:domain');
const fs = require('node:fs');
const d = domain.create();
function readSomeFile(filename, cb) {
fs.readFile(filename, 'utf8', d.bind((er, data) => {
// If this throws, it will also be passed to the domain.
return cb(er, data ? JSON.parse(data) : null);
}));
}
d.on('error', (er) => {
// An error occurred somewhere. If we throw it now, it will crash
// with the normal line number and stack message.
});
domain.enter()
The enter() method is plumbing used by the run(), bind(), and
intercept() methods to set the active domain. It sets domain.active
and process.domain to the domain, and implicitly pushes the domain
onto the domain stack managed by the domain module (see
domain.exit() for details on the domain stack). The call to enter()
delimits the beginning of a chain of asynchronous calls and I/O
operations bound to a domain.
Calling enter() changes only the active domain, and does not alter
the domain itself. enter() and exit() can be called an arbitrary
number of times on a single domain.
domain.exit()
The exit() method exits the current domain, popping it off the
domain stack. Any time execution is going to switch to the context of
a different chain of asynchronous calls, it’s important to ensure that
the current domain is exited. The call to exit() delimits either the
end of or an interruption to the chain of asynchronous calls and I/O
operations bound to a domain.
If there are multiple, nested domains bound to the current execution
context, exit() will exit any domains nested within this domain.
Calling exit() changes only the active domain, and does not alter the
domain itself. enter() and exit() can be called an arbitrary number
of times on a single domain.
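A minimal sketch showing how enter() and exit() manipulate the
active domain:
const domain = require('node:domain');
const d = domain.create();
d.enter();
console.log(process.domain === d); // true: d is now the active domain
d.exit();
console.log(process.domain === d); // false: d was popped off the stack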
domain.intercept(callback)
callback {Function} The callback function
Returns: {Function} The intercepted function
This method is almost identical to domain.bind(callback). However,
in addition to catching thrown errors, it will also intercept Error
objects sent as the first argument to the function.
In this way, the common if (err) return callback(err); pattern can
be replaced with a single error handler in a single place.
const domain = require('node:domain');
const fs = require('node:fs');
const d = domain.create();
function readSomeFile(filename, cb) {
fs.readFile(filename, 'utf8', d.intercept((data) => {
// Note, the first argument is never passed to the
// callback since it is assumed to be the 'Error' argument
// and thus intercepted by the domain.
// If this throws, it will also be passed to the domain
// so the error-handling logic can be moved to the 'error'
// event on the domain instead of being repeated throughout
// the program.
return cb(null, JSON.parse(data));
}));
}
d.on('error', (er) => {
// An error occurred somewhere. If we throw it now, it will crash
// with the normal line number and stack message.
});
domain.remove(emitter)
emitter {EventEmitter|Timer} emitter or timer to be removed
from the domain
The opposite of domain.add(emitter). Removes domain handling from
the specified emitter.
domain.run(fn[, ...args])
fn {Function}
...args {any}
Run the supplied function in the context of the domain, implicitly
binding all event emitters, timers, and low-level requests that are
created in that context. Optionally, arguments can be passed to the
function.
This is the most basic way to use a domain.
const domain = require('node:domain');
const fs = require('node:fs');
const d = domain.create();
d.on('error', (er) => {
console.error('Caught error!', er);
});
d.run(() => {
process.nextTick(() => {
setTimeout(() => { // Simulating some various async stuff
fs.open('non-existent file', 'r', (er, fd) => {
if (er) throw er;
// proceed...
});
}, 100);
});
});
In this example, the d.on('error') handler will be triggered, rather
than crashing the program.
Domains and promises
As of Node.js 8.0.0, the handlers of promises are run inside the
domain in which the call to .then() or .catch() itself was made:
const d1 = domain.create();
const d2 = domain.create();
let p;
d1.run(() => {
p = Promise.resolve(42);
});
d2.run(() => {
p.then((v) => {
// running in d2
});
});
A callback may be bound to a specific domain using
domain.bind(callback):
const d1 = domain.create();
const d2 = domain.create();
let p;
d1.run(() => {
p = Promise.resolve(42);
});
d2.run(() => {
p.then(p.domain.bind((v) => {
// running in d1
}));
});
Domains will not interfere with the error handling mechanisms for
promises. In other words, no 'error' event will be emitted for
unhandled Promise rejections.
Events
Stability: 2 - Stable
Much of the Node.js core API is built around an idiomatic
asynchronous event-driven architecture in which certain kinds of
objects (called “emitters”) emit named events that cause Function
objects (“listeners”) to be called.
For instance: a net.Server object emits an event each time a peer
connects to it; a fs.ReadStream emits an event when the file is opened;
a stream emits an event whenever data is available to be read.
All objects that emit events are instances of the EventEmitter class.
These objects expose an eventEmitter.on() function that allows one or
more functions to be attached to named events emitted by the object.
Typically, event names are camel-cased strings but any valid
JavaScript property key can be used.
When the EventEmitter object emits an event, all of the functions
attached to that specific event are called synchronously. Any values
returned by the called listeners are ignored and discarded.
The following example shows a simple EventEmitter instance with a
single listener. The eventEmitter.on() method is used to register
listeners, while the eventEmitter.emit() method is used to trigger the
event.
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.on('event', () => {
console.log('an event occurred!');
});
myEmitter.emit('event');
const EventEmitter = require('node:events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.on('event', () => {
console.log('an event occurred!');
});
myEmitter.emit('event');
Passing arguments and this to listeners
The eventEmitter.emit() method allows an arbitrary set of arguments
to be passed to the listener functions. Keep in mind that when an
ordinary listener function is called, the standard this keyword is
intentionally set to reference the EventEmitter instance to which the
listener is attached.
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.on('event', function(a, b) {
console.log(a, b, this, this === myEmitter);
// Prints:
// a b MyEmitter {
// _events: [Object: null prototype] { event: [Function (anony
// _eventsCount: 1,
// _maxListeners: undefined,
// [Symbol(kCapture)]: false
// } true
});
myEmitter.emit('event', 'a', 'b');
const EventEmitter = require('node:events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.on('event', function(a, b) {
console.log(a, b, this, this === myEmitter);
// Prints:
// a b MyEmitter {
// _events: [Object: null prototype] { event: [Function (anony
// _eventsCount: 1,
// _maxListeners: undefined,
// [Symbol(kCapture)]: false
// } true
});
myEmitter.emit('event', 'a', 'b');
It is possible to use ES6 Arrow Functions as listeners, however, when
doing so, the this keyword will no longer reference the EventEmitter
instance:
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.on('event', (a, b) => {
console.log(a, b, this);
// Prints: a b undefined
});
myEmitter.emit('event', 'a', 'b');
const EventEmitter = require('node:events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.on('event', (a, b) => {
console.log(a, b, this);
// Prints: a b {}
});
myEmitter.emit('event', 'a', 'b');
Asynchronous vs. synchronous
The EventEmitter calls all listeners synchronously in the order in
which they were registered. This ensures the proper sequencing of
events and helps avoid race conditions and logic errors. When
appropriate, listener functions can switch to an asynchronous mode
of operation using the setImmediate() or process.nextTick() methods:
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.on('event', (a, b) => {
setImmediate(() => {
console.log('this happens asynchronously');
});
});
myEmitter.emit('event', 'a', 'b');
const EventEmitter = require('node:events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.on('event', (a, b) => {
setImmediate(() => {
console.log('this happens asynchronously');
});
});
myEmitter.emit('event', 'a', 'b');
Handling events only once
When a listener is registered using the eventEmitter.on() method,
that listener is invoked every time the named event is emitted.
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
let m = 0;
myEmitter.on('event', () => {
console.log(++m);
});
myEmitter.emit('event');
// Prints: 1
myEmitter.emit('event');
// Prints: 2
const EventEmitter = require('node:events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
let m = 0;
myEmitter.on('event', () => {
console.log(++m);
});
myEmitter.emit('event');
// Prints: 1
myEmitter.emit('event');
// Prints: 2
Using the eventEmitter.once() method, it is possible to register a
listener that is called at most once for a particular event. Once the
event is emitted, the listener is unregistered and then called.
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
let m = 0;
myEmitter.once('event', () => {
console.log(++m);
});
myEmitter.emit('event');
// Prints: 1
myEmitter.emit('event');
// Ignored
const EventEmitter = require('node:events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
let m = 0;
myEmitter.once('event', () => {
console.log(++m);
});
myEmitter.emit('event');
// Prints: 1
myEmitter.emit('event');
// Ignored
Error events
When an error occurs within an EventEmitter instance, the typical
action is for an 'error' event to be emitted. These are treated as
special cases within Node.js.
If an EventEmitter does not have at least one listener registered for
the 'error' event, and an 'error' event is emitted, the error is
thrown, a stack trace is printed, and the Node.js process exits.
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.emit('error', new Error('whoops!'));
// Throws and crashes Node.js
const EventEmitter = require('node:events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.emit('error', new Error('whoops!'));
// Throws and crashes Node.js
To guard against crashing the Node.js process the domain module can
be used. (Note, however, that the node:domain module is deprecated.)
As a best practice, listeners should always be added for the 'error'
events.
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.on('error', (err) => {
console.error('whoops! there was an error');
});
myEmitter.emit('error', new Error('whoops!'));
// Prints: whoops! there was an error
const EventEmitter = require('node:events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.on('error', (err) => {
console.error('whoops! there was an error');
});
myEmitter.emit('error', new Error('whoops!'));
// Prints: whoops! there was an error
It is possible to monitor 'error' events without consuming the
emitted error by installing a listener using the symbol
events.errorMonitor.
import { EventEmitter, errorMonitor } from 'node:events';
const myEmitter = new EventEmitter();
myEmitter.on(errorMonitor, (err) => {
MyMonitoringTool.log(err);
});
myEmitter.emit('error', new Error('whoops!'));
// Still throws and crashes Node.js
const { EventEmitter, errorMonitor } = require('node:events');
const myEmitter = new EventEmitter();
myEmitter.on(errorMonitor, (err) => {
MyMonitoringTool.log(err);
});
myEmitter.emit('error', new Error('whoops!'));
// Still throws and crashes Node.js
Capture rejections of promises
Using async functions with event handlers is problematic, because it
can lead to an unhandled rejection in case of a thrown exception:
import { EventEmitter } from 'node:events';
const ee = new EventEmitter();
ee.on('something', async (value) => {
throw new Error('kaboom');
});
const EventEmitter = require('node:events');
const ee = new EventEmitter();
ee.on('something', async (value) => {
throw new Error('kaboom');
});
Using the captureRejections option in the EventEmitter constructor
or the global setting changes this behavior, installing a
.then(undefined, handler) handler on the Promise. This handler routes
the exception asynchronously to the Symbol.for('nodejs.rejection')
method if there is one, or to the 'error' event handler if there is none.
import { EventEmitter } from 'node:events';
const ee1 = new EventEmitter({ captureRejections: true });
ee1.on('something', async (value) => {
throw new Error('kaboom');
});
ee1.on('error', console.log);
const ee2 = new EventEmitter({ captureRejections: true });
ee2.on('something', async (value) => {
throw new Error('kaboom');
});
ee2[Symbol.for('nodejs.rejection')] = console.log;
const EventEmitter = require('node:events');
const ee1 = new EventEmitter({ captureRejections: true });
ee1.on('something', async (value) => {
throw new Error('kaboom');
});
ee1.on('error', console.log);
const ee2 = new EventEmitter({ captureRejections: true });
ee2.on('something', async (value) => {
throw new Error('kaboom');
});
ee2[Symbol.for('nodejs.rejection')] = console.log;
Setting events.captureRejections = true will change the default for all
new instances of EventEmitter.
import { EventEmitter } from 'node:events';
EventEmitter.captureRejections = true;
const ee1 = new EventEmitter();
ee1.on('something', async (value) => {
throw new Error('kaboom');
});
ee1.on('error', console.log);
const events = require('node:events');
events.captureRejections = true;
const ee1 = new events.EventEmitter();
ee1.on('something', async (value) => {
throw new Error('kaboom');
});
ee1.on('error', console.log);
The 'error' events that are generated by the captureRejections
behavior do not have a catch handler to avoid infinite error loops:
the recommendation is to not use async functions as 'error'
event handlers.
Class: EventEmitter
The EventEmitter class is defined and exposed by the node:events
module:
import { EventEmitter } from 'node:events';
const EventEmitter = require('node:events');
All EventEmitters emit the event 'newListener' when new listeners are
added and 'removeListener' when existing listeners are removed.
It supports the following option:
captureRejections {boolean} It enables automatic capturing of
promise rejection. Default: false.
Event: 'newListener'
eventName {string|symbol} The name of the event being listened
for
listener {Function} The event handler function
The EventEmitter instance will emit its own 'newListener' event
before a listener is added to its internal array of listeners.
Listeners registered for the 'newListener' event are passed the event
name and a reference to the listener being added.
The fact that the event is triggered before adding the listener has a
subtle but important side effect: any additional listeners registered
to the same name within the 'newListener' callback are inserted before
the listener that is in the process of being added.
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
// Only do this once so we don't loop forever
myEmitter.once('newListener', (event, listener) => {
if (event === 'event') {
// Insert a new listener in front
myEmitter.on('event', () => {
console.log('B');
});
}
});
myEmitter.on('event', () => {
console.log('A');
});
myEmitter.emit('event');
// Prints:
// B
// A
const EventEmitter = require('node:events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
// Only do this once so we don't loop forever
myEmitter.once('newListener', (event, listener) => {
if (event === 'event') {
// Insert a new listener in front
myEmitter.on('event', () => {
console.log('B');
});
}
});
myEmitter.on('event', () => {
console.log('A');
});
myEmitter.emit('event');
// Prints:
// B
// A
Event: 'removeListener'
eventName {string|symbol} The event name
listener {Function} The event handler function
The 'removeListener' event is emitted after the listener is removed.
emitter.addListener(eventName, listener)
eventName {string|symbol}
listener {Function}
Alias for emitter.on(eventName, listener).
emitter.emit(eventName[, ...args])
eventName {string|symbol}
...args {any}
Returns: {boolean}
Synchronously calls each of the listeners registered for the event
named eventName, in the order they were registered, passing the
supplied arguments to each.
Returns true if the event had listeners, false otherwise.
import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();
// First listener
myEmitter.on('event', function firstListener() {
console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
const parameters = args.join(', ');
console.log(`event with parameters ${parameters} in third listener`);
});
console.log(myEmitter.listeners('event'));
myEmitter.emit('event', 1, 2, 3, 4, 5);
// Prints:
// [
// [Function: firstListener],
// [Function: secondListener],
// [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
const EventEmitter = require('node:events');
const myEmitter = new EventEmitter();
// First listener
myEmitter.on('event', function firstListener() {
console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
const parameters = args.join(', ');
console.log(`event with parameters ${parameters} in third listener`);
});
console.log(myEmitter.listeners('event'));
myEmitter.emit('event', 1, 2, 3, 4, 5);
// Prints:
// [
// [Function: firstListener],
// [Function: secondListener],
// [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
emitter.eventNames()
Returns: {Array}
Returns an array listing the events for which the emitter has
registered listeners. The values in the array are strings or Symbols.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});
const sym = Symbol('symbol');
myEE.on(sym, () => {});
console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]
const EventEmitter = require('node:events');
const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});
const sym = Symbol('symbol');
myEE.on(sym, () => {});
console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]
emitter.getMaxListeners()
Returns: {integer}
Returns the current max listener value for the EventEmitter which is
either set by emitter.setMaxListeners(n) or defaults to
events.defaultMaxListeners.
emitter.listenerCount(eventName[, listener])
eventName {string|symbol} The name of the event being listened
for
listener {Function} The event handler function
Returns: {integer}
Returns the number of listeners listening for the event named
eventName. If listener is provided, it will return how many times the
listener is found in the list of the listeners of the event.
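For example (the event name and handler below are illustrative):
import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();
const handler = () => {};
myEmitter.on('event', handler);
myEmitter.on('event', handler);
myEmitter.on('event', () => {});
console.log(myEmitter.listenerCount('event'));
// Prints: 3
console.log(myEmitter.listenerCount('event', handler));
// Prints: 2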
emitter.listeners(eventName)
eventName {string|symbol}
Returns: {Function[]}
Returns a copy of the array of listeners for the event named
eventName.
server.on('connection', (stream) => {
console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]
emitter.off(eventName, listener)
eventName {string|symbol}
listener {Function}
Returns: {EventEmitter}
Alias for emitter.removeListener().
emitter.on(eventName, listener)
eventName {string|symbol} The name of the event.
listener {Function} The callback function
Returns: {EventEmitter}
Adds the listener function to the end of the listeners array for the
event named eventName. No checks are made to see if the listener has
already been added. Multiple calls passing the same combination of
eventName and listener will result in the listener being added, and
called, multiple times.
server.on('connection', (stream) => {
console.log('someone connected!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added.
The emitter.prependListener() method can be used as an alternative
to add the event listener to the beginning of the listeners array.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
const EventEmitter = require('node:events');
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
emitter.once(eventName, listener)
eventName {string|symbol} The name of the event.
listener {Function} The callback function
Returns: {EventEmitter}
Adds a one-time listener function for the event named eventName.
The next time eventName is triggered, this listener is removed and
then invoked.
server.once('connection', (stream) => {
console.log('Ah, we have our first user!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added.
The emitter.prependOnceListener() method can be used as an
alternative to add the event listener to the beginning of the listeners
array.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
const EventEmitter = require('node:events');
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
emitter.prependListener(eventName, listener)
eventName {string|symbol} The name of the event.
listener {Function} The callback function
Returns: {EventEmitter}
Adds the listener function to the beginning of the listeners array for
the event named eventName. No checks are made to see if the listener
has already been added. Multiple calls passing the same combination
of eventName and listener will result in the listener being added, and
called, multiple times.
server.prependListener('connection', (stream) => {
console.log('someone connected!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
emitter.prependOnceListener(eventName, listener)
eventName {string|symbol} The name of the event.
listener {Function} The callback function
Returns: {EventEmitter}
Adds a one-time listener function for the event named eventName to
the beginning of the listeners array. The next time eventName is
triggered, this listener is removed, and then invoked.
server.prependOnceListener('connection', (stream) => {
console.log('Ah, we have our first user!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
emitter.removeAllListeners([eventName])
eventName {string|symbol}
Returns: {EventEmitter}
Removes all listeners, or those of the specified eventName.
It is bad practice to remove listeners added elsewhere in the code,
particularly when the EventEmitter instance was created by some
other component or module (e.g. sockets or file streams).
Returns a reference to the EventEmitter, so that calls can be chained.
emitter.removeListener(eventName, listener)
eventName {string|symbol}
listener {Function}
Returns: {EventEmitter}
Removes the specified listener from the listener array for the event
named eventName.
const callback = (stream) => {
console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);
removeListener() will remove, at most, one instance of a listener from
the listener array. If any single listener has been added multiple
times to the listener array for the specified eventName, then
removeListener() must be called multiple times to remove each
instance.
Once an event is emitted, all listeners attached to it at the time of
emitting are called in order. This implies that any removeListener() or
removeAllListeners() calls after emitting and before the last listener
finishes execution will not remove them from emit() in progress.
Subsequent events behave as expected.
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
const callbackA = () => {
console.log('A');
myEmitter.removeListener('event', callbackB);
};
const callbackB = () => {
console.log('B');
};
myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);
// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
// A
// B
// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
// A
const EventEmitter = require('node:events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
const callbackA = () => {
console.log('A');
myEmitter.removeListener('event', callbackB);
};
const callbackB = () => {
console.log('B');
};
myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);
// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
// A
// B
// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
// A
Because listeners are managed using an internal array, calling this
will change the position indices of any listener registered after the
listener being removed. This will not impact the order in which
listeners are called, but it means that any copies of the listener array
as returned by the emitter.listeners() method will need to be
recreated.
When a single function has been added as a handler multiple times
for a single event (as in the example below), removeListener() will
remove the most recently added instance. In the example the
once('ping') listener is removed:
import { EventEmitter } from 'node:events';
const ee = new EventEmitter();
function pong() {
console.log('pong');
}
ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);
ee.emit('ping');
ee.emit('ping');
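// Prints:
// pong
// pong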
const EventEmitter = require('node:events');
const ee = new EventEmitter();
function pong() {
console.log('pong');
}
ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);
ee.emit('ping');
ee.emit('ping');
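// Prints:
// pong
// pong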
Returns a reference to the EventEmitter, so that calls can be chained.
emitter.setMaxListeners(n)
n {integer}
Returns: {EventEmitter}
By default EventEmitters will print a warning if more than 10 listeners
are added for a particular event. This is a useful default that helps
finding memory leaks. The emitter.setMaxListeners() method allows
the limit to be modified for this specific EventEmitter instance. The
value can be set to Infinity (or 0) to indicate an unlimited number of
listeners.
Returns a reference to the EventEmitter, so that calls can be chained.
emitter.rawListeners(eventName)
eventName {string|symbol}
Returns: {Function[]}
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by .once()).
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));
// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];
// Logs "log once" to the console and does not unbind the `once` eve
logFnWrapper.listener();
// Logs "log once" to the console and removes the listener
logFnWrapper();
emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');
// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');
const EventEmitter = require('node:events');
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));
// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];
// Logs "log once" to the console and does not unbind the `once` eve
logFnWrapper.listener();
// Logs "log once" to the console and removes the listener
logFnWrapper();
emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');
// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');
emitter[Symbol.for('nodejs.rejection')](err, eventName[, ...args])
err {Error}
eventName {string|symbol}
...args {any}
The Symbol.for('nodejs.rejection') method is called in case a
promise rejection happens when emitting an event and
captureRejections is enabled on the emitter. It is possible to use
events.captureRejectionSymbol in place of
Symbol.for('nodejs.rejection').
import { EventEmitter, captureRejectionSymbol } from 'node:events';
class MyClass extends EventEmitter {
constructor() {
super({ captureRejections: true });
}
[captureRejectionSymbol](err, event, ...args) {
console.log('rejection happened for', event, 'with', err, ...args);
this.destroy(err);
}
destroy(err) {
// Tear the resource down here.
}
}
const { EventEmitter, captureRejectionSymbol } = require('node:events');
class MyClass extends EventEmitter {
constructor() {
super({ captureRejections: true });
}
[captureRejectionSymbol](err, event, ...args) {
console.log('rejection happened for', event, 'with', err, ...args);
this.destroy(err);
}
destroy(err) {
// Tear the resource down here.
}
}
events.defaultMaxListeners
By default, a maximum of 10 listeners can be registered for any single
event. This limit can be changed for individual EventEmitter instances
using the emitter.setMaxListeners(n) method. To change the default
for all EventEmitter instances, the events.defaultMaxListeners
property can be used. If this value is not a positive number, a
RangeError is thrown.
Take caution when setting the events.defaultMaxListeners because
the change affects all EventEmitter instances, including those created
before the change is made. However, calling
emitter.setMaxListeners(n) still has precedence over
events.defaultMaxListeners.
This is not a hard limit. The EventEmitter instance will allow more
listeners to be added but will output a trace warning to stderr
indicating that a “possible EventEmitter memory leak” has been
detected. For any single EventEmitter, the emitter.getMaxListeners()
and emitter.setMaxListeners() methods can be used to temporarily
avoid this warning:
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
// do stuff
emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});
const EventEmitter = require('node:events');
const emitter = new EventEmitter();
emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
// do stuff
emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});
The --trace-warnings command-line flag can be used to display the
stack trace for such warnings.
The emitted warning can be inspected with process.on('warning')
and will have the additional emitter, type, and count properties,
referring to the event emitter instance, the event’s name and the
number of attached listeners, respectively. Its name property is set to
'MaxListenersExceededWarning'.
events.errorMonitor
This symbol shall be used to install a listener for only monitoring
'error' events. Listeners installed using this symbol are called before
the regular 'error' listeners are called.
Installing a listener using this symbol does not change the behavior
once an 'error' event is emitted. Therefore, the process will still
crash if no regular 'error' listener is installed.
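A minimal sketch of the intended pattern, monitoring 'error' events while a regular listener still handles them (the error message is illustrative):
import { EventEmitter, errorMonitor } from 'node:events';
const ee = new EventEmitter();
ee.on(errorMonitor, (err) => {
  // Called first, for monitoring only; it does not handle the error.
  console.log('monitored:', err.message);
});
ee.on('error', (err) => {
  // The regular listener still handles the error as usual.
  console.log('handled:', err.message);
});
ee.emit('error', new Error('boom'));
// Prints:
// monitored: boom
// handled: boom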
events.getEventListeners(emitterOrTarget, eventName)
emitterOrTarget {EventEmitter|EventTarget}
eventName {string|symbol}
Returns: {Function[]}
Returns a copy of the array of listeners for the event named
eventName.
For EventEmitters this behaves exactly the same as calling .listeners
on the emitter.
For EventTargets this is the only way to get the event listeners for the
event target. This is useful for debugging and diagnostic purposes.
import { getEventListeners, EventEmitter } from 'node:events';
{
const ee = new EventEmitter();
const listener = () => console.log('Events are fun');
ee.on('foo', listener);
console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
}
{
const et = new EventTarget();
const listener = () => console.log('Events are fun');
et.addEventListener('foo', listener);
console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
}
const { getEventListeners, EventEmitter } = require('node:events');
{
const ee = new EventEmitter();
const listener = () => console.log('Events are fun');
ee.on('foo', listener);
console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
}
{
const et = new EventTarget();
const listener = () => console.log('Events are fun');
et.addEventListener('foo', listener);
console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
}
events.getMaxListeners(emitterOrTarget)
emitterOrTarget {EventEmitter|EventTarget}
Returns: {number}
Returns the currently set max amount of listeners.
For EventEmitters this behaves exactly the same as calling
.getMaxListeners on the emitter.
For EventTargets this is the only way to get the max event listeners for
the event target. If the number of event handlers on a single
EventTarget exceeds the max set, the EventTarget will print a
warning.
import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';
{
const ee = new EventEmitter();
console.log(getMaxListeners(ee)); // 10
setMaxListeners(11, ee);
console.log(getMaxListeners(ee)); // 11
}
{
const et = new EventTarget();
console.log(getMaxListeners(et)); // 10
setMaxListeners(11, et);
console.log(getMaxListeners(et)); // 11
}
const { getMaxListeners, setMaxListeners, EventEmitter } = require('node:events');
{
const ee = new EventEmitter();
console.log(getMaxListeners(ee)); // 10
setMaxListeners(11, ee);
console.log(getMaxListeners(ee)); // 11
}
{
const et = new EventTarget();
console.log(getMaxListeners(et)); // 10
setMaxListeners(11, et);
console.log(getMaxListeners(et)); // 11
}
events.once(emitter, name[, options])
emitter {EventEmitter}
name {string}
options {Object}
signal {AbortSignal} Can be used to cancel waiting for the
event.
Returns: {Promise}
Creates a Promise that is fulfilled when the EventEmitter emits the
given event or that is rejected if the EventEmitter emits 'error' while
waiting. The Promise will resolve with an array of all the arguments
emitted to the given event.
This method is intentionally generic and works with the web
platform EventTarget interface, which has no special 'error' event
semantics and does not listen to the 'error' event.
import { once, EventEmitter } from 'node:events';
import process from 'node:process';
const ee = new EventEmitter();
process.nextTick(() => {
ee.emit('myevent', 42);
});
const [value] = await once(ee, 'myevent');
console.log(value);
const err = new Error('kaboom');
process.nextTick(() => {
ee.emit('error', err);
});
try {
await once(ee, 'myevent');
} catch (err) {
console.error('error happened', err);
}
const { once, EventEmitter } = require('node:events');
async function run() {
const ee = new EventEmitter();
process.nextTick(() => {
ee.emit('myevent', 42);
});
const [value] = await once(ee, 'myevent');
console.log(value);
const err = new Error('kaboom');
process.nextTick(() => {
ee.emit('error', err);
});
try {
await once(ee, 'myevent');
} catch (err) {
console.error('error happened', err);
}
}
run();
The special handling of the 'error' event is only used when
events.once() is used to wait for another event. If events.once() is
used to wait for the 'error' event itself, then it is treated as any other
kind of event without special handling:
import { EventEmitter, once } from 'node:events';
const ee = new EventEmitter();
once(ee, 'error')
.then(([err]) => console.log('ok', err.message))
.catch((err) => console.error('error', err.message));
ee.emit('error', new Error('boom'));
// Prints: ok boom
const { EventEmitter, once } = require('node:events');
const ee = new EventEmitter();
once(ee, 'error')
.then(([err]) => console.log('ok', err.message))
.catch((err) => console.error('error', err.message));
ee.emit('error', new Error('boom'));
// Prints: ok boom
An {AbortSignal} can be used to cancel waiting for the event:
import { EventEmitter, once } from 'node:events';
const ee = new EventEmitter();
const ac = new AbortController();
async function foo(emitter, event, signal) {
try {
await once(emitter, event, { signal });
console.log('event emitted!');
} catch (error) {
if (error.name === 'AbortError') {
console.error('Waiting for the event was canceled!');
} else {
console.error('There was an error', error.message);
}
}
}
foo(ee, 'foo', ac.signal);
ac.abort(); // Abort waiting for the event
ee.emit('foo'); // Prints: Waiting for the event was canceled!
const { EventEmitter, once } = require('node:events');
const ee = new EventEmitter();
const ac = new AbortController();
async function foo(emitter, event, signal) {
try {
await once(emitter, event, { signal });
console.log('event emitted!');
} catch (error) {
if (error.name === 'AbortError') {
console.error('Waiting for the event was canceled!');
} else {
console.error('There was an error', error.message);
}
}
}
foo(ee, 'foo', ac.signal);
ac.abort(); // Abort waiting for the event
ee.emit('foo'); // Prints: Waiting for the event was canceled!
Awaiting multiple events emitted on process.nextTick()
There is an edge case worth noting when using the events.once()
function to await multiple events emitted in the same batch of
process.nextTick() operations, or whenever multiple events are
emitted synchronously. Specifically, because the process.nextTick()
queue is drained before the Promise microtask queue, and because
EventEmitter emits all events synchronously, it is possible for
events.once() to miss an event.
import { EventEmitter, once } from 'node:events';
import process from 'node:process';
const myEE = new EventEmitter();
async function foo() {
await once(myEE, 'bar');
console.log('bar');
// This Promise will never resolve because the 'foo' event will
// have already been emitted before the Promise is created.
await once(myEE, 'foo');
console.log('foo');
}
process.nextTick(() => {
myEE.emit('bar');
myEE.emit('foo');
});
foo().then(() => console.log('done'));
const { EventEmitter, once } = require('node:events');
const myEE = new EventEmitter();
async function foo() {
await once(myEE, 'bar');
console.log('bar');
// This Promise will never resolve because the 'foo' event will
// have already been emitted before the Promise is created.
await once(myEE, 'foo');
console.log('foo');
}
process.nextTick(() => {
myEE.emit('bar');
myEE.emit('foo');
});
foo().then(() => console.log('done'));
To catch both events, create each of the Promises before awaiting
either of them, then it becomes possible to use Promise.all(),
Promise.race(), or Promise.allSettled():
import { EventEmitter, once } from 'node:events';
import process from 'node:process';
const myEE = new EventEmitter();
async function foo() {
await Promise.all([once(myEE, 'bar'), once(myEE, 'foo')]);
console.log('foo', 'bar');
}
process.nextTick(() => {
myEE.emit('bar');
myEE.emit('foo');
});
foo().then(() => console.log('done'));
const { EventEmitter, once } = require('node:events');
const myEE = new EventEmitter();
async function foo() {
await Promise.all([once(myEE, 'bar'), once(myEE, 'foo')]);
console.log('foo', 'bar');
}
process.nextTick(() => {
myEE.emit('bar');
myEE.emit('foo');
});
foo().then(() => console.log('done'));
events.captureRejections
Value: {boolean}
Change the default captureRejections option on all new EventEmitter
objects.
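A short sketch of flipping the default (the event name 'something' is illustrative); with captureRejections enabled, a rejected promise from an async listener is routed to the emitter's 'error' event:
import events, { EventEmitter } from 'node:events';
// Affects EventEmitter instances created after this assignment.
events.captureRejections = true;
const ee = new EventEmitter();
ee.on('something', async () => {
  throw new Error('kaboom');
});
ee.on('error', (err) => {
  console.log('caught:', err.message);
});
ee.emit('something');
// Prints: caught: kaboom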
events.captureRejectionSymbol
Value: Symbol.for('nodejs.rejection')
See how to write a custom rejection handler.
events.listenerCount(emitter, eventName)
Stability: 0 - Deprecated: Use emitter.listenerCount() instead.
emitter {EventEmitter} The emitter to query
eventName {string|symbol} The event name
A class method that returns the number of listeners for the given
eventName registered on the given emitter.
import { EventEmitter, listenerCount } from 'node:events';
const myEmitter = new EventEmitter();
myEmitter.on('event', () => {});
myEmitter.on('event', () => {});
console.log(listenerCount(myEmitter, 'event'));
// Prints: 2
const { EventEmitter, listenerCount } = require('node:events');
const myEmitter = new EventEmitter();
myEmitter.on('event', () => {});
myEmitter.on('event', () => {});
console.log(listenerCount(myEmitter, 'event'));
// Prints: 2
events.on(emitter, eventName[, options])
emitter {EventEmitter}
eventName {string|symbol} The name of the event being listened
for
options {Object}
signal {AbortSignal} Can be used to cancel awaiting events.
Returns: {AsyncIterator} that iterates eventName events emitted
by the emitter
import { on, EventEmitter } from 'node:events';
import process from 'node:process';
const ee = new EventEmitter();
// Emit later on
process.nextTick(() => {
ee.emit('foo', 'bar');
ee.emit('foo', 42);
});
for await (const event of on(ee, 'foo')) {
// The execution of this inner block is synchronous and it
// processes one event at a time (even with await). Do not use
// if concurrent execution is required.
console.log(event); // prints ['bar'] [42]
}
// Unreachable here
const { on, EventEmitter } = require('node:events');
(async () => {
const ee = new EventEmitter();
// Emit later on
process.nextTick(() => {
ee.emit('foo', 'bar');
ee.emit('foo', 42);
});
for await (const event of on(ee, 'foo')) {
// The execution of this inner block is synchronous and it
// processes one event at a time (even with await). Do not use
// if concurrent execution is required.
console.log(event); // prints ['bar'] [42]
}
// Unreachable here
})();
Returns an AsyncIterator that iterates eventName events. It will throw
if the EventEmitter emits 'error'. It removes all listeners when exiting
the loop. The value returned by each iteration is an array composed
of the emitted event arguments.
An {AbortSignal} can be used to cancel waiting on events:
import { on, EventEmitter } from 'node:events';
import process from 'node:process';
const ac = new AbortController();
(async () => {
const ee = new EventEmitter();
// Emit later on
process.nextTick(() => {
ee.emit('foo', 'bar');
ee.emit('foo', 42);
});
for await (const event of on(ee, 'foo', { signal: ac.signal })) {
// The execution of this inner block is synchronous and it
// processes one event at a time (even with await). Do not use
// if concurrent execution is required.
console.log(event); // prints ['bar'] [42]
}
// Unreachable here
})();
process.nextTick(() => ac.abort());
const { on, EventEmitter } = require('node:events');
const ac = new AbortController();
(async () => {
const ee = new EventEmitter();
// Emit later on
process.nextTick(() => {
ee.emit('foo', 'bar');
ee.emit('foo', 42);
});
for await (const event of on(ee, 'foo', { signal: ac.signal })) {
// The execution of this inner block is synchronous and it
// processes one event at a time (even with await). Do not use
// if concurrent execution is required.
console.log(event); // prints ['bar'] [42]
}
// Unreachable here
})();
process.nextTick(() => ac.abort());
events.setMaxListeners(n[, ...eventTargets])
n {number} A non-negative number. The maximum number of
listeners per EventTarget event.
...eventTargets {EventTarget[]|EventEmitter[]} Zero or more
{EventTarget} or {EventEmitter} instances. If none are specified,
n is set as the default max for all newly created {EventTarget} and
{EventEmitter} objects.
import { setMaxListeners, EventEmitter } from 'node:events';
const target = new EventTarget();
const emitter = new EventEmitter();
setMaxListeners(5, target, emitter);
const {
setMaxListeners,
EventEmitter,
} = require('node:events');
const target = new EventTarget();
const emitter = new EventEmitter();
setMaxListeners(5, target, emitter);
events.addAbortListener(signal, listener)
Stability: 1 - Experimental
signal {AbortSignal}
listener {Function|EventListener}
Returns: {Disposable} that removes the abort listener.
Listens once to the abort event on the provided signal.
Listening to the abort event on abort signals is unsafe and may lead
to resource leaks since another third party with the signal can call
e.stopImmediatePropagation(). Unfortunately Node.js cannot change
this since it would violate the web standard. Additionally, the
original API makes it easy to forget to remove listeners.
This API allows safely using AbortSignals in Node.js APIs by solving
these two issues: it listens to the abort event in a way that
stopImmediatePropagation does not prevent the listener from running.
Returns a disposable so that it may be unsubscribed from more
easily.
const { addAbortListener } = require('node:events');
function example(signal) {
let disposable;
try {
signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
disposable = addAbortListener(signal, (e) => {
// Do something when signal is aborted.
});
} finally {
disposable?.[Symbol.dispose]();
}
}
import { addAbortListener } from 'node:events';
function example(signal) {
let disposable;
try {
signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
disposable = addAbortListener(signal, (e) => {
// Do something when signal is aborted.
});
} finally {
disposable?.[Symbol.dispose]();
}
}
Class: events.EventEmitterAsyncResource extends EventEmitter
Integrates EventEmitter with {AsyncResource} for EventEmitters that
require manual async tracking. Specifically, all events emitted by
instances of events.EventEmitterAsyncResource will run within its
async context.
import { EventEmitterAsyncResource, EventEmitter } from 'node:events';
import { notStrictEqual, strictEqual } from 'node:assert';
import { executionAsyncId, triggerAsyncId } from 'node:async_hooks';
// Async tracking tooling will identify this as 'Q'.
const ee1 = new EventEmitterAsyncResource({ name: 'Q' });
// 'foo' listeners will run in the EventEmitter's async context.
ee1.on('foo', () => {
strictEqual(executionAsyncId(), ee1.asyncId);
strictEqual(triggerAsyncId(), ee1.triggerAsyncId);
});
const ee2 = new EventEmitter();
// 'foo' listeners on ordinary EventEmitters that do not track async
// context, however, run in the same async context as the emit().
ee2.on('foo', () => {
notStrictEqual(executionAsyncId(), ee2.asyncId);
notStrictEqual(triggerAsyncId(), ee2.triggerAsyncId);
});
Promise.resolve().then(() => {
ee1.emit('foo');
ee2.emit('foo');
});
const { EventEmitterAsyncResource, EventEmitter } = require('node:events');
const { notStrictEqual, strictEqual } = require('node:assert');
const { executionAsyncId, triggerAsyncId } = require('node:async_hooks');
// Async tracking tooling will identify this as 'Q'.
const ee1 = new EventEmitterAsyncResource({ name: 'Q' });
// 'foo' listeners will run in the EventEmitter's async context.
ee1.on('foo', () => {
strictEqual(executionAsyncId(), ee1.asyncId);
strictEqual(triggerAsyncId(), ee1.triggerAsyncId);
});
const ee2 = new EventEmitter();
// 'foo' listeners on ordinary EventEmitters that do not track async
// context, however, run in the same async context as the emit().
ee2.on('foo', () => {
notStrictEqual(executionAsyncId(), ee2.asyncId);
notStrictEqual(triggerAsyncId(), ee2.triggerAsyncId);
});
Promise.resolve().then(() => {
ee1.emit('foo');
ee2.emit('foo');
});
The EventEmitterAsyncResource class has the same methods and takes
the same options as EventEmitter and AsyncResource themselves.
new events.EventEmitterAsyncResource([options])
options {Object}
captureRejections {boolean} It enables automatic capturing of
promise rejection. Default: false.
name {string} The type of async event. Default:
new.target.name.
triggerAsyncId {number} The ID of the execution context that
created this async event. Default: executionAsyncId().
requireManualDestroy {boolean} If set to true, disables
emitDestroy when the object is garbage collected. This usually
does not need to be set (even if emitDestroy is called
manually), unless the resource’s asyncId is retrieved and the
sensitive API’s emitDestroy is called with it. When set to false,
the emitDestroy call on garbage collection will only take place
if there is at least one active destroy hook. Default: false.
eventemitterasyncresource.asyncId
Type: {number} The unique asyncId assigned to the resource.
eventemitterasyncresource.asyncResource
Type: The underlying {AsyncResource}.
The returned AsyncResource object has an additional eventEmitter
property that provides a reference to this EventEmitterAsyncResource.
eventemitterasyncresource.emitDestroy()
Call all destroy hooks. This should only ever be called once. An error
will be thrown if it is called more than once. This must be manually
called. If the resource is left to be collected by the GC then the
destroy hooks will never be called.
eventemitterasyncresource.triggerAsyncId
Type: {number} The same triggerAsyncId that is passed to the
AsyncResource constructor.
EventTarget and Event API
The EventTarget and Event objects are a Node.js-specific
implementation of the EventTarget Web API that are exposed by
some Node.js core APIs.
const target = new EventTarget();
target.addEventListener('foo', (event) => {
console.log('foo event happened!');
});
Node.js EventTarget vs. DOM EventTarget
There are two key differences between the Node.js EventTarget and
the EventTarget Web API:
1. Whereas DOM EventTarget instances may be hierarchical, there
is no concept of hierarchy and event propagation in Node.js. That
is, an event dispatched to an EventTarget does not propagate
through a hierarchy of nested target objects that may each have
their own set of handlers for the event.
2. In the Node.js EventTarget, if an event listener is an async
function or returns a Promise, and the returned Promise rejects,
the rejection is automatically captured and handled the same way
as a listener that throws synchronously (see EventTarget error
handling for details).
NodeEventTarget vs. EventEmitter
The NodeEventTarget object implements a modified subset of the
EventEmitter API that allows it to closely emulate an EventEmitter in
certain situations. A NodeEventTarget is not an instance of
EventEmitter and cannot be used in place of an EventEmitter in most
cases.
1. Unlike EventEmitter, any given listener can be registered at most
once per event type. Attempts to register a listener multiple
times are ignored.
2. The NodeEventTarget does not emulate the full EventEmitter API.
Specifically the prependListener(), prependOnceListener(),
rawListeners(), and errorMonitor APIs are not emulated. The
'newListener' and 'removeListener' events will also not be
emitted.
3. The NodeEventTarget does not implement any special default
behavior for events with type 'error'.
4. The NodeEventTarget supports EventListener objects as well as
functions as handlers for all event types.
Event listener
Event listeners registered for an event type may either be JavaScript
functions or objects with a handleEvent property whose value is a
function.
In either case, the handler function is invoked with the event
argument passed to the eventTarget.dispatchEvent() function.
Async functions may be used as event listeners. If an async handler
function rejects, the rejection is captured and handled as described
in EventTarget error handling.
An error thrown by one handler function does not prevent the other
handlers from being invoked.
The return value of a handler function is ignored.
Handlers are always invoked in the order they were added.
Handler functions may mutate the event object.
function handler1(event) {
console.log(event.type); // Prints 'foo'
event.a = 1;
}
async function handler2(event) {
console.log(event.type); // Prints 'foo'
console.log(event.a); // Prints 1
}
const handler3 = {
handleEvent(event) {
console.log(event.type); // Prints 'foo'
},
};
const handler4 = {
async handleEvent(event) {
console.log(event.type); // Prints 'foo'
},
};
const target = new EventTarget();
target.addEventListener('foo', handler1);
target.addEventListener('foo', handler2);
target.addEventListener('foo', handler3);
target.addEventListener('foo', handler4, { once: true });
EventTarget error handling
When a registered event listener throws (or returns a Promise that
rejects), by default the error is treated as an uncaught exception on
process.nextTick(). This means uncaught exceptions in EventTargets
will terminate the Node.js process by default.
Throwing within an event listener will not stop the other registered
handlers from being invoked.
The EventTarget does not implement any special default handling for
'error' type events like EventEmitter.
Currently errors are first forwarded to the process.on('error') event
before reaching process.on('uncaughtException'). This behavior is
deprecated and will change in a future release to align EventTarget
with other Node.js APIs. Any code relying on the process.on('error')
event should be aligned with the new behavior.
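A sketch of the behavior described above (the event type and messages are illustrative):
const target = new EventTarget();
target.addEventListener('foo', () => {
  throw new Error('listener failed');
});
target.addEventListener('foo', () => {
  // Still runs: a throwing listener does not stop the others.
  console.log('second listener ran');
});
// Without this handler, the error would surface on process.nextTick()
// as an uncaught exception and terminate the process.
process.on('uncaughtException', (err) => {
  console.log('uncaught:', err.message);
});
target.dispatchEvent(new Event('foo'));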
Class: Event
The Event object is an adaptation of the Event Web API. Instances are
created internally by Node.js.
event.bubbles
Type: {boolean} Always returns false.
This is not used in Node.js and is provided purely for completeness.
event.cancelBubble
Stability: 3 - Legacy: Use event.stopPropagation() instead.
Type: {boolean}
Alias for event.stopPropagation() if set to true. This is not used in
Node.js and is provided purely for completeness.
event.cancelable
Type: {boolean} True if the event was created with the cancelable
option.
event.composed
Type: {boolean} Always returns false.
This is not used in Node.js and is provided purely for completeness.
event.composedPath()
Returns an array containing the current EventTarget as the only entry
or empty if the event is not being dispatched. This is not used in
Node.js and is provided purely for completeness.
event.currentTarget
Type: {EventTarget} The EventTarget dispatching the event.
Alias for event.target.
event.defaultPrevented
Type: {boolean}
Is true if cancelable is true and event.preventDefault() has been
called.
event.eventPhase
Type: {number} Returns 0 while an event is not being
dispatched, 2 while it is being dispatched.
This is not used in Node.js and is provided purely for completeness.
event.initEvent(type[, bubbles[, cancelable]])
Stability: 3 - Legacy: The WHATWG spec considers it deprecated
and users shouldn’t use it at all.
type {string}
bubbles {boolean}
cancelable {boolean}
Redundant with event constructors and incapable of setting composed.
This is not used in Node.js and is provided purely for completeness.
event.isTrusted
Type: {boolean}
The {AbortSignal} "abort" event is emitted with isTrusted set to true.
The value is false in all other cases.
event.preventDefault()
Sets the defaultPrevented property to true if cancelable is true.
event.returnValue
Stability: 3 - Legacy: Use event.defaultPrevented instead.
Type: {boolean} True if the event has not been canceled.
The value of event.returnValue is always the opposite of
event.defaultPrevented. This is not used in Node.js and is provided
purely for completeness.
event.srcElement
Stability: 3 - Legacy: Use event.target instead.
Type: {EventTarget} The EventTarget dispatching the event.
Alias for event.target.
event.stopImmediatePropagation()
Stops the invocation of event listeners after the current one
completes.
event.stopPropagation()
This is not used in Node.js and is provided purely for completeness.
event.target
Type: {EventTarget} The EventTarget dispatching the event.
event.timeStamp
Type: {number}
The millisecond timestamp when the Event object was created.
event.type
Type: {string}
The event type identifier.
Class: EventTarget
eventTarget.addEventListener(type, listener[, options])
type {string}
listener {Function|EventListener}
options {Object}
once {boolean} When true, the listener is automatically
removed when it is first invoked. Default: false.
passive {boolean} When true, serves as a hint that the
listener will not call the Event object’s preventDefault()
method. Default: false.
capture {boolean} Not directly used by Node.js. Added for
API completeness. Default: false.
signal {AbortSignal} The listener will be removed when the
given AbortSignal object’s abort() method is called.
Adds a new handler for the type event. Any given listener is added
only once per type and per capture option value.
If the once option is true, the listener is removed after the next time a
type event is dispatched.
The capture option is not used by Node.js in any functional way other
than tracking registered event listeners per the EventTarget
specification. Specifically, the capture option is used as part of the
key when registering a listener. Any individual listener may be
added once with capture = false, and once with capture = true.
function handler(event) {}
const target = new EventTarget();
target.addEventListener('foo', handler, { capture: true }); // first
target.addEventListener('foo', handler, { capture: false }); // second
// Removes the second instance of handler
target.removeEventListener('foo', handler);
// Removes the first instance of handler
target.removeEventListener('foo', handler, { capture: true });
eventTarget.dispatchEvent(event)
event {Event}
Returns: {boolean} true if either event’s cancelable attribute
value is false or its preventDefault() method was not invoked,
otherwise false.
Dispatches the event to the list of handlers for event.type.
The registered event listeners are synchronously invoked in the order
they were registered.
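For example (the event type is illustrative):
const target = new EventTarget();
target.addEventListener('ping', (event) => {
  console.log(`dispatched: ${event.type}`);
});
const notCanceled = target.dispatchEvent(new Event('ping'));
console.log(notCanceled);
// Prints:
// dispatched: ping
// true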
eventTarget.removeEventListener(type, listener[, options])
type {string}
listener {Function|EventListener}
options {Object}
capture {boolean}
Removes the listener from the list of handlers for event type.
Class: CustomEvent
Stability: 1 - Experimental.
Extends: {Event}
The CustomEvent object is an adaptation of the CustomEvent Web API.
Instances are created internally by Node.js.
event.detail
Stability: 1 - Experimental.
Type: {any} Returns custom data passed when initializing.
Read-only.
Class: NodeEventTarget
Extends: {EventTarget}
The NodeEventTarget is a Node.js-specific extension to EventTarget
that emulates a subset of the EventEmitter API.
nodeEventTarget.addListener(type, listener)
type {string}
listener {Function|EventListener}
Returns: {EventTarget} this
Node.js-specific extension to the EventTarget class that emulates the
equivalent EventEmitter API. The only difference between
addListener() and addEventListener() is that addListener() will return
a reference to the EventTarget.
nodeEventTarget.emit(type, arg)
type {string}
arg {any}
Returns: {boolean} true if event listeners registered for the type
exist, otherwise false.
Node.js-specific extension to the EventTarget class that dispatches the
arg to the list of handlers for type.
nodeEventTarget.eventNames()
Returns: {string[]}
Node.js-specific extension to the EventTarget class that returns an
array of event type names for which event listeners are registered.
nodeEventTarget.listenerCount(type)
type {string}
Returns: {number}
Node.js-specific extension to the EventTarget class that returns the
number of event listeners registered for the type.
nodeEventTarget.setMaxListeners(n)
n {number}
Node.js-specific extension to the EventTarget class that sets the
number of max event listeners as n.
nodeEventTarget.getMaxListeners()
Returns: {number}
Node.js-specific extension to the EventTarget class that returns the
number of max event listeners.
nodeEventTarget.off(type, listener[, options])
type {string}
listener {Function|EventListener}
options {Object}
capture {boolean}
Returns: {EventTarget} this
Node.js-specific alias for eventTarget.removeEventListener().
nodeEventTarget.on(type, listener)
type {string}
listener {Function|EventListener}
Returns: {EventTarget} this
Node.js-specific alias for eventTarget.addEventListener().
nodeEventTarget.once(type, listener)
type {string}
listener {Function|EventListener}
Returns: {EventTarget} this
Node.js-specific extension to the EventTarget class that adds a once
listener for the given event type. This is equivalent to calling on with
the once option set to true.
nodeEventTarget.removeAllListeners([type])
type {string}
Returns: {EventTarget} this
Node.js-specific extension to the EventTarget class. If type is
specified, removes all registered listeners for type, otherwise removes
all registered listeners.
nodeEventTarget.removeListener(type, listener[, options])
type {string}
listener {Function|EventListener}
options {Object}
capture {boolean}
Returns: {EventTarget} this
Node.js-specific extension to the EventTarget class that removes the
listener for the given type. The only difference between
removeListener() and removeEventListener() is that removeListener()
will return a reference to the EventTarget.
File system
Stability: 2 - Stable
The node:fs module enables interacting with the file system in a way
modeled on standard POSIX functions.
To use the promise-based APIs:
import * as fs from 'node:fs/promises';
const fs = require('node:fs/promises');
To use the callback and sync APIs:
import * as fs from 'node:fs';
const fs = require('node:fs');
All file system operations have synchronous, callback, and promise-
based forms, and are accessible using both CommonJS syntax and
ES6 Modules (ESM).
Promise example
Promise-based operations return a promise that is fulfilled when the
asynchronous operation is complete.
import { unlink } from 'node:fs/promises';
try {
await unlink('/tmp/hello');
console.log('successfully deleted /tmp/hello');
} catch (error) {
console.error('there was an error:', error.message);
}
const { unlink } = require('node:fs/promises');
(async function(path) {
try {
await unlink(path);
console.log(`successfully deleted ${path}`);
} catch (error) {
console.error('there was an error:', error.message);
}
})('/tmp/hello');
Callback example
The callback form takes a completion callback function as its last
argument and invokes the operation asynchronously. The arguments
passed to the completion callback depend on the method, but the
first argument is always reserved for an exception. If the operation is
completed successfully, then the first argument is null or undefined.
import { unlink } from 'node:fs';
unlink('/tmp/hello', (err) => {
if (err) throw err;
console.log('successfully deleted /tmp/hello');
});
const { unlink } = require('node:fs');
unlink('/tmp/hello', (err) => {
if (err) throw err;
console.log('successfully deleted /tmp/hello');
});
The callback-based versions of the node:fs module APIs are
preferable over the use of the promise APIs when maximal
performance (both in terms of execution time and memory
allocation) is required.
Synchronous example
The synchronous APIs block the Node.js event loop and further
JavaScript execution until the operation is complete. Exceptions are
thrown immediately and can be handled using try…catch, or can be
allowed to bubble up.
import { unlinkSync } from 'node:fs';
try {
unlinkSync('/tmp/hello');
console.log('successfully deleted /tmp/hello');
} catch (err) {
// handle the error
}
const { unlinkSync } = require('node:fs');
try {
unlinkSync('/tmp/hello');
console.log('successfully deleted /tmp/hello');
} catch (err) {
// handle the error
}
Promises API
The fs/promises API provides asynchronous file system methods that
return promises.
The promise APIs use the underlying Node.js threadpool to perform
file system operations off the event loop thread. These operations are
not synchronized or threadsafe. Care must be taken when
performing multiple concurrent modifications on the same file or
data corruption may occur.
Class: FileHandle
A {FileHandle} object is an object wrapper for a numeric file
descriptor.
Instances of the {FileHandle} object are created by the
fsPromises.open() method.
All {FileHandle} objects are {EventEmitter}s.
If a {FileHandle} is not closed using the filehandle.close() method,
it will try to automatically close the file descriptor and emit a process
warning, helping to prevent memory leaks. Please do not rely on this
behavior because it can be unreliable and the file may not be closed.
Instead, always explicitly close {FileHandle}s. Node.js may change
this behavior in the future.
Event: 'close'
The 'close' event is emitted when the {FileHandle} has been closed
and can no longer be used.
filehandle.appendFile(data[, options])
data {string|Buffer|TypedArray|DataView|AsyncIterable|Iterable|Stream}
options {Object|string}
encoding {string|null} Default: 'utf8'
flush {boolean} If true, the underlying file descriptor is
flushed prior to closing it. Default: false.
Returns: {Promise} Fulfills with undefined upon success.
Alias of filehandle.writeFile().
When operating on file handles, the mode cannot be changed from
what it was set to with fsPromises.open(). Therefore, this is
equivalent to filehandle.writeFile().
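A minimal sketch (the file path is hypothetical):
import { open } from 'node:fs/promises';
// 'a' opens the file for appending, creating it if it does not exist.
const filehandle = await open('/tmp/app.log', 'a');
await filehandle.appendFile('one more line\n', { encoding: 'utf8' });
await filehandle.close();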
filehandle.chmod(mode)
mode {integer} The file mode bit mask.
Returns: {Promise} Fulfills with undefined upon success.
Modifies the permissions on the file. See chmod(2).
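For example, assuming a hypothetical script file:
import { open } from 'node:fs/promises';
const filehandle = await open('deploy.sh', 'r+');
await filehandle.chmod(0o755); // rwxr-xr-x
await filehandle.close();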
filehandle.chown(uid, gid)
uid {integer} The file’s new owner’s user id.
gid {integer} The file’s new group’s group id.
Returns: {Promise} Fulfills with undefined upon success.
Changes the ownership of the file. A wrapper for chown(2).
filehandle.close()
Returns: {Promise} Fulfills with undefined upon success.
Closes the file handle after waiting for any pending operation on the
handle to complete.
import { open } from 'node:fs/promises';
let filehandle;
try {
filehandle = await open('thefile.txt', 'r');
} finally {
await filehandle?.close();
}
filehandle.createReadStream([options])
options {Object}
encoding {string} Default: null
autoClose {boolean} Default: true
emitClose {boolean} Default: true
start {integer}
end {integer} Default: Infinity
highWaterMark {integer} Default: 64 * 1024
Returns: {fs.ReadStream}
Unlike the 16 KiB default highWaterMark for a {stream.Readable}, the
stream returned by this method has a default highWaterMark of 64
KiB.
options can include start and end values to read a range of bytes from
the file instead of the entire file. Both start and end are inclusive and
start counting at 0, allowed values are in the [0,
Number.MAX_SAFE_INTEGER] range. If start is omitted or undefined,
filehandle.createReadStream() reads sequentially from the current file
position. The encoding can be any one of those accepted by {Buffer}.
If the FileHandle points to a character device that only supports
blocking reads (such as keyboard or sound card), read operations do
not finish until data is available. This can prevent the process from
exiting and the stream from closing naturally.
By default, the stream will emit a 'close' event after it has been
destroyed. Set the emitClose option to false to change this behavior.
import { open } from 'node:fs/promises';
const fd = await open('/dev/input/event0');
// Create a stream from some character device.
const stream = fd.createReadStream();
setTimeout(() => {
stream.close(); // This may not close the stream.
// Artificially marking end-of-stream, as if the underlying resource had
// indicated end-of-file by itself, allows the stream to close.
// This does not cancel pending read operations, and if there is such an
// operation, the process may still not be able to exit successfully
// until it finishes.
stream.push(null);
stream.read(0);
}, 100);
If autoClose is false, then the file descriptor won’t be closed, even if
there’s an error. It is the application’s responsibility to close it and
make sure there’s no file descriptor leak. If autoClose is set to true
(default behavior), on 'error' or 'end' the file descriptor will be
closed automatically.
An example to read the last 10 bytes of a file which is 100 bytes long:
import { open } from 'node:fs/promises';
const fd = await open('sample.txt');
fd.createReadStream({ start: 90, end: 99 });
filehandle.createWriteStream([options])
options {Object}
encoding {string} Default: 'utf8'
autoClose {boolean} Default: true
emitClose {boolean} Default: true
start {integer}
highWaterMark {number} Default: 16384
flush {boolean} If true, the underlying file descriptor is
flushed prior to closing it. Default: false.
Returns: {fs.WriteStream}
options may also include a start option to allow writing data at some
position past the beginning of the file, allowed values are in the [0,
Number.MAX_SAFE_INTEGER] range. Modifying a file rather than replacing
it may require the flags open option to be set to r+ rather than the
default r. The encoding can be any one of those accepted by {Buffer}.
If autoClose is set to true (default behavior) on 'error' or 'finish' the
file descriptor will be closed automatically. If autoClose is false, then
the file descriptor won’t be closed, even if there’s an error. It is the
application’s responsibility to close it and make sure there’s no file
descriptor leak.
By default, the stream will emit a 'close' event after it has been
destroyed. Set the emitClose option to false to change this behavior.
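A minimal sketch (the file path is hypothetical):
import { open } from 'node:fs/promises';
const filehandle = await open('/tmp/output.txt', 'w');
const stream = filehandle.createWriteStream({ encoding: 'utf8' });
stream.write('hello ');
// With autoClose left at true, the file descriptor is closed on 'finish'.
stream.end('world\n');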
filehandle.datasync()
Returns: {Promise} Fulfills with undefined upon success.
Forces all currently queued I/O operations associated with the file to
the operating system’s synchronized I/O completion state. Refer to
the POSIX fdatasync(2) documentation for details.
Unlike filehandle.sync this method does not flush modified
metadata.
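A sketch of a typical use, flushing written data before closing (the file path is hypothetical):
import { open } from 'node:fs/promises';
const filehandle = await open('/tmp/journal.dat', 'w');
await filehandle.write(Buffer.from('record\n'));
// Flush the file's data (but not necessarily all metadata) to disk.
await filehandle.datasync();
await filehandle.close();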
filehandle.fd
{number} The numeric file descriptor managed by the
{FileHandle} object.
filehandle.read(buffer, offset, length, position)
buffer {Buffer|TypedArray|DataView} A buffer that will be filled
with the file data read.
offset {integer} The location in the buffer at which to start filling.
length {integer} The number of bytes to read.
position {integer|bigint|null} The location where to begin
reading data from the file. If null or -1, data will be read from the
current file position, and the position will be updated. If position
is a non-negative integer, the current file position will remain
unchanged.
Returns: {Promise} Fulfills upon success with an object with two
properties:
bytesRead {integer} The number of bytes read
buffer {Buffer|TypedArray|DataView} A reference to the
passed in buffer argument.
Reads data from the file and stores that in the given buffer.
If the file is not modified concurrently, the end-of-file is reached
when the number of bytes read is zero.
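For example, reading the first kibibyte of a hypothetical file:
import { open } from 'node:fs/promises';
const filehandle = await open('sample.txt', 'r');
const buffer = Buffer.alloc(1024);
// Read up to 1024 bytes from the start of the file into buffer.
const { bytesRead } = await filehandle.read(buffer, 0, buffer.length, 0);
console.log(`read ${bytesRead} bytes`);
await filehandle.close();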
filehandle.read([options])
options {Object}
buffer {Buffer|TypedArray|DataView} A buffer that will be
filled with the file data read. Default: Buffer.alloc(16384)
offset {integer} The location in the buffer at which to start
filling. Default: 0
length {integer} The number of bytes to read. Default:
buffer.byteLength - offset
position {integer|bigint|null} The location where to begin
reading data from the file. If null or -1, data will be read from
the current file position, and the position will be updated. If
position is a non-negative integer, the current file position
will remain unchanged. Default: null
Returns: {Promise} Fulfills upon success with an object with two
properties:
bytesRead {integer} The number of bytes read
buffer {Buffer|TypedArray|DataView} A reference to the
passed in buffer argument.
Reads data from the file and stores that in the given buffer.
If the file is not modified concurrently, the end-of-file is reached
when the number of bytes read is zero.
filehandle.read(buffer[, options])
buffer {Buffer|TypedArray|DataView} A buffer that will be filled
with the file data read.
options {Object}
offset {integer} The location in the buffer at which to start
filling. Default: 0
length {integer} The number of bytes to read. Default:
buffer.byteLength - offset
position {integer|bigint|null} The location where to begin
reading data from the file. If null or -1, data will be read from
the current file position, and the position will be updated. If
position is a non-negative integer, the current file position
will remain unchanged. Default: null
Returns: {Promise} Fulfills upon success with an object with two
properties:
bytesRead {integer} The number of bytes read
buffer {Buffer|TypedArray|DataView} A reference to the
passed in buffer argument.
Reads data from the file and stores that in the given buffer.
If the file is not modified concurrently, the end-of-file is reached
when the number of bytes read is zero.
filehandle.readableWebStream([options])
Stability: 1 - Experimental
options {Object}
type {string|undefined} Whether to open a normal or a
'bytes' stream. Default: undefined
Returns: {ReadableStream}
Returns a ReadableStream that may be used to read the file's data.
An error will be thrown if this method is called more than once or is
called after the FileHandle is closed or closing.
import {
open,
} from 'node:fs/promises';
const file = await open('./some/file/to/read');
for await (const chunk of file.readableWebStream())
console.log(chunk);
await file.close();
const {
open,
} = require('node:fs/promises');
(async () => {
const file = await open('./some/file/to/read');
for await (const chunk of file.readableWebStream())
console.log(chunk);
await file.close();
})();
While the ReadableStream will read the file to completion, it will not
close the FileHandle automatically. User code must still call the
fileHandle.close() method.
filehandle.readFile(options)
options {Object|string}
encoding {string|null} Default: null
signal {AbortSignal} allows aborting an in-progress readFile
Returns: {Promise} Fulfills upon a successful read with the
contents of the file. If no encoding is specified (using
options.encoding), the data is returned as a {Buffer} object.
Otherwise, the data will be a string.
Asynchronously reads the entire contents of a file.
If options is a string, then it specifies the encoding.
The {FileHandle} has to support reading.
If one or more filehandle.read() calls are made on a file handle and
then a filehandle.readFile() call is made, the data will be read from
the current position till the end of the file. It doesn’t always read
from the beginning of the file.
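For example (the file name is hypothetical):
import { open } from 'node:fs/promises';
const filehandle = await open('config.json', 'r');
const contents = await filehandle.readFile({ encoding: 'utf8' });
console.log(contents);
await filehandle.close();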
filehandle.readLines([options])
options {Object}
encoding {string} Default: null
autoClose {boolean} Default: true
emitClose {boolean} Default: true
start {integer}
end {integer} Default: Infinity
highWaterMark {integer} Default: 64 * 1024
Returns: {readline.InterfaceConstructor}
Convenience method to create a readline interface and stream over
the file. See filehandle.createReadStream() for the options.
import { open } from 'node:fs/promises';
const file = await open('./some/file/to/read');
for await (const line of file.readLines()) {
console.log(line);
}
const { open } = require('node:fs/promises');
(async () => {
const file = await open('./some/file/to/read');
for await (const line of file.readLines()) {
console.log(line);
}
})();
filehandle.readv(buffers[, position])
buffers {Buffer[]|TypedArray[]|DataView[]}
position {integer|null} The offset from the beginning of the file
where the data should be read from. If position is not a number,
the data will be read from the current position. Default: null
Returns: {Promise} Fulfills upon success with an object containing
two properties:
bytesRead {integer} the number of bytes read
buffers {Buffer[]|TypedArray[]|DataView[]} property
containing a reference to the buffers input.
Read from a file and write to an array of {ArrayBufferView}s.
filehandle.stat([options])
options {Object}
bigint {boolean} Whether the numeric values in the returned
{fs.Stats} object should be bigint. Default: false.
Returns: {Promise} Fulfills with an {fs.Stats} for the file.
filehandle.sync()
Returns: {Promise} Fulfills with undefined upon success.
Request that all data for the open file descriptor is flushed to the
storage device. The specific implementation is operating system and
device specific. Refer to the POSIX fsync(2) documentation for more
detail.
filehandle.truncate(len)
len {integer} Default: 0
Returns: {Promise} Fulfills with undefined upon success.
Truncates the file.
If the file was larger than len bytes, only the first len bytes will be
retained in the file.
The following example retains only the first four bytes of the file:
import { open } from 'node:fs/promises';
let filehandle = null;
try {
filehandle = await open('temp.txt', 'r+');
await filehandle.truncate(4);
} finally {
await filehandle?.close();
}
If the file previously was shorter than len bytes, it is extended, and
the extended part is filled with null bytes ('\0'):
If len is negative then 0 will be used.
filehandle.utimes(atime, mtime)
atime {number|string|Date}
mtime {number|string|Date}
Returns: {Promise}
Change the file system timestamps of the object referenced by the
{FileHandle} then fulfills the promise with no arguments upon
success.
filehandle.write(buffer, offset[, length[,
position]])
buffer {Buffer|TypedArray|DataView}
offset {integer} The start position from within buffer where the
data to write begins.
length {integer} The number of bytes from buffer to write.
Default: buffer.byteLength - offset
position {integer|null} The offset from the beginning of the file
where the data from buffer should be written. If position is not a
number, the data will be written at the current position. See the
POSIX pwrite(2) documentation for more detail. Default: null
Returns: {Promise}
Write buffer to the file.
The promise is fulfilled with an object containing two properties:
bytesWritten {integer} the number of bytes written
buffer {Buffer|TypedArray|DataView} a reference to the buffer
written.
It is unsafe to use filehandle.write() multiple times on the same file
without waiting for the promise to be fulfilled (or rejected). For this
scenario, use filehandle.createWriteStream().
On Linux, positional writes do not work when the file is opened in
append mode. The kernel ignores the position argument and always
appends the data to the end of the file.
filehandle.write(buffer[, options])
buffer {Buffer|TypedArray|DataView}
options {Object}
offset {integer} Default: 0
length {integer} Default: buffer.byteLength - offset
position {integer} Default: null
Returns: {Promise}
Write buffer to the file.
Similar to the filehandle.write function above, this version takes an
optional options object. If no options object is specified, it defaults
to the values above.
filehandle.write(string[, position[, encoding]])
string {string}
position {integer|null} The offset from the beginning of the file
where the data from string should be written. If position is not a
number the data will be written at the current position. See the
POSIX pwrite(2) documentation for more detail. Default: null
encoding {string} The expected string encoding. Default: 'utf8'
Returns: {Promise}
Write string to the file. If string is not a string, the promise is
rejected with an error.
The promise is fulfilled with an object containing two properties:
bytesWritten {integer} the number of bytes written
buffer {string} a reference to the string written.
It is unsafe to use filehandle.write() multiple times on the same file
without waiting for the promise to be fulfilled (or rejected). For this
scenario, use filehandle.createWriteStream().
On Linux, positional writes do not work when the file is opened in
append mode. The kernel ignores the position argument and always
appends the data to the end of the file.
filehandle.writeFile(data, options)
data
{string|Buffer|TypedArray|DataView|AsyncIterable|Iterable|Stream}
options {Object|string}
encoding {string|null} The expected character encoding when
data is a string. Default: 'utf8'
Returns: {Promise}
Asynchronously writes data to a file, replacing the file if it already
exists. data can be a string, a buffer, an {AsyncIterable}, or an
{Iterable} object. The promise is fulfilled with no arguments upon
success.
If options is a string, then it specifies the encoding.
The {FileHandle} has to support writing.
It is unsafe to use filehandle.writeFile() multiple times on the same
file without waiting for the promise to be fulfilled (or rejected).
If one or more filehandle.write() calls are made on a file handle and
then a filehandle.writeFile() call is made, the data will be written
from the current position till the end of the file. It doesn’t always
write from the beginning of the file.
filehandle.writev(buffers[, position])
buffers {Buffer[]|TypedArray[]|DataView[]}
position {integer|null} The offset from the beginning of the file
where the data from buffers should be written. If position is not a
number, the data will be written at the current position. Default:
null
Returns: {Promise}
Write an array of {ArrayBufferView}s to the file.
The promise is fulfilled with an object containing two properties:
bytesWritten {integer} the number of bytes written
buffers {Buffer[]|TypedArray[]|DataView[]} a reference to the
buffers input.
It is unsafe to call writev() multiple times on the same file without
waiting for the promise to be fulfilled (or rejected).
On Linux, positional writes don’t work when the file is opened in
append mode. The kernel ignores the position argument and always
appends the data to the end of the file.
filehandle[Symbol.asyncDispose]()
Stability: 1 - Experimental
An alias for filehandle.close().
fsPromises.access(path[, mode])
path {string|Buffer|URL}
mode {integer} Default: fs.constants.F_OK
Returns: {Promise} Fulfills with undefined upon success.
Tests a user’s permissions for the file or directory specified by path.
The mode argument is an optional integer that specifies the
accessibility checks to be performed. mode should be either the value
fs.constants.F_OK or a mask consisting of the bitwise OR of any of
fs.constants.R_OK, fs.constants.W_OK, and fs.constants.X_OK (e.g.
fs.constants.W_OK | fs.constants.R_OK). Check File access constants
for possible values of mode.
If the accessibility check is successful, the promise is fulfilled with no
value. If any of the accessibility checks fail, the promise is rejected
with an {Error} object. The following example checks if the file
/etc/passwd can be read and written by the current process.
import { access, constants } from 'node:fs/promises';
try {
await access('/etc/passwd', constants.R_OK | constants.W_OK);
console.log('can access');
} catch {
console.error('cannot access');
}
Using fsPromises.access() to check for the accessibility of a file
before calling fsPromises.open() is not recommended. Doing so
introduces a race condition, since other processes may change the
file’s state between the two calls. Instead, user code should
open/read/write the file directly and handle the error raised if the
file is not accessible.
fsPromises.appendFile(path, data[,
options])
path {string|Buffer|URL|FileHandle} filename or {FileHandle}
data {string|Buffer}
options {Object|string}
encoding {string|null} Default: 'utf8'
mode {integer} Default: 0o666
flag {string} See support of file system flags. Default: 'a'.
flush {boolean} If true, the underlying file descriptor is
flushed prior to closing it. Default: false.
Returns: {Promise} Fulfills with undefined upon success.
Asynchronously append data to a file, creating the file if it does not
yet exist. data can be a string or a {Buffer}.
If options is a string, then it specifies the encoding.
The mode option only affects the newly created file. See fs.open() for
more details.
The path may be specified as a {FileHandle} that has been opened for
appending (using fsPromises.open()).
fsPromises.chmod(path, mode)
path {string|Buffer|URL}
mode {string|integer}
Returns: {Promise} Fulfills with undefined upon success.
Changes the permissions of a file.
fsPromises.chown(path, uid, gid)
path {string|Buffer|URL}
uid {integer}
gid {integer}
Returns: {Promise} Fulfills with undefined upon success.
Changes the ownership of a file.
fsPromises.copyFile(src, dest[, mode])
src {string|Buffer|URL} source filename to copy
dest {string|Buffer|URL} destination filename of the copy
operation
mode {integer} Optional modifiers that specify the behavior of the
copy operation. It is possible to create a mask consisting of the
bitwise OR of two or more values (e.g. fs.constants.COPYFILE_EXCL
| fs.constants.COPYFILE_FICLONE) Default: 0.
fs.constants.COPYFILE_EXCL: The copy operation will fail if
dest already exists.
fs.constants.COPYFILE_FICLONE: The copy operation will
attempt to create a copy-on-write reflink. If the platform does
not support copy-on-write, then a fallback copy mechanism
is used.
fs.constants.COPYFILE_FICLONE_FORCE: The copy operation will
attempt to create a copy-on-write reflink. If the platform does
not support copy-on-write, then the operation will fail.
Returns: {Promise} Fulfills with undefined upon success.
Asynchronously copies src to dest. By default, dest is overwritten if it
already exists.
No guarantees are made about the atomicity of the copy operation. If
an error occurs after the destination file has been opened for writing,
an attempt will be made to remove the destination.
import { copyFile, constants } from 'node:fs/promises';
try {
await copyFile('source.txt', 'destination.txt');
console.log('source.txt was copied to destination.txt');
} catch {
console.error('The file could not be copied');
}
// By using COPYFILE_EXCL, the operation will fail if destination.txt exists.
try {
await copyFile('source.txt', 'destination.txt', constants.COPYFILE_EXCL);
console.log('source.txt was copied to destination.txt');
} catch {
console.error('The file could not be copied');
}
fsPromises.cp(src, dest[, options])
Stability: 1 - Experimental
src {string|URL} source path to copy.
dest {string|URL} destination path to copy to.
options {Object}
dereference {boolean} dereference symlinks. Default: false.
errorOnExist {boolean} when force is false, and the
destination exists, throw an error. Default: false.
filter {Function} Function to filter copied files/directories.
Return true to copy the item, false to ignore it. When
ignoring a directory, all of its contents will be skipped as well.
Can also return a Promise that resolves to true or false.
Default: undefined.
src {string} source path to copy.
dest {string} destination path to copy to.
Returns: {boolean|Promise}
force {boolean} overwrite existing file or directory. The copy
operation will ignore errors if you set this to false and the
destination exists. Use the errorOnExist option to change this
behavior. Default: true.
mode {integer} modifiers for copy operation. Default: 0. See
mode flag of fsPromises.copyFile().
preserveTimestamps {boolean} When true timestamps from src
will be preserved. Default: false.
recursive {boolean} copy directories recursively Default:
false
verbatimSymlinks {boolean} When true, path resolution for
symlinks will be skipped. Default: false
Returns: {Promise} Fulfills with undefined upon success.
Asynchronously copies the entire directory structure from src to
dest, including subdirectories and files.
When copying a directory to another directory, globs are not
supported and behavior is similar to cp dir1/ dir2/.
fsPromises.lchmod(path, mode)
path {string|Buffer|URL}
mode {integer}
Returns: {Promise} Fulfills with undefined upon success.
Changes the permissions on a symbolic link.
This method is only implemented on macOS.
fsPromises.lchown(path, uid, gid)
path {string|Buffer|URL}
uid {integer}
gid {integer}
Returns: {Promise} Fulfills with undefined upon success.
Changes the ownership on a symbolic link.
fsPromises.lutimes(path, atime, mtime)
path {string|Buffer|URL}
atime {number|string|Date}
mtime {number|string|Date}
Returns: {Promise} Fulfills with undefined upon success.
Changes the access and modification times of a file in the same way
as fsPromises.utimes(), with the difference that if the path refers to a
symbolic link, then the link is not dereferenced: instead, the
timestamps of the symbolic link itself are changed.
fsPromises.link(existingPath, newPath)
existingPath {string|Buffer|URL}
newPath {string|Buffer|URL}
Returns: {Promise} Fulfills with undefined upon success.
Creates a new link from the existingPath to the newPath. See the
POSIX link(2) documentation for more detail.
fsPromises.lstat(path[, options])
path {string|Buffer|URL}
options {Object}
bigint {boolean} Whether the numeric values in the returned
{fs.Stats} object should be bigint. Default: false.
Returns: {Promise} Fulfills with the {fs.Stats} object for the given
symbolic link path.
Equivalent to fsPromises.stat() unless path refers to a symbolic link,
in which case the link itself is stat-ed, not the file that it refers to.
Refer to the POSIX lstat(2) document for more detail.
fsPromises.mkdir(path[, options])
path {string|Buffer|URL}
options {Object|integer}
recursive {boolean} Default: false
mode {string|integer} Not supported on Windows. Default:
0o777.
Returns: {Promise} Upon success, fulfills with undefined if
recursive is false, or the first directory path created if recursive is
true.
Asynchronously creates a directory.
The optional options argument can be an integer specifying mode
(permission and sticky bits), or an object with a mode property and a
recursive property indicating whether parent directories should be
created. Calling fsPromises.mkdir() when path is a directory that
exists results in a rejection only when recursive is false.
import { mkdir } from 'node:fs/promises';
try {
const projectFolder = new URL('./test/project/', import.meta.url);
const createDir = await mkdir(projectFolder, { recursive: true });
console.log(`created ${createDir}`);
} catch (err) {
console.error(err.message);
}
const { mkdir } = require('node:fs/promises');
const { join } = require('node:path');
async function makeDirectory() {
const projectFolder = join(__dirname, 'test', 'project');
const dirCreation = await mkdir(projectFolder, { recursive: true });
console.log(dirCreation);
return dirCreation;
}
makeDirectory().catch(console.error);
fsPromises.mkdtemp(prefix[, options])
prefix {string|Buffer|URL}
options {string|Object}
encoding {string} Default: 'utf8'
Returns: {Promise} Fulfills with a string containing the file
system path of the newly created temporary directory.
Creates a unique temporary directory. A unique directory name is
generated by appending six random characters to the end of the
provided prefix. Due to platform inconsistencies, avoid trailing X
characters in prefix. Some platforms, notably the BSDs, can return
more than six random characters, and replace trailing X characters in
prefix with random characters.
The optional options argument can be a string specifying an
encoding, or an object with an encoding property specifying the
character encoding to use.
import { mkdtemp } from 'node:fs/promises';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
try {
await mkdtemp(join(tmpdir(), 'foo-'));
} catch (err) {
console.error(err);
}
The fsPromises.mkdtemp() method will append the six randomly
selected characters directly to the prefix string. For instance, given a
directory /tmp, if the intention is to create a temporary directory
within /tmp, the prefix must end with a trailing platform-specific
path separator (require('node:path').sep).
fsPromises.open(path, flags[, mode])
path {string|Buffer|URL}
flags {string|number} See support of file system flags. Default:
'r'.
mode {string|integer} Sets the file mode (permission and sticky
bits) if the file is created. Default: 0o666 (readable and writable)
Returns: {Promise} Fulfills with a {FileHandle} object.
Opens a {FileHandle}.
Refer to the POSIX open(2) documentation for more detail.
Some characters (< > : " / \ | ? *) are reserved under Windows as
documented by Naming Files, Paths, and Namespaces. Under NTFS,
if the filename contains a colon, Node.js will open a file system
stream, as described by this MSDN page.
fsPromises.opendir(path[, options])
path {string|Buffer|URL}
options {Object}
encoding {string|null} Default: 'utf8'
bufferSize {number} Number of directory entries that are
buffered internally when reading from the directory. Higher
values lead to better performance but higher memory usage.
Default: 32
recursive {boolean} Resolved Dir will be an {AsyncIterable}
containing all sub files and directories. Default: false
Returns: {Promise} Fulfills with an {fs.Dir}.
Asynchronously open a directory for iterative scanning. See the
POSIX opendir(3) documentation for more detail.
Creates an {fs.Dir}, which contains all further functions for reading
from and cleaning up the directory.
The encoding option sets the encoding for the path while opening the
directory and subsequent read operations.
Example using async iteration:
import { opendir } from 'node:fs/promises';
try {
const dir = await opendir('./');
for await (const dirent of dir)
console.log(dirent.name);
} catch (err) {
console.error(err);
}
When using the async iterator, the {fs.Dir} object will be
automatically closed after the iterator exits.
fsPromises.readdir(path[, options])
path {string|Buffer|URL}
options {string|Object}
encoding {string} Default: 'utf8'
withFileTypes {boolean} Default: false
recursive {boolean} If true, reads the contents of a directory
recursively. In recursive mode, it will list all files, sub files,
and directories. Default: false.
Returns: {Promise} Fulfills with an array of the names of the files
in the directory excluding '.' and '..'.
Reads the contents of a directory.
The optional options argument can be a string specifying an
encoding, or an object with an encoding property specifying the
character encoding to use for the filenames. If the encoding is set to
'buffer', the filenames returned will be passed as {Buffer} objects.
If options.withFileTypes is set to true, the returned array will contain
{fs.Dirent} objects.
import { readdir } from 'node:fs/promises';
try {
const files = await readdir(path);
for (const file of files)
console.log(file);
} catch (err) {
console.error(err);
}
fsPromises.readFile(path[, options])
path {string|Buffer|URL|FileHandle} filename or FileHandle
options {Object|string}
encoding {string|null} Default: null
flag {string} See support of file system flags. Default: 'r'.
signal {AbortSignal} allows aborting an in-progress readFile
Returns: {Promise} Fulfills with the contents of the file.
Asynchronously reads the entire contents of a file.
If no encoding is specified (using options.encoding), the data is
returned as a {Buffer} object. Otherwise, the data will be a string.
If options is a string, then it specifies the encoding.
When the path is a directory, the behavior of fsPromises.readFile() is
platform-specific. On macOS, Linux, and Windows, the promise will
be rejected with an error. On FreeBSD, a representation of the
directory’s contents will be returned.
An example of reading a package.json file located in the same
directory of the running code:
import { readFile } from 'node:fs/promises';
try {
const filePath = new URL('./package.json', import.meta.url);
const contents = await readFile(filePath, { encoding: 'utf8' });
console.log(contents);
} catch (err) {
console.error(err.message);
}
const { readFile } = require('node:fs/promises');
const { resolve } = require('node:path');
async function logFile() {
try {
const filePath = resolve('./package.json');
const contents = await readFile(filePath, { encoding: 'utf8' });
console.log(contents);
} catch (err) {
console.error(err.message);
}
}
logFile();
It is possible to abort an ongoing readFile using an {AbortSignal}. If
a request is aborted the promise returned is rejected with an
AbortError:
import { readFile } from 'node:fs/promises';
try {
const controller = new AbortController();
const { signal } = controller;
const promise = readFile(fileName, { signal });
// Abort the request before the promise settles.
controller.abort();
await promise;
} catch (err) {
// When a request is aborted - err is an AbortError
console.error(err);
}
Aborting an ongoing request does not abort individual operating
system requests but rather the internal buffering fs.readFile
performs.
Any specified {FileHandle} has to support reading.
fsPromises.readlink(path[, options])
path {string|Buffer|URL}
options {string|Object}
encoding {string} Default: 'utf8'
Returns: {Promise} Fulfills with the linkString upon success.
Reads the contents of the symbolic link referred to by path. See the
POSIX readlink(2) documentation for more detail. The promise is
fulfilled with the linkString upon success.
The optional options argument can be a string specifying an
encoding, or an object with an encoding property specifying the
character encoding to use for the link path returned. If the encoding is
set to 'buffer', the link path returned will be passed as a {Buffer}
object.
fsPromises.realpath(path[, options])
path {string|Buffer|URL}
options {string|Object}
encoding {string} Default: 'utf8'
Returns: {Promise} Fulfills with the resolved path upon success.
Determines the actual location of path using the same semantics as
the fs.realpath.native() function.
Only paths that can be converted to UTF8 strings are supported.
The optional options argument can be a string specifying an
encoding, or an object with an encoding property specifying the
character encoding to use for the path. If the encoding is set to
'buffer', the path returned will be passed as a {Buffer} object.
On Linux, when Node.js is linked against musl libc, the procfs file
system must be mounted on /proc in order for this function to work.
Glibc does not have this restriction.
fsPromises.rename(oldPath, newPath)
oldPath {string|Buffer|URL}
newPath {string|Buffer|URL}
Returns: {Promise} Fulfills with undefined upon success.
Renames oldPath to newPath.
fsPromises.rmdir(path[, options])
path {string|Buffer|URL}
options {Object}
maxRetries {integer} If an EBUSY, EMFILE, ENFILE, ENOTEMPTY, or
EPERM error is encountered, Node.js retries the operation with
a linear backoff wait of retryDelay milliseconds longer on
each try. This option represents the number of retries. This
option is ignored if the recursive option is not true. Default:
0.
recursive {boolean} If true, perform a recursive directory
removal. In recursive mode, operations are retried on failure.
Default: false. Deprecated.
retryDelay {integer} The amount of time in milliseconds to
wait between retries. This option is ignored if the recursive
option is not true. Default: 100.
Returns: {Promise} Fulfills with undefined upon success.
Removes the directory identified by path.
Using fsPromises.rmdir() on a file (not a directory) results in the
promise being rejected with an ENOENT error on Windows and an
ENOTDIR error on POSIX.
To get a behavior similar to the rm -rf Unix command, use
fsPromises.rm() with options { recursive: true, force: true }.
fsPromises.rm(path[, options])
path {string|Buffer|URL}
options {Object}
force {boolean} When true, exceptions will be ignored if path
does not exist. Default: false.
maxRetries {integer} If an EBUSY, EMFILE, ENFILE, ENOTEMPTY, or
EPERM error is encountered, Node.js will retry the operation
with a linear backoff wait of retryDelay milliseconds longer
on each try. This option represents the number of retries.
This option is ignored if the recursive option is not true.
Default: 0.
recursive {boolean} If true, perform a recursive directory
removal. In recursive mode operations are retried on failure.
Default: false.
retryDelay {integer} The amount of time in milliseconds to
wait between retries. This option is ignored if the recursive
option is not true. Default: 100.
Returns: {Promise} Fulfills with undefined upon success.
Removes files and directories (modeled on the standard POSIX rm
utility).
fsPromises.stat(path[, options])
path {string|Buffer|URL}
options {Object}
bigint {boolean} Whether the numeric values in the returned
{fs.Stats} object should be bigint. Default: false.
Returns: {Promise} Fulfills with the {fs.Stats} object for the given
path.
fsPromises.statfs(path[, options])
path {string|Buffer|URL}
options {Object}
bigint {boolean} Whether the numeric values in the returned
{fs.StatFs} object should be bigint. Default: false.
Returns: {Promise} Fulfills with the {fs.StatFs} object for the
given path.
fsPromises.symlink(target, path[, type])
target {string|Buffer|URL}
path {string|Buffer|URL}
type {string|null} Default: null
Returns: {Promise} Fulfills with undefined upon success.
Creates a symbolic link.
The type argument is only used on Windows platforms and can be
one of 'dir', 'file', or 'junction'. If the type argument is not a
string, Node.js will autodetect target type and use 'file' or 'dir'. If
the target does not exist, 'file' will be used. Windows junction
points require the destination path to be absolute. When using
'junction', the target argument will automatically be normalized to
absolute path. Junction points on NTFS volumes can only point to
directories.
fsPromises.truncate(path[, len])
path {string|Buffer|URL}
len {integer} Default: 0
Returns: {Promise} Fulfills with undefined upon success.
Truncates (shortens or extends the length of) the content at path to
len bytes.
fsPromises.unlink(path)
path {string|Buffer|URL}
Returns: {Promise} Fulfills with undefined upon success.
If path refers to a symbolic link, then the link is removed without
affecting the file or directory to which that link refers. If the path
refers to a file path that is not a symbolic link, the file is deleted. See
the POSIX unlink(2) documentation for more detail.
fsPromises.utimes(path, atime, mtime)
path {string|Buffer|URL}
atime {number|string|Date}
mtime {number|string|Date}
Returns: {Promise} Fulfills with undefined upon success.
Change the file system timestamps of the object referenced by path.
The atime and mtime arguments follow these rules:
Values can be either numbers representing Unix epoch time,
Dates, or a numeric string like '123456789.0'.
If the value can not be converted to a number, or is NaN, Infinity,
or -Infinity, an Error will be thrown.
fsPromises.watch(filename[, options])
filename {string|Buffer|URL}
options {string|Object}
persistent {boolean} Indicates whether the process should
continue to run as long as files are being watched. Default:
true.
recursive {boolean} Indicates whether all subdirectories
should be watched, or only the current directory. This applies
when a directory is specified, and only on supported
platforms (See caveats). Default: false.
encoding {string} Specifies the character encoding to be used
for the filename passed to the listener. Default: 'utf8'.
signal {AbortSignal} An {AbortSignal} used to signal when
the watcher should stop.
Returns: {AsyncIterator} of objects with the properties:
eventType {string} The type of change
filename {string|Buffer|null} The name of the file changed.
Returns an async iterator that watches for changes on filename,
where filename is either a file or a directory.
const { watch } = require('node:fs/promises');
const ac = new AbortController();
const { signal } = ac;
setTimeout(() => ac.abort(), 10000);
(async () => {
try {
const watcher = watch(__filename, { signal });
for await (const event of watcher)
console.log(event);
} catch (err) {
if (err.name === 'AbortError')
return;
throw err;
}
})();
On most platforms, 'rename' is emitted whenever a filename appears
or disappears in the directory.
All the caveats for fs.watch() also apply to fsPromises.watch().
fsPromises.writeFile(file, data[, options])
file {string|Buffer|URL|FileHandle} filename or FileHandle
data
{string|Buffer|TypedArray|DataView|AsyncIterable|Iterable|Str
eam}
options {Object|string}
encoding {string|null} Default: 'utf8'
mode {integer} Default: 0o666
flag {string} See support of file system flags. Default: 'w'.
flush {boolean} If all data is successfully written to the file,
and flush is true, filehandle.sync() is used to flush the data.
Default: false.
signal {AbortSignal} allows aborting an in-progress writeFile
Returns: {Promise} Fulfills with undefined upon success.
Asynchronously writes data to a file, replacing the file if it already
exists. data can be a string, a buffer, an {AsyncIterable}, or an
{Iterable} object.
The encoding option is ignored if data is a buffer.
If options is a string, then it specifies the encoding.
The mode option only affects the newly created file. See fs.open() for
more details.
Any specified {FileHandle} has to support writing.
It is unsafe to use fsPromises.writeFile() multiple times on the same
file without waiting for the promise to be settled.
Similarly to fsPromises.readFile(), fsPromises.writeFile() is a
convenience method that performs multiple write calls internally to
write the buffer passed to it. For performance-sensitive code, consider
using fs.createWriteStream() or filehandle.createWriteStream().
It is possible to use an {AbortSignal} to cancel an
fsPromises.writeFile(). Cancelation is “best effort”, and some
amount of data is likely still to be written.
import { writeFile } from 'node:fs/promises';
import { Buffer } from 'node:buffer';
try {
const controller = new AbortController();
const { signal } = controller;
const data = new Uint8Array(Buffer.from('Hello Node.js'));
const promise = writeFile('message.txt', data, { signal });
// Abort the request before the promise settles.
controller.abort();
await promise;
} catch (err) {
// When a request is aborted - err is an AbortError
console.error(err);
}
Aborting an ongoing request does not abort individual operating
system requests but rather the internal buffering fs.writeFile
performs.
fsPromises.constants
{Object}
Returns an object containing commonly used constants for file
system operations. The object is the same as fs.constants. See FS
constants for more details.
Callback API
The callback APIs perform all operations asynchronously, without
blocking the event loop, then invoke a callback function upon
completion or error.
The callback APIs use the underlying Node.js threadpool to perform
file system operations off the event loop thread. These operations are
not synchronized or threadsafe. Care must be taken when
performing multiple concurrent modifications on the same file or
data corruption may occur.
fs.access(path[, mode], callback)
path {string|Buffer|URL}
mode {integer} Default: fs.constants.F_OK
callback {Function}
err {Error}
Tests a user’s permissions for the file or directory specified by path.
The mode argument is an optional integer that specifies the
accessibility checks to be performed. mode should be either the value
fs.constants.F_OK or a mask consisting of the bitwise OR of any of
fs.constants.R_OK, fs.constants.W_OK, and fs.constants.X_OK (e.g.
fs.constants.W_OK | fs.constants.R_OK). Check File access constants
for possible values of mode.
The final argument, callback, is a callback function that is invoked
with a possible error argument. If any of the accessibility checks fail,
the error argument will be an Error object. The following examples
check if package.json exists, and if it is readable or writable.
import { access, constants } from 'node:fs';
const file = 'package.json';
// Check if the file exists in the current directory.
access(file, constants.F_OK, (err) => {
console.log(`${file} ${err ? 'does not exist' : 'exists'}`);
});
// Check if the file is readable.
access(file, constants.R_OK, (err) => {
console.log(`${file} ${err ? 'is not readable' : 'is readable'}`);
});
// Check if the file is writable.
access(file, constants.W_OK, (err) => {
console.log(`${file} ${err ? 'is not writable' : 'is writable'}`);
});
// Check if the file is readable and writable.
access(file, constants.R_OK | constants.W_OK, (err) => {
console.log(`${file} ${err ? 'is not' : 'is'} readable and writable`);
});
Do not use fs.access() to check for the accessibility of a file before
calling fs.open(), fs.readFile(), or fs.writeFile(). Doing so
introduces a race condition, since other processes may change the
file’s state between the two calls. Instead, user code should
open/read/write the file directly and handle the error raised if the
file is not accessible.
write (NOT RECOMMENDED)
import { access, open, close } from 'node:fs';
access('myfile', (err) => {
if (!err) {
console.error('myfile already exists');
return;
}
open('myfile', 'wx', (err, fd) => {
if (err) throw err;
try {
writeMyData(fd);
} finally {
close(fd, (err) => {
if (err) throw err;
});
}
});
});
write (RECOMMENDED)
import { open, close } from 'node:fs';
open('myfile', 'wx', (err, fd) => {
if (err) {
if (err.code === 'EEXIST') {
console.error('myfile already exists');
return;
}
throw err;
}
try {
writeMyData(fd);
} finally {
close(fd, (err) => {
if (err) throw err;
});
}
});
read (NOT RECOMMENDED)
import { access, open, close } from 'node:fs';
access('myfile', (err) => {
if (err) {
if (err.code === 'ENOENT') {
console.error('myfile does not exist');
return;
}
throw err;
}
open('myfile', 'r', (err, fd) => {
if (err) throw err;
try {
readMyData(fd);
} finally {
close(fd, (err) => {
if (err) throw err;
});
}
});
});
read (RECOMMENDED)
import { open, close } from 'node:fs';
open('myfile', 'r', (err, fd) => {
if (err) {
if (err.code === 'ENOENT') {
console.error('myfile does not exist');
return;
}
throw err;
}
try {
readMyData(fd);
} finally {
close(fd, (err) => {
if (err) throw err;
});
}
});
The “not recommended” examples above check for accessibility and
then use the file; the “recommended” examples are better because
they use the file directly and handle the error, if any.
In general, check for the accessibility of a file only if the file will not
be used directly, for example when its accessibility is a signal from
another process.
On Windows, access-control policies (ACLs) on a directory may limit
access to a file or directory. The fs.access() function, however, does
not check the ACL and therefore may report that a path is accessible
even if the ACL restricts the user from reading or writing to it.
fs.appendFile(path, data[, options],
callback)
path {string|Buffer|URL|number} filename or file descriptor
data {string|Buffer}
options {Object|string}
encoding {string|null} Default: 'utf8'
mode {integer} Default: 0o666
flag {string} See support of file system flags. Default: 'a'.
flush {boolean} If true, the underlying file descriptor is
flushed prior to closing it. Default: false.
callback {Function}
err {Error}
Asynchronously append data to a file, creating the file if it does not
yet exist. data can be a string or a {Buffer}.
The mode option only affects the newly created file. See fs.open() for
more details.
import { appendFile } from 'node:fs';
appendFile('message.txt', 'data to append', (err) => {
if (err) throw err;
console.log('The "data to append" was appended to file!');
});
If options is a string, then it specifies the encoding:
import { appendFile } from 'node:fs';
appendFile('message.txt', 'data to append', 'utf8', callback);
The path may be specified as a numeric file descriptor that has been
opened for appending (using fs.open() or fs.openSync()). The file
descriptor will not be closed automatically.
import { open, close, appendFile } from 'node:fs';
function closeFd(fd) {
close(fd, (err) => {
if (err) throw err;
});
}
open('message.txt', 'a', (err, fd) => {
if (err) throw err;
try {
appendFile(fd, 'data to append', 'utf8', (err) => {
closeFd(fd);
if (err) throw err;
});
} catch (err) {
closeFd(fd);
throw err;
}
});
fs.chmod(path, mode, callback)
path {string|Buffer|URL}
mode {string|integer}
callback {Function}
err {Error}
Asynchronously changes the permissions of a file. No arguments
other than a possible exception are given to the completion callback.
See the POSIX chmod(2) documentation for more detail.
import { chmod } from 'node:fs';
chmod('my_file.txt', 0o775, (err) => {
if (err) throw err;
console.log('The permissions for file "my_file.txt" have been changed!');
});
File modes
The mode argument used in both the fs.chmod() and fs.chmodSync()
methods is a numeric bitmask created using a logical OR of the
following constants:
Constant Octal Description
fs.constants.S_IRUSR 0o400 read by owner
fs.constants.S_IWUSR 0o200 write by owner
fs.constants.S_IXUSR 0o100 execute/search by owner
fs.constants.S_IRGRP 0o40 read by group
fs.constants.S_IWGRP 0o20 write by group
fs.constants.S_IXGRP 0o10 execute/search by group
fs.constants.S_IROTH 0o4 read by others
fs.constants.S_IWOTH 0o2 write by others
fs.constants.S_IXOTH 0o1 execute/search by others
An easier method of constructing the mode is to use a sequence of
three octal digits (e.g. 765). The left-most digit (7 in the example),
specifies the permissions for the file owner. The middle digit (6 in the
example), specifies permissions for the group. The right-most digit (5
in the example), specifies the permissions for others.
Number Description
7 read, write, and execute
6 read and write
5 read and execute
4 read only
3 write and execute
2 write only
1 execute only
0 no permission
For example, the octal value 0o765 means:
The owner may read, write, and execute the file.
The group may read and write the file.
Others may read and execute the file.
When using raw numbers where file modes are expected, any value
larger than 0o777 may result in platform-specific behaviors that are
not supported to work consistently. Therefore constants like S_ISVTX,
S_ISGID, or S_ISUID are not exposed in fs.constants.
Caveats: on Windows only the write permission can be changed, and
the distinction among the permissions of group, owner, or others is
not implemented.
fs.chown(path, uid, gid, callback)
path {string|Buffer|URL}
uid {integer}
gid {integer}
callback {Function}
err {Error}
Asynchronously changes owner and group of a file. No arguments
other than a possible exception are given to the completion callback.
See the POSIX chown(2) documentation for more detail.
fs.close(fd[, callback])
fd {integer}
callback {Function}
err {Error}
Closes the file descriptor. No arguments other than a possible
exception are given to the completion callback.
Calling fs.close() on any file descriptor (fd) that is currently in use
through any other fs operation may lead to undefined behavior.
See the POSIX close(2) documentation for more detail.
fs.copyFile(src, dest[, mode], callback)
src {string|Buffer|URL} source filename to copy
dest {string|Buffer|URL} destination filename of the copy
operation
mode {integer} modifiers for copy operation. Default: 0.
callback {Function}
Asynchronously copies src to dest. By default, dest is overwritten if it
already exists. No arguments other than a possible exception are
given to the callback function. Node.js makes no guarantees about
the atomicity of the copy operation. If an error occurs after the
destination file has been opened for writing, Node.js will attempt to
remove the destination.
mode is an optional integer that specifies the behavior of the copy
operation. It is possible to create a mask consisting of the bitwise OR
of two or more values (e.g. fs.constants.COPYFILE_EXCL |
fs.constants.COPYFILE_FICLONE).
fs.constants.COPYFILE_EXCL: The copy operation will fail if dest
already exists.
fs.constants.COPYFILE_FICLONE: The copy operation will attempt
to create a copy-on-write reflink. If the platform does not support
copy-on-write, then a fallback copy mechanism is used.
fs.constants.COPYFILE_FICLONE_FORCE: The copy operation will
attempt to create a copy-on-write reflink. If the platform does
not support copy-on-write, then the operation will fail.
import { copyFile, constants } from 'node:fs';
function callback(err) {
if (err) throw err;
console.log('source.txt was copied to destination.txt');
}
// destination.txt will be created or overwritten by default.
copyFile('source.txt', 'destination.txt', callback);
// By using COPYFILE_EXCL, the operation will fail if destination.txt exists.
copyFile('source.txt', 'destination.txt', constants.COPYFILE_EXCL, callback);
fs.cp(src, dest[, options], callback)
Stability: 1 - Experimental
src {string|URL} source path to copy.
dest {string|URL} destination path to copy to.
options {Object}
dereference {boolean} dereference symlinks. Default: false.
errorOnExist {boolean} when force is false, and the
destination exists, throw an error. Default: false.
filter {Function} Function to filter copied files/directories.
Return true to copy the item, false to ignore it. When
ignoring a directory, all of its contents will be skipped as well.
Can also return a Promise that resolves to true or false.
Default: undefined.
src {string} source path to copy.
dest {string} destination path to copy to.
Returns: {boolean|Promise}
force {boolean} overwrite existing file or directory. The copy
operation will ignore errors if you set this to false and the
destination exists. Use the errorOnExist option to change this
behavior. Default: true.
mode {integer} modifiers for copy operation. Default: 0. See
mode flag of fs.copyFile().
preserveTimestamps {boolean} When true timestamps from src
will be preserved. Default: false.
recursive {boolean} copy directories recursively Default:
false
verbatimSymlinks {boolean} When true, path resolution for
symlinks will be skipped. Default: false
callback {Function}
Asynchronously copies the entire directory structure from src to
dest, including subdirectories and files.
When copying a directory to another directory, globs are not
supported and behavior is similar to cp dir1/ dir2/.
fs.createReadStream(path[, options])
path {string|Buffer|URL}
options {string|Object}
flags {string} See support of file system flags. Default: 'r'.
encoding {string} Default: null
fd {integer|FileHandle} Default: null
mode {integer} Default: 0o666
autoClose {boolean} Default: true
emitClose {boolean} Default: true
start {integer}
end {integer} Default: Infinity
highWaterMark {integer} Default: 64 * 1024
fs {Object|null} Default: null
signal {AbortSignal|null} Default: null
Returns: {fs.ReadStream}
Unlike the 16 KiB default highWaterMark for a {stream.Readable}, the
stream returned by this method has a default highWaterMark of 64
KiB.
options can include start and end values to read a range of bytes from
the file instead of the entire file. Both start and end are inclusive and
start counting at 0, allowed values are in the [0,
Number.MAX_SAFE_INTEGER] range. If fd is specified and start is omitted
or undefined, fs.createReadStream() reads sequentially from the
current file position. The encoding can be any one of those accepted
by {Buffer}.
If fd is specified, ReadStream will ignore the path argument and will
use the specified file descriptor. This means that no 'open' event will
be emitted. fd should be blocking; non-blocking fds should be passed
to {net.Socket}.
If fd points to a character device that only supports blocking reads
(such as keyboard or sound card), read operations do not finish until
data is available. This can prevent the process from exiting and the
stream from closing naturally.
By default, the stream will emit a 'close' event after it has been
destroyed. Set the emitClose option to false to change this behavior.
By providing the fs option, it is possible to override the
corresponding fs implementations for open, read, and close. When
providing the fs option, an override for read is required. If no fd is
provided, an override for open is also required. If autoClose is true, an
override for close is also required.
import { createReadStream } from 'node:fs';
// Create a stream from some character device.
const stream = createReadStream('/dev/input/event0');
setTimeout(() => {
stream.close(); // This may not close the stream.
// Artificially marking end-of-stream, as if the underlying resource had
// indicated end-of-file by itself, allows the stream to close.
// This does not cancel pending read operations, and if there is such an
// operation, the process may still not be able to exit successfully
// until it finishes.
stream.push(null);
stream.read(0);
}, 100);
If autoClose is false, then the file descriptor won’t be closed, even if
there’s an error. It is the application’s responsibility to close it