{ "type": "module", "source": "doc/api/perf_hooks.md", "modules": [ { "textRaw": "Performance Timing API", "name": "performance_timing_api", "introduced_in": "v8.5.0", "stability": 2, "stabilityText": "Stable", "desc": "
The Performance Timing API provides an implementation of the\nW3C Performance Timeline specification. The purpose of the API\nis to support collection of high resolution performance metrics.\nThis is the same Performance API as implemented in modern Web browsers.
\nconst { PerformanceObserver, performance } = require('perf_hooks');\n\nconst obs = new PerformanceObserver((items) => {\n console.log(items.getEntries()[0].duration);\n performance.clearMarks();\n});\nobs.observe({ entryTypes: ['measure'] });\n\nperformance.mark('A');\ndoSomeLongRunningProcess(() => {\n performance.mark('B');\n performance.measure('A to B', 'A', 'B');\n});\n
",
"modules": [
{
"textRaw": "Class: `Performance`",
"name": "class:_`performance`",
"meta": {
"added": [
"v8.5.0"
],
"changes": []
},
"modules": [
{
"textRaw": "`performance.clearMarks([name])`",
"name": "`performance.clearmarks([name])`",
"meta": {
"added": [
"v8.5.0"
],
"changes": []
},
"desc": "name
<string>If name
is not provided, removes all PerformanceMark
objects from the\nPerformance Timeline. If name
is provided, removes only the named mark.
name
<string>Creates a new PerformanceMark
entry in the Performance Timeline. A\nPerformanceMark
is a subclass of PerformanceEntry
whose\nperformanceEntry.entryType
is always 'mark'
, and whose\nperformanceEntry.duration
is always 0
. Performance marks are used\nto mark specific significant moments in the Performance Timeline.
Creates a new PerformanceMeasure
entry in the Performance Timeline. A\nPerformanceMeasure
is a subclass of PerformanceEntry
whose\nperformanceEntry.entryType
is always 'measure'
, and whose\nperformanceEntry.duration
measures the number of milliseconds elapsed since\nstartMark
and endMark
.
The startMark
argument may identify any existing PerformanceMark
in the\nPerformance Timeline, or may identify any of the timestamp properties\nprovided by the PerformanceNodeTiming
class. If the named startMark
does\nnot exist, then startMark
is set to timeOrigin
by default.
The endMark
argument must identify any existing PerformanceMark
in the\nPerformance Timeline or any of the timestamp properties provided by the\nPerformanceNodeTiming
class. If the named endMark
does not exist, an\nerror will be thrown.
", "type": "module", "displayName": "`performance.measure(name, startMark, endMark)`" }, { "textRaw": "`performance.nodeTiming`", "name": "`performance.nodetiming`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "An instance of the PerformanceNodeTiming class that provides performance\nmetrics for specific Node.js operational milestones.", "type": "module", "displayName": "`performance.nodeTiming`" }, { "textRaw": "`performance.now()`", "name": "`performance.now()`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "Returns the current high resolution millisecond timestamp, where 0 represents\nthe start of the current node process.
", "type": "module", "displayName": "`performance.now()`" }, { "textRaw": "`performance.timeOrigin`", "name": "`performance.timeorigin`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "The timeOrigin specifies the high resolution millisecond timestamp at\nwhich the current node process began, measured in Unix time.
", "type": "module", "displayName": "`performance.timeOrigin`" }, { "textRaw": "`performance.timerify(fn)`", "name": "`performance.timerify(fn)`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "fn <Function>\n\nWraps a function within a new function that measures the running time of the\nwrapped function. A PerformanceObserver must be subscribed to the 'function'\nevent type in order for the timing details to be accessed.
const {\n performance,\n PerformanceObserver\n} = require('perf_hooks');\n\nfunction someFunction() {\n console.log('hello world');\n}\n\nconst wrapped = performance.timerify(someFunction);\n\nconst obs = new PerformanceObserver((list) => {\n console.log(list.getEntries()[0].duration);\n obs.disconnect();\n});\nobs.observe({ entryTypes: ['function'] });\n\n// A performance timeline entry will be created\nwrapped();\n
",
"type": "module",
"displayName": "`performance.timerify(fn)`"
}
],
"type": "module",
"displayName": "Class: `Performance`"
},
{
"textRaw": "Class: `PerformanceEntry`",
"name": "class:_`performanceentry`",
"meta": {
"added": [
"v8.5.0"
],
"changes": []
},
"modules": [
{
"textRaw": "`performanceEntry.duration`",
"name": "`performanceentry.duration`",
"meta": {
"added": [
"v8.5.0"
],
"changes": []
},
"desc": "The total number of milliseconds elapsed for this entry. This value will not\nbe meaningful for all Performance Entry types.
", "type": "module", "displayName": "`performanceEntry.duration`" }, { "textRaw": "`performanceEntry.name`", "name": "`performanceentry.name`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "The name of the performance entry.
", "type": "module", "displayName": "`performanceEntry.name`" }, { "textRaw": "`performanceEntry.startTime`", "name": "`performanceentry.starttime`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "The high resolution millisecond timestamp marking the starting time of the\nPerformance Entry.
", "type": "module", "displayName": "`performanceEntry.startTime`" }, { "textRaw": "`performanceEntry.entryType`", "name": "`performanceentry.entrytype`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "The type of the performance entry. Currently it may be one of: 'node'
,\n'mark'
, 'measure'
, 'gc'
, 'function'
, 'http2'
or 'http'
.
", "type": "module", "displayName": "`performanceEntry.entryType`" }, { "textRaw": "`performanceEntry.kind`", "name": "`performanceentry.kind`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "When performanceEntry.entryType is equal to 'gc', the performanceEntry.kind\nproperty identifies the type of garbage collection operation that occurred.\nThe value may be one of:\n\n* perf_hooks.constants.NODE_PERFORMANCE_GC_MAJOR\n* perf_hooks.constants.NODE_PERFORMANCE_GC_MINOR\n* perf_hooks.constants.NODE_PERFORMANCE_GC_INCREMENTAL\n* perf_hooks.constants.NODE_PERFORMANCE_GC_WEAKCB
", "type": "module", "displayName": "`performanceEntry.kind`" } ], "type": "module", "displayName": "Class: `PerformanceEntry`" }, { "textRaw": "Class: `PerformanceNodeTiming extends PerformanceEntry`", "name": "class:_`performancenodetiming_extends_performanceentry`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "Provides timing details for Node.js itself.
", "modules": [ { "textRaw": "`performanceNodeTiming.bootstrapComplete`", "name": "`performancenodetiming.bootstrapcomplete`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "The high resolution millisecond timestamp at which the Node.js process\ncompleted bootstrapping. If bootstrapping has not yet finished, the property\nhas the value of -1.
", "type": "module", "displayName": "`performanceNodeTiming.bootstrapComplete`" }, { "textRaw": "`performanceNodeTiming.environment`", "name": "`performancenodetiming.environment`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "The high resolution millisecond timestamp at which the Node.js environment was\ninitialized.
", "type": "module", "displayName": "`performanceNodeTiming.environment`" }, { "textRaw": "`performanceNodeTiming.loopExit`", "name": "`performancenodetiming.loopexit`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "The high resolution millisecond timestamp at which the Node.js event loop\nexited. If the event loop has not yet exited, the property has the value of -1.\nIt can only have a value of not -1 in a handler of the 'exit'
event.
The high resolution millisecond timestamp at which the Node.js event loop\nstarted. If the event loop has not yet started (e.g., in the first tick of the\nmain script), the property has the value of -1.
", "type": "module", "displayName": "`performanceNodeTiming.loopStart`" }, { "textRaw": "`performanceNodeTiming.nodeStart`", "name": "`performancenodetiming.nodestart`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "The high resolution millisecond timestamp at which the Node.js process was\ninitialized.
", "type": "module", "displayName": "`performanceNodeTiming.nodeStart`" }, { "textRaw": "`performanceNodeTiming.v8Start`", "name": "`performancenodetiming.v8start`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "The high resolution millisecond timestamp at which the V8 platform was\ninitialized.
", "type": "module", "displayName": "`performanceNodeTiming.v8Start`" } ], "type": "module", "displayName": "Class: `PerformanceNodeTiming extends PerformanceEntry`" }, { "textRaw": "Class: `PerformanceObserver`", "name": "class:_`performanceobserver`", "modules": [ { "textRaw": "`new PerformanceObserver(callback)`", "name": "`new_performanceobserver(callback)`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "callback
<Function>
list
<PerformanceObserverEntryList>observer
<PerformanceObserver>PerformanceObserver
objects provide notifications when new\nPerformanceEntry
instances have been added to the Performance Timeline.
const {\n performance,\n PerformanceObserver\n} = require('perf_hooks');\n\nconst obs = new PerformanceObserver((list, observer) => {\n console.log(list.getEntries());\n observer.disconnect();\n});\nobs.observe({ entryTypes: ['mark'], buffered: true });\n\nperformance.mark('test');\n
\nBecause PerformanceObserver instances introduce their own additional\nperformance overhead, instances should not be left subscribed to notifications\nindefinitely. Users should disconnect observers as soon as they are no\nlonger needed.\n\nThe callback is invoked when a PerformanceObserver is\nnotified about new PerformanceEntry instances. The callback receives a\nPerformanceObserverEntryList instance and a reference to the\nPerformanceObserver.
", "type": "module", "displayName": "`new PerformanceObserver(callback)`" }, { "textRaw": "`performanceObserver.disconnect()`", "name": "`performanceobserver.disconnect()`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "Disconnects the PerformanceObserver instance from all notifications.
", "type": "module", "displayName": "`performanceObserver.disconnect()`" }, { "textRaw": "`performanceObserver.observe(options)`", "name": "`performanceobserver.observe(options)`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "options <Object>\n\n* entryTypes <string[]> An array of strings identifying the types of\nPerformanceEntry instances the observer is interested in. If not\nprovided an error will be thrown.\n* buffered <boolean> If true, the notification callback will be\ncalled using setImmediate() and multiple PerformanceEntry instance\nnotifications will be buffered internally. If false, notifications will\nbe immediate and synchronous. Default: false.\n\nSubscribes the PerformanceObserver instance to notifications of new\nPerformanceEntry instances identified by options.entryTypes.\n\nWhen options.buffered is false, the callback will be invoked once for\nevery PerformanceEntry instance:
const {\n performance,\n PerformanceObserver\n} = require('perf_hooks');\n\nconst obs = new PerformanceObserver((list, observer) => {\n // Called three times synchronously. `list` contains one item.\n});\nobs.observe({ entryTypes: ['mark'] });\n\nfor (let n = 0; n < 3; n++)\n performance.mark(`test${n}`);\n
\nWhen options.buffered is true, the callback will be invoked a single time\nusing setImmediate() with all of the buffered PerformanceEntry instances:\n\nconst {\n performance,\n PerformanceObserver\n} = require('perf_hooks');\n\nconst obs = new PerformanceObserver((list, observer) => {\n // Called once. `list` contains three items.\n});\nobs.observe({ entryTypes: ['mark'], buffered: true });\n\nfor (let n = 0; n < 3; n++)\n  performance.mark(`test${n}`);\n
",
"type": "module",
"displayName": "`performanceObserver.observe(options)`"
}
],
"type": "module",
"displayName": "Class: `PerformanceObserver`"
},
{
"textRaw": "Class: `PerformanceObserverEntryList`",
"name": "class:_`performanceobserverentrylist`",
"meta": {
"added": [
"v8.5.0"
],
"changes": []
},
"desc": "The PerformanceObserverEntryList
class is used to provide access to the\nPerformanceEntry
instances passed to a PerformanceObserver
.
Returns a list of PerformanceEntry
objects in chronological order\nwith respect to performanceEntry.startTime
.
", "type": "module", "displayName": "`performanceObserverEntryList.getEntries()`" }, { "textRaw": "`performanceObserverEntryList.getEntriesByName(name[, type])`", "name": "`performanceobserverentrylist.getentriesbyname(name[,_type])`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "name <string>\ntype <string>\n\nReturns a list of PerformanceEntry objects in chronological order\nwith respect to performanceEntry.startTime whose performanceEntry.name is\nequal to name, and optionally, whose performanceEntry.entryType is equal to\ntype.
", "type": "module", "displayName": "`performanceObserverEntryList.getEntriesByName(name[, type])`" }, { "textRaw": "`performanceObserverEntryList.getEntriesByType(type)`", "name": "`performanceobserverentrylist.getentriesbytype(type)`", "meta": { "added": [ "v8.5.0" ], "changes": [] }, "desc": "type <string>\n\nReturns a list of PerformanceEntry objects in chronological order\nwith respect to performanceEntry.startTime whose performanceEntry.entryType\nis equal to type.
", "type": "module", "displayName": "`performanceObserverEntryList.getEntriesByType(type)`" } ], "type": "module", "displayName": "Class: `PerformanceObserverEntryList`" }, { "textRaw": "`perf_hooks.monitorEventLoopDelay([options])`", "name": "`perf_hooks.monitoreventloopdelay([options])`", "meta": { "added": [ "v11.10.0" ], "changes": [] }, "desc": "options <Object>\n\n* resolution <number> The sampling rate in milliseconds. Must be greater\nthan zero. Default: 10.\n\nCreates a Histogram object that samples and reports the event loop delay\nover time. The delays will be reported in nanoseconds.\n\nUsing a timer to detect approximate event loop delay works because the\nexecution of timers is tied specifically to the lifecycle of the libuv\nevent loop. That is, a delay in the loop will cause a delay in the execution\nof the timer, and those delays are specifically what this API is intended to\ndetect.
\nconst { monitorEventLoopDelay } = require('perf_hooks');\nconst h = monitorEventLoopDelay({ resolution: 20 });\nh.enable();\n// Do something.\nh.disable();\nconsole.log(h.min);\nconsole.log(h.max);\nconsole.log(h.mean);\nconsole.log(h.stddev);\nconsole.log(h.percentiles);\nconsole.log(h.percentile(50));\nconsole.log(h.percentile(99));\n
",
"modules": [
{
"textRaw": "Class: `Histogram`",
"name": "class:_`histogram`",
"meta": {
"added": [
"v11.10.0"
],
"changes": []
},
"desc": "Tracks the event loop delay at a given sampling rate.
", "modules": [ { "textRaw": "`histogram.disable()`", "name": "`histogram.disable()`", "meta": { "added": [ "v11.10.0" ], "changes": [] }, "desc": "Disables the event loop delay sample timer. Returns true
if the timer was\nstopped, false
if it was already stopped.
Enables the event loop delay sample timer. Returns true
if the timer was\nstarted, false
if it was already started.
The number of times the event loop delay exceeded the maximum 1 hour event\nloop delay threshold.
", "type": "module", "displayName": "`histogram.exceeds`" }, { "textRaw": "`histogram.max`", "name": "`histogram.max`", "meta": { "added": [ "v11.10.0" ], "changes": [] }, "desc": "The maximum recorded event loop delay.
", "type": "module", "displayName": "`histogram.max`" }, { "textRaw": "`histogram.mean`", "name": "`histogram.mean`", "meta": { "added": [ "v11.10.0" ], "changes": [] }, "desc": "The mean of the recorded event loop delays.
", "type": "module", "displayName": "`histogram.mean`" }, { "textRaw": "`histogram.min`", "name": "`histogram.min`", "meta": { "added": [ "v11.10.0" ], "changes": [] }, "desc": "The minimum recorded event loop delay.
", "type": "module", "displayName": "`histogram.min`" }, { "textRaw": "`histogram.percentile(percentile)`", "name": "`histogram.percentile(percentile)`", "meta": { "added": [ "v11.10.0" ], "changes": [] }, "desc": "\nReturns the value at the given percentile.
", "type": "module", "displayName": "`histogram.percentile(percentile)`" }, { "textRaw": "`histogram.percentiles`", "name": "`histogram.percentiles`", "meta": { "added": [ "v11.10.0" ], "changes": [] }, "desc": "Returns a Map
object detailing the accumulated percentile distribution.
Resets the collected histogram data.
", "type": "module", "displayName": "`histogram.reset()`" }, { "textRaw": "`histogram.stddev`", "name": "`histogram.stddev`", "meta": { "added": [ "v11.10.0" ], "changes": [] }, "desc": "The standard deviation of the recorded event loop delays.
\nThe following example uses the Async Hooks and Performance APIs to measure\nthe actual duration of a Timeout operation (including the amount of time it took\nto execute the callback).
\n'use strict';\nconst async_hooks = require('async_hooks');\nconst {\n performance,\n PerformanceObserver\n} = require('perf_hooks');\n\nconst set = new Set();\nconst hook = async_hooks.createHook({\n init(id, type) {\n if (type === 'Timeout') {\n performance.mark(`Timeout-${id}-Init`);\n set.add(id);\n }\n },\n destroy(id) {\n if (set.has(id)) {\n set.delete(id);\n performance.mark(`Timeout-${id}-Destroy`);\n performance.measure(`Timeout-${id}`,\n `Timeout-${id}-Init`,\n `Timeout-${id}-Destroy`);\n }\n }\n});\nhook.enable();\n\nconst obs = new PerformanceObserver((list, observer) => {\n console.log(list.getEntries()[0]);\n performance.clearMarks();\n observer.disconnect();\n});\nobs.observe({ entryTypes: ['measure'], buffered: true });\n\nsetTimeout(() => {}, 1000);\n
",
"type": "module",
"displayName": "Measuring the duration of async operations"
},
{
"textRaw": "Measuring how long it takes to load dependencies",
"name": "measuring_how_long_it_takes_to_load_dependencies",
"desc": "The following example measures the duration of require()
operations to load\ndependencies:
'use strict';\nconst {\n performance,\n PerformanceObserver\n} = require('perf_hooks');\nconst mod = require('module');\n\n// Monkey patch the require function\nmod.Module.prototype.require =\n performance.timerify(mod.Module.prototype.require);\nrequire = performance.timerify(require);\n\n// Activate the observer\nconst obs = new PerformanceObserver((list) => {\n const entries = list.getEntries();\n entries.forEach((entry) => {\n console.log(`require('${entry[0]}')`, entry.duration);\n });\n obs.disconnect();\n});\nobs.observe({ entryTypes: ['function'], buffered: true });\n\nrequire('some-module');\n
",
"type": "module",
"displayName": "Measuring how long it takes to load dependencies"
}
],
"type": "module",
"displayName": "`perf_hooks.monitorEventLoopDelay([options])`"
}
],
"type": "module",
"displayName": "Performance Timing API"
}
]
}