Releases: mongodb/node-mongodb-native
v6.10.0
6.10.0 (2024-10-21)
The MongoDB Node.js team is pleased to announce version 6.10.0 of the mongodb
package!
Release Notes
Warning
Server versions 3.6 and lower will receive a compatibility error on connection, and support for MONGODB-CR authentication has been removed.
Support for new client bulkWrite API (8.0+)
A new bulk write API on the MongoClient
is now supported for users on server versions 8.0 and higher.
This API is meant to replace the existing bulk write API on the Collection
as it supports a bulk
write across multiple databases and collections in a single call.
Usage
Users of this API call MongoClient#bulkWrite
and provide a list of bulk write models and options.
The models have a structure as follows:
Insert One
Note that when no _id
field is provided in the document, the driver will generate a BSON ObjectId
automatically.
{
namespace: '<db>.<collection>',
name: 'insertOne',
document: Document
}
Update One
{
namespace: '<db>.<collection>',
name: 'updateOne',
filter: Document,
update: Document | Document[],
arrayFilters?: Document[],
hint?: Document | string,
collation?: Document,
upsert: boolean
}
Update Many
Note that write errors occurring when an updateMany model is present are not retryable.
{
namespace: '<db>.<collection>',
name: 'updateMany',
filter: Document,
update: Document | Document[],
arrayFilters?: Document[],
hint?: Document | string,
collation?: Document,
upsert: boolean
}
Replace One
{
namespace: '<db>.<collection>',
name: 'replaceOne',
filter: Document,
replacement: Document,
hint?: Document | string,
collation?: Document
}
Delete One
{
namespace: '<db>.<collection>',
name: 'deleteOne',
filter: Document,
hint?: Document | string,
collation?: Document
}
Delete Many
Note that write errors occurring when a deleteMany model is present are not retryable.
{
namespace: '<db>.<collection>',
name: 'deleteMany',
filter: Document,
hint?: Document | string,
collation?: Document
}
Example
Below is a mixed model example of using the new API:
const client = new MongoClient(process.env.MONGODB_URI);
const models = [
{
name: 'insertOne',
namespace: 'db.authors',
document: { name: 'King' }
},
{
name: 'insertOne',
namespace: 'db.books',
document: { name: 'It' }
},
{
name: 'updateOne',
namespace: 'db.books',
filter: { name: 'it' },
update: { $set: { year: 1986 } }
}
];
const result = await client.bulkWrite(models);
The bulk write specific options that can be provided to the API are as follows:
ordered: Optional boolean that indicates whether the bulk write is ordered. Defaults to true.
verboseResults: Optional boolean that indicates whether to provide verbose results. Defaults to false.
bypassDocumentValidation: Optional boolean to bypass document validation rules. Defaults to false.
let: Optional document of parameter names and values that can be accessed using $$var. No default.
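For example, reusing the models array from the example above, an unordered bulk write with verbose results could be requested as a minimal sketch:
const result = await client.bulkWrite(models, {
  ordered: false,       // do not stop on the first write error
  verboseResults: true  // populate the per-operation result maps
});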
The object returned by the bulk write API is:
interface ClientBulkWriteResult {
// Whether the bulk write was acknowledged.
readonly acknowledged: boolean;
// The total number of documents inserted across all insert operations.
readonly insertedCount: number;
// The total number of documents upserted across all update operations.
readonly upsertedCount: number;
// The total number of documents matched across all update operations.
readonly matchedCount: number;
// The total number of documents modified across all update operations.
readonly modifiedCount: number;
// The total number of documents deleted across all delete operations.
readonly deletedCount: number;
// The results of each individual insert operation that was successfully performed.
// Note the keys in the map are the associated index in the models array.
readonly insertResults?: ReadonlyMap<number, ClientInsertOneResult>;
// The results of each individual update operation that was successfully performed.
// Note the keys in the map are the associated index in the models array.
readonly updateResults?: ReadonlyMap<number, ClientUpdateResult>;
// The results of each individual delete operation that was successfully performed.
// Note the keys in the map are the associated index in the models array.
readonly deleteResults?: ReadonlyMap<number, ClientDeleteResult>;
}
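As a sketch of consuming this result (assuming verboseResults was enabled so the per-operation maps are populated, and assuming the insert result exposes the generated _id as insertedId):
const result = await client.bulkWrite(models, { verboseResults: true });
console.log(result.insertedCount, result.matchedCount, result.deletedCount);
// keys of the verbose result maps are the indexes in the models array
for (const [index, insertResult] of result.insertResults ?? []) {
  console.log(`model ${index} inserted _id:`, insertResult.insertedId);
}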
Error Handling
Server side errors encountered during a bulk write will throw a MongoClientBulkWriteError
. This error
has the following properties:
writeConcernErrors: An array of documents for each write concern error that occurred.
writeErrors: A map keyed by the index of the model in the models array, containing the individual write error.
partialResult: The client bulk write result at the point where the error was thrown.
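A minimal error-handling sketch using the properties above (MongoClientBulkWriteError is exported from the mongodb package):
try {
  await client.bulkWrite(models);
} catch (error) {
  if (error instanceof MongoClientBulkWriteError) {
    // keys are the indexes of the failing models in the models array
    for (const [index, writeError] of error.writeErrors) {
      console.log(`model ${index} failed:`, writeError);
    }
    console.log('result before failure:', error.partialResult);
  }
}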
Schema assertion support
interface Book {
name: string;
authorName: string;
}
interface Author {
name: string;
}
type MongoDBSchemas = {
'db.books': Book;
'db.authors': Author;
}
const model: ClientBulkWriteModel<MongoDBSchemas> = {
namespace: 'db.books',
name: 'insertOne',
document: { title: 'Practical MongoDB Aggregations', authorName: 3 }
// error `authorName` cannot be number
};
Notice how authorName is type checked against the Book
type because namespace is set to "db.books"
.
Allow SRV hostnames with fewer than three dot-separated parts
To make internal networking solutions such as Kubernetes deployments easier to use, the client now accepts SRV hostname strings with one or two dot-separated parts.
await new MongoClient('mongodb+srv://mongodb.local').connect();
For security reasons, the returned addresses of SRV strings with fewer than three parts must end with the entire SRV hostname and contain at least one additional domain level. This validation ensures that the returned address(es) come from a known host. In future releases, we plan to extend this validation to SRV strings with three or more parts as well.
// Example 1: Validation fails since the returned address doesn't end with the entire SRV hostname
'mongodb+srv://mySite.com' => 'myEvilSite.com'
// Example 2: Validation fails since the returned address is identical to the SRV hostname
'mongodb+srv://mySite.com' => 'mySite.com'
// Example 3: Validation passes since the returned address ends with the entire SRV hostname and contains an additional domain level
'mongodb+srv://mySite.com' => 'cluster_1.mySite.com'
Explain now supports maxTimeMS
Driver CRUD commands can be explained by providing the explain
option:
collection.find({}).explain('queryPlanner'); // using the fluent cursor API
collection.deleteMany({}, { explain: 'queryPlanner' }); // as an option
However, if maxTimeMS was provided, the maxTimeMS value was applied to the command to explain, and consequently the server could take more than maxTimeMS to respond.
Now, maxTimeMS can be specified as a new option for explain commands:
collection.find({}).explain({ verbosity: 'queryPlanner', maxTimeMS: 2000 }); // using the fluent cursor API
collection.deleteMany({}, {
  explain: {
    verbosity: 'queryPlanner',
    maxTimeMS: 2000
  }
}); // as an option
If a top-level maxTimeMS option is provided in addition to the explain maxTimeMS, the explain-specific maxTimeMS is applied to the explain command, and the top-level maxTimeMS is applied to the explained command:
collection.deleteMany({}, {
  maxTimeMS: 1000,
  explain: {
    verbosity: 'queryPlanner',
    maxTimeMS: 2000
  }
});
// the actual command that gets sent to the server looks like:
{
explain: { delete: <collection name>, ..., maxTimeMS: 1000 },
verbosity: 'queryPlanner',
maxTimeMS: 2000
}
Find and Aggregate Explain in Options is Deprecated
Note
Specifying explain for cursors in the operation's options is deprecated in favor of the .explain()
methods on cursors:
collection.find({}, { explain: true })
// -> collection.find({}).explain()
collection.aggregate([], { explain: true })
// -> collection.aggregate([]).explain()
db.aggregate([], { explain: true })
// -> db.aggregate([]).explain()
Fixed writeConcern.w set to 0 unacknowledged write protocol trigger
The driver now correctly handles w=0 writes as 'fire-and-forget' messages, where the server does not reply and the driver does not wait for a response. This change eliminates the possibility of encountering certain rare protocol format, BSON type, or network errors that could previously arise during server replies. As a result, w=0 operations now involve less I/O, specifically no socket read.
In addition, when command monitoring is enabled, the reply
field of a CommandSucceededEvent
of an unacknowledged write will always be { ok: 1 }
.
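For example, with command monitoring enabled, the reply recorded for an unacknowledged insert is the constant document shown above:
const client = new MongoClient(process.env.MONGODB_URI, { monitorCommands: true });
client.on('commandSucceeded', event => {
  // for w: 0 writes the driver does not wait for a server response,
  // so the recorded reply is always { ok: 1 }
  console.log(event.commandName, event.reply);
});
await client.db('test').collection('docs').insertOne({ a: 1 }, { writeConcern: { w: 0 } });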
Fixed indefinite hang bug for high write load scenarios
When performing many large write operations, the driver will likely encounter buffering at the socket layer. The logic that waited until buffered writes completed would mistakenly drop 'data' events (reads from the socket), causing the driver to hang indefinitely or until a socket timeout. Using the pause and resume mechanisms exposed by Node.js streams, we have eliminated the possibility of data events going unhandled.
Shout out to @hunkydoryrepair for debugging and finding this issue!
Fixed change stream infinite resume
Before this fix, when change streams failed to establish a cursor on the server, the driver would attempt to resume the change stream indefinitely. Now, when the aggregate that establishes the change stream fails, the driver will throw an error and close the change stream.
`ClientSession.commitTransactio...
v6.9.0
6.9.0 (2024-09-06)
The MongoDB Node.js team is pleased to announce version 6.9.0 of the mongodb
package!
Release Notes
Driver support of upcoming MongoDB server release
Increased the driver's max supported Wire Protocol version and server version in preparation for the upcoming release of MongoDB 8.0.
MongoDB 3.6 server support deprecated
Warning
Support for 3.6 servers is deprecated and will be removed in a future version.
Support for explicit resource management
The driver now natively supports explicit resource management for MongoClient
, ClientSession
, ChangeStreams
and cursors. Additionally, on compatible Node.js versions, explicit resource management can be used with cursor.stream()
and the GridFSDownloadStream
, since these classes inherit resource management from Node.js' readable streams.
This feature is experimental and subject to changes at any time. This feature will remain experimental until the proposal has reached stage 4 and Node.js declares its implementation of async disposable resources as stable.
To use explicit resource management with the Node driver, you must:
- Use Typescript 5.2 or greater (or another bundler that supports resource management)
- Enable tslib polyfills for your application
- Either use a compatible Node.js version or polyfill Symbol.asyncDispose (see the TS 5.2 release announcement for more information).
Explicit resource management is a feature that ensures that resources' disposal methods are always called when the resources' scope is exited. For driver resources, explicit resource management guarantees that the resources' corresponding close
method is called when the resource goes out of scope.
// before:
{
  const client = await MongoClient.connect('<uri>');
  try {
    const session = client.startSession();
    try {
      const cursor = client.db('my-db').collection('my-collection').find({}, { session });
      try {
        const doc = await cursor.next();
      } finally {
        await cursor.close();
      }
    } finally {
      await session.endSession();
    }
  } finally {
    await client.close();
  }
}
// with explicit resource management:
{
await using client = await MongoClient.connect('<uri>');
await using session = client.startSession();
await using cursor = client.db('my-db').collection('my-collection').find({}, { session });
const doc = await cursor.next();
}
// outside of scope, the cursor, session and mongo client will be cleaned up automatically.
The full explicit resource management proposal can be found here.
Driver now supports auto selecting between IPv4 and IPv6 connections
On Node.js versions that support the autoSelectFamily and autoSelectFamilyAttemptTimeout options (Node 18.13+), these options can now be provided to the MongoClient and will be passed through to socket creation. autoSelectFamily defaults to true, and autoSelectFamilyAttemptTimeout is not defined by default. Example:
const client = new MongoClient(process.env.MONGODB_URI, { autoSelectFamilyAttemptTimeout: 100 });
Allow passing through allowPartialTrustChain
Node.js TLS option
This option is now exposed through the MongoClient constructor's options parameter and controls the X509_V_FLAG_PARTIAL_CHAIN
OpenSSL flag.
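For example, the option can be passed directly in the MongoClient options:
const client = new MongoClient(process.env.MONGODB_URI, { allowPartialTrustChain: true });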
Fixed enableUtf8Validation
option
Starting in v6.8.0 we inadvertently removed the ability to disable UTF-8 validation when deserializing BSON. Validation is normally a good thing, but it was always meant to be configurable and the recent Node.js runtime issues (v22.7.0) make this option indispensable for avoiding errors from mistakenly generated invalid UTF-8 bytes.
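As a sketch, the option can be passed as a BSON deserialization option, for example per operation:
// skip UTF-8 validation for this query's results
const docs = await collection.find({}, { enableUtf8Validation: false }).toArray();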
Add duration indicating time elapsed between connection creation and when the connection is ready
ConnectionReadyEvent
now has a durationMS
property that represents the time between the connection creation event and when the connection ready event is fired.
Add duration indicating time elapsed between the beginning and end of a connection checkout operation
ConnectionCheckedOutEvent
/ConnectionCheckFailedEvent
now have a durationMS
property that represents the time between checkout start and success/failure.
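A small sketch of observing these durations through the connection pool monitoring events:
client.on('connectionReady', event => {
  console.log(`connection ready after ${event.durationMS}ms`);
});
client.on('connectionCheckedOut', event => {
  console.log(`connection checkout took ${event.durationMS}ms`);
});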
Create native cryptoCallbacks
Node.js bundles OpenSSL, which means we can access the crypto APIs from C++ directly, avoiding the need to define them in JavaScript and call back into the JS engine to perform encryption. Now, when running the bindings in a version of Node.js that bundles OpenSSL 3 (should correspond to Node.js 18+), the cryptoCallbacks
option will be ignored and C++ defined callbacks will be used instead. This improves the performance of encryption dramatically, as much as 5x faster.
This improvement was made in mongodb-client-encryption and is available now!
Only permit mongocryptd spawn path and arguments to be own properties
We have added some defensive programming to the options that specify spawn path and spawn arguments for mongocryptd
due to the sensitivity of the system resource they control, namely, launching a process. Now, mongocryptdSpawnPath
and mongocryptdSpawnArgs
must be own properties of autoEncryption.extraOptions
. This makes it more difficult for a global prototype pollution bug related to these options to occur.
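As an illustrative sketch (the spawn path, arguments, and localMasterKey below are placeholders, not prescribed values), these options must be provided as own properties of autoEncryption.extraOptions:
const client = new MongoClient(process.env.MONGODB_URI, {
  autoEncryption: {
    keyVaultNamespace: 'encryption.__keyVault',
    kmsProviders: { local: { key: localMasterKey } },
    extraOptions: {
      // must be own (non-inherited) properties of extraOptions
      mongocryptdSpawnPath: '/usr/local/bin/mongocryptd',
      mongocryptdSpawnArgs: ['--idleShutdownTimeoutSecs=60']
    }
  }
});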
Support for range v2: Queryable Encryption supports range queries
Queryable encryption range queries are now officially supported. To use this feature, you must:
- use a version of mongodb-client-encryption > 6.1.0
- use a Node driver version > 6.9.0
- use an 8.0+ MongoDB enterprise server
Important
Collections and documents encrypted with range queryable fields with a 7.0 server are not compatible with range queries on 8.0 servers.
Documentation for queryable encryption can be found in the MongoDB server manual.
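As a rough, illustrative sketch only (field names and bounds are hypothetical; consult the server manual for the full set of range options), a range-queryable encrypted field can be declared when creating an encrypted collection:
const encryptedFields = {
  fields: [
    {
      path: 'salary',
      bsonType: 'long',
      keyId: dataKeyId, // assumed: a data key previously created with ClientEncryption#createDataKey
      queries: { queryType: 'range', min: new Long(0), max: new Long(1000000) }
    }
  ]
};
// assumed: db belongs to a MongoClient configured with autoEncryption
await db.createCollection('employees', { encryptedFields });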
insertMany
and bulkWrite
accept ReadonlyArray
inputs
This improves the TypeScript developer experience: developers tend to use ReadonlyArray because it helps track where mutations are made, and when noUncheckedIndexedAccess is enabled it leads to a better type-narrowing experience.
Please note that the array is read-only but the documents are not: the driver adds _id fields to your documents unless you request that the server generate the _id with forceServerObjectId.
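For example:
const docs: ReadonlyArray<{ name: string }> = [{ name: 'Ada' }, { name: 'Grace' }];
await collection.insertMany(docs); // now accepted by the TypeScript types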
Fix retryability criteria for write concern errors on pre-4.4 sharded clusters
Previously, the driver would erroneously retry writes on pre-4.4 sharded clusters based on a nested code in the server response (error.result.writeConcernError.code). Per the common drivers specification, retryability should be based on the top-level code (error.code). With this fix, the driver avoids unnecessary retries.
The LocalKMSProviderConfiguration
's key
property accepts Binary
for auto encryption
In #4160 we fixed a type issue where a local KMS provider at runtime accepted a BSON Binary instance but the TypeScript types inaccurately permitted only Buffer and string. The same change has now been applied to AutoEncryptionOptions.
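A minimal sketch, assuming a 96-byte local master key held in a Buffer named localMasterKeyBuffer (Binary is exported from the mongodb package):
const client = new MongoClient(process.env.MONGODB_URI, {
  autoEncryption: {
    keyVaultNamespace: 'encryption.__keyVault',
    kmsProviders: {
      // previously only Buffer | string passed the type check; Binary is now accepted too
      local: { key: new Binary(localMasterKeyBuffer) }
    }
  }
});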
BulkOperationBase
(superclass of UnorderedBulkOperation
and OrderedBulkOperation
) now reports length
property in TypeScript
The length getter for these classes was defined manually using Object.defineProperty, which hid it from TypeScript. Thanks to @sis0k0 we now have the getter defined on the class, which is functionally the same but greatly improves the DX when working with types.
MongoWriteConcernError.code
is overwritten by nested code within MongoWriteConcernError.result.writeConcernError.code
MongoWriteConcernError
is now correctly formed such that the original top-level code is preserved:
- If no top-level code exists, MongoWriteConcernError.code is set to MongoWriteConcernError.result.writeConcernError.code
- If a top-level code is passed into the constructor, it is not changed or overwritten by the nested writeConcernError.code
Optimized cursor.toArray()
Prior to this change, toArray()
simply used the cursor's async iterator API, which parses BSON documents lazily (see more here). toArray()
, however, eagerly fetches the entire set of results, pushing each document into the returned array. As such, toArray
does not have the same benefits from lazy parsing as other parts of the cursor API.
With this change, when toArray()
accumulates documents, it empties the current batch of documents into the array before calling the async iterator again, which means each iteration will fetch the next batch rather than wrap each d...
v6.8.2
6.8.2 (2024-09-12)
The MongoDB Node.js team is pleased to announce version 6.8.2 of the mongodb
package!
Release Notes
Fixed mixed use of cursor.next() and cursor[Symbol.asyncIterator]
In 6.8.0, we inadvertently prevented the use of cursor.next() along with using for await syntax to iterate cursors. If your code made use of the following pattern and the call to cursor.next retrieved all your documents in the first batch, then the for-await loop would never be entered. This issue is now fixed.
const firstDoc = await cursor.next();
for await (const doc of cursor) {
// process doc
// ...
}
Bug Fixes
Documentation
We invite you to try the mongodb
library immediately, and report any issues to the NODE project.
v6.8.1
6.8.1 (2024-09-06)
The MongoDB Node.js team is pleased to announce version 6.8.1 of the mongodb
package!
Release Notes
Fixed enableUtf8Validation
option
Starting in v6.8.0 we inadvertently removed the ability to disable UTF-8 validation when deserializing BSON. Validation is normally a good thing, but it was always meant to be configurable and the recent Node.js runtime issues (v22.7.0) make this option indispensable for avoiding errors from mistakenly generated invalid UTF-8 bytes.
Bug Fixes
Documentation
We invite you to try the mongodb
library immediately, and report any issues to the NODE project.
v6.8.0
6.8.0 (2024-06-27)
The MongoDB Node.js team is pleased to announce version 6.8.0 of the mongodb
package!
Release Notes
Add ReadConcernMajorityNotAvailableYet
to retryable errors
ReadConcernMajorityNotAvailableYet
(error code 134
) is now a retryable read error.
ClientEncryption.createDataKey() and other helpers now support named KMS providers
KMS providers can now be associated with a name and multiple keys can be provided per-KMS provider. The following example configures a ClientEncryption object with multiple AWS keys:
const clientEncryption = new ClientEncryption(keyVaultClient, {
  keyVaultNamespace: 'encryption.__keyVault',
  kmsProviders: {
    'aws:key1': {
      accessKeyId: ...,
      secretAccessKey: ...
    },
    'aws:key2': {
      accessKeyId: ...,
      secretAccessKey: ...
    }
  }
});

clientEncryption.createDataKey('aws:key1', { ... });
Named KMS providers are supported for azure, AWS, KMIP, local and gcp KMS providers. Named KMS providers cannot be used if the application is using the automatic KMS provider refresh capability.
This feature requires mongodb-client-encryption>=6.0.1.
KMIP data keys now support a delegated
option
When creating a KMIP data key, delegated
can now be specified. If true, the KMIP provider will perform encryption / decryption of the data key locally, ensuring that the encryption key never leaves the KMIP server.
clientEncryption.createDataKey('kmip', { masterKey: { delegated: true } } );
This feature requires mongodb-client-encryption>=6.0.1.
Cursor responses are now parsed lazily π¦₯
MongoDB cursors (find, aggregate, etc.) operate on batches of documents equal to batchSize
. Each time the driver runs out of documents for the current batch it gets more (getMore
) and returns each document one at a time through APIs like cursor.next()
or for await (const doc of cursor)
.
Prior to this change, the Node.js driver was designed in such a way that the entire BSON response was decoded after it was received. Parsing BSON, just like parsing JSON, is a synchronous blocking operation. This means that throughout a cursor's lifetime invocations of .next()
that need to fetch a new batch hold up on parsing batchSize
(default 1000) documents before returning to the user.
In an effort to provide more responsiveness, the driver now decodes BSON "on demand". By operating on the layers of data returned by the server, the driver now receives a batch, and only obtains metadata like size, and if there are more documents to iterate after this batch. After that, each document is parsed out of the BSON as the cursor is iterated.
A perfect example of where this comes in handy is our beloved mongosh!
test> db.test.find()
[
{ _id: ObjectId('665f7fc5c9d5d52227434c65'), ... },
...
]
Type "it" for more
That Type "it" for more
message would now print after parsing only the documents displayed rather than after the entire batch is parsed.
Add Signature to Github Releases
The Github release for the mongodb
package now contains a detached signature file for the NPM package (named
mongodb-X.Y.Z.tgz.sig
), on every major and patch release to 6.x and 5.x. To verify the signature, follow the instructions in the 'Release Integrity' section of the README.md
file.
The LocalKMSProviderConfiguration
's key
property accepts Binary
A local
KMS provider at runtime accepted a BSON
Binary
instance but the TypeScript types inaccurately permitted only Buffer and string.
Clarified cursor state properties
The cursor has a few properties that represent the current state from the perspective of the driver and server. This PR corrects an issue that never made it to a release, but we would like to take the opportunity to re-highlight what each of these properties means.
cursor.closed - cursor.close() has been called, and there are no more documents stored in the cursor.
cursor.killed - cursor.close() was called while the cursor still had a non-zero id, and the driver sent a killCursors command to free server-side resources.
cursor.id == null - The cursor has yet to send its first command (e.g. find, aggregate).
cursor.id.isZero() - The server sent the driver a cursor id of 0, indicating the cursor no longer exists on the server side because all data has been returned to the driver.
cursor.bufferedCount() - The number of documents stored locally in the cursor.
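A short sketch illustrating these properties while iterating:
const cursor = collection.find({});
console.log(cursor.id); // null: the initial find has not been sent yet
const doc = await cursor.next(); // sends the find and buffers the first batch
console.log(cursor.bufferedCount()); // documents remaining in the local buffer
await cursor.close();
console.log(cursor.closed); // true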
Features
- NODE-5718: add ReadConcernMajorityNotAvailableYet to retryable errors (#4154) (4f32dec)
- NODE-5801: allow multiple providers per type (#4137) (4d209ce)
- NODE-5853: support delegated KMIP data key option (#4129) (aa429f8)
- NODE-6136: parse cursor responses on demand (#4112) (3ed6a2a)
- NODE-6157: add signature to github releases (#4119) (f38c5fe)
Bug Fixes
- NODE-5801: use more specific key typing for multiple KMS provider support (#4146) (465ffd9)
- NODE-6085: add TS support for KMIP data key options (#4128) (f790cc1)
- NODE-6241: allow Binary as local KMS provider key (#4160) (fb724eb)
- NODE-6242: close becomes true after calling close when documents still remain (#4161) (e3d70c3)
Documentation
We invite you to try the mongodb
library immediately, and report any issues to the NODE project.
v6.7.0
6.7.0 (2024-05-29)
The MongoDB Node.js team is pleased to announce version 6.7.0 of the mongodb
package!
Release Notes
Support for MONGODB-OIDC Authentication
MONGODB-OIDC
is now supported as an authentication mechanism for MongoDB server versions 7.0+. The currently supported authentication flows are callback authentication, human interaction callback authentication, Azure machine authentication, and GCP machine authentication.
Azure Machine Authentication
The MongoClient
must be instantiated with authMechanism=MONGODB-OIDC
in the URI or in the client options. The auth mechanism properties TOKEN_RESOURCE and ENVIRONMENT are required, and an optional username can also be provided. Example:
const client = new MongoClient('mongodb+srv://<username>@<host>:<port>/?authMechanism=MONGODB-OIDC&authMechanismProperties=TOKEN_RESOURCE:<azure_token>,ENVIRONMENT:azure');
await client.connect();
GCP Machine Authentication
The MongoClient
must be instantiated with authMechanism=MONGODB-OIDC
in the URI or in the client options. The auth mechanism properties TOKEN_RESOURCE and ENVIRONMENT are required. Example:
const client = new MongoClient('mongodb+srv://<host>:<port>/?authMechanism=MONGODB-OIDC&authMechanismProperties=TOKEN_RESOURCE:<gcp_token>,ENVIRONMENT:gcp');
await client.connect();
Callback Authentication
The user can provide a custom callback to the MongoClient
that returns a valid response with an access token. The callback is provided as an auth mechanism property and has the following signature:
const oidcCallback = async (params: OIDCCallbackParams): Promise<OIDCResponse> => {
// params.timeoutContext is an AbortSignal that will abort after 30 seconds for non-human and 5 minutes for human.
// params.version is the current OIDC API version.
// params.idpInfo is the IdP info returned from the server.
// params.username is the optional username.
// Make a call to get a token.
const token = ...;
return {
accessToken: token,
expiresInSeconds: 300,
refreshToken: token
};
}
const client = new MongoClient('mongodb+srv://<host>:<port>/?authMechanism=MONGODB-OIDC', {
authMechanismProperties: {
OIDC_CALLBACK: oidcCallback
}
});
await client.connect();
For callbacks that require human interaction, set the callback to the OIDC_HUMAN_CALLBACK
property:
const client = new MongoClient('mongodb+srv://<host>:<port>/?authMechanism=MONGODB-OIDC', {
authMechanismProperties: {
OIDC_HUMAN_CALLBACK: oidcCallback
}
});
await client.connect();
Fixed error when useBigInt64=true was set on Db or MongoClient
Fixed an issue where setting useBigInt64=true on a MongoClient or Db caused an internal function, compareTopologyVersion, to throw an error when encountering a bigint value.
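For example, the following no longer throws:
const client = new MongoClient(process.env.MONGODB_URI, { useBigInt64: true });
// int64 values in returned documents are deserialized as native bigint
const doc = await client.db('test').collection('numbers').findOne();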
Features
Bug Fixes
Documentation
We invite you to try the mongodb
library immediately, and report any issues to the NODE project.
v6.6.2
6.6.2 (2024-05-15)
The MongoDB Node.js team is pleased to announce version 6.6.2 of the mongodb
package!
Release Notes
Server Selection performance regression due to incorrect RTT measurement
Starting in version 6.6.0, when using the stream
server monitoring mode, heartbeats were incorrectly timed as having a duration of 0, leading to server selection viewing each server as equally desirable for selection.
Bug Fixes
Documentation
We invite you to try the mongodb
library immediately, and report any issues to the NODE project.
v6.6.1
6.6.1 (2024-05-06)
The MongoDB Node.js team is pleased to announce version 6.6.1 of the mongodb
package!
Release Notes
ref()
-ed timer keeps event loop running until client.connect()
resolves
When the MongoClient is first starting up (client.connect()), monitoring connections begin the process of discovering servers to make them selectable. The ref()-ed serverSelectionTimeoutMS timer keeps Node.js' event loop running as the monitoring connections are created. In the last release we inadvertently unref()-ed this initial timer, which allowed Node.js to exit before the monitors could create connections.
Bug Fixes
Documentation
We invite you to try the mongodb
library immediately, and report any issues to the NODE project.
v6.6.0
6.6.0 (2024-05-02)
The MongoDB Node.js team is pleased to announce version 6.6.0 of the mongodb
package!
Release Notes
Aggregation pipelines can now add stages manually
When creating an aggregation pipeline cursor, a new generic method addStage()
has been added to the fluent API for users to add aggregation pipeline stages in a general manner.
const documents = await users.aggregate().addStage({ $project: { name: true } }).toArray();
Thank you @prenaissance for contributing this feature!
cause and package name included for MongoMissingDependencyErrors
MongoMissingDependencyError
s now include a cause
and a dependencyName
field, which can be used to programmatically determine which package is missing and why the driver failed to load it.
For example:
MongoMissingDependencyError: The iHateJavascript module does not exist
at findOne (mongodb/main.js:7:11)
at Object.<anonymous> (mongodb/main.js:14:1)
... 3 lines matching cause stack trace ...
at Module._load (node:internal/modules/cjs/loader:1021:12) {
dependencyName: 'iHateJavascript',
[Symbol(errorLabels)]: Set(0) {},
[cause]: Error: Cannot find module 'iHateJavascript'
Require stack:
- mongodb/main.js
at require (node:internal/modules/helpers:179:18)
at findOne (mongodb/main.js:5:5)
at Object.<anonymous> (mongodb/main.js:14:1) {
code: 'MODULE_NOT_FOUND',
requireStack: [ 'mongodb/main.js' ]
}
}
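A sketch of inspecting the new fields programmatically (MongoMissingDependencyError is exported from the mongodb package):
try {
  // e.g. an operation that needs an optional dependency that is not installed
  await client.connect();
} catch (error) {
  if (error instanceof MongoMissingDependencyError) {
    console.log('missing package:', error.dependencyName);
    console.log('underlying load error:', error.cause);
  }
}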
ServerDescription
Round Trip Time (RTT) measurement changes
(1) ServerDescription.roundTripTime
is now a moving average
Previously, ServerDescription.roundTripTime
was calculated as a weighted average of the most recently observed heartbeat duration and the previous duration. This update changes this behaviour to average ServerDescription.roundTripTime
over the last 10 observed heartbeats. This should reduce the likelihood that the selected server changes as a result of momentary spikes in server latency.
(2) Added minRoundTripTime
to ServerDescription
A new minRoundTripTime
property is now available on the ServerDescription
class which gives the minimum RTT over the last 10 heartbeats. Note that this value will be reported as 0 when fewer than 2 samples have been observed.
type
supported in SearchIndexDescription
It is now possible to specify the type of a search index when creating a search index:
const indexName = await collection.createSearchIndex({
name: 'my-vector-search-index',
// new! specifies that a `vectorSearch` index is created
type: 'vectorSearch',
definition: {
mappings: { dynamic: false }
}
});
Collection.findOneAndModify
's UpdateFilter.$currentDate
no longer throws on collections with limited schema
Example:
// collection has no schema
collection.findOneAndUpdate(
  {},
  {
    $currentDate: {
      lastModified: true
    }
  }
); // no longer throws a TS error
TopologyDescription
now properly stringifies itself to JSON
The TopologyDescription
class is exposed by the driver in server selection errors and topology monitoring events to provide insight into the driver's current representation of the server's topology and to aid in debugging. However, the TopologyDescription uses Map
s internally, which get serialized to {}
when JSON stringified. We recommend using Node's util.inspect()
helper to print topology descriptions because inspect
properly handles all JS types and all types we use in the driver. However, if JSON must be used, the TopologyDescription
now provides a custom toJSON()
hook:
client.on('topologyDescriptionChanged', ({ newDescription }) => {
// recommended!
console.log('topology description changed', inspect(newDescription, { depth: Infinity, colors: true }))
// now properly prints the entire topology description
console.log('topology description changed', JSON.stringify(newDescription))
});
Omit readConcern
and writeConcern
in Collection.listSearchIndexes
options argument
Important
readConcern
and writeConcern
are no longer viable keys in the options argument passed into Collection.listSearchIndexes
This type change is a correctness fix.
Collection.listSearchIndexes
is an Atlas specific method, and Atlas' search indexes do not support readConcern
and writeConcern
options. The types for this function now reflect this functionality.
Don't throw error when non-read operation in a transaction has a ReadPreferenceMode
other than 'primary'
The following error will now only be thrown when a user provides a ReadPreferenceMode
other than primary
and then tries to perform a command that involves a read:
new MongoTransactionError('Read preference in a transaction must be primary');
Prior to this change, the Node Driver would incorrectly throw this error even when the operation does not perform a read.
Note: a RunCommandOperation
is treated as a read operation for this error.
TopologyDescription.error
type is MongoError
Important
The TopologyDescription.error
property type is now MongoError
rather than MongoServerError
.
This type change is a correctness fix.
Before this change, the following errors that were not instances of MongoServerError
were already passed into TopologyDescription.error
at runtime:
- MongoNetworkError (excluding MongoNetworkRuntimeError)
- MongoError with a MongoErrorLabel.HandshakeError label
indexExists()
no longer supports the full
option
The Collection.indexExists()
helper supported an option, full
, that modified the internals of the method. When full
was set to true
, the driver would always return false
, regardless of whether or not the index exists.
The full option is intended to modify the return type of the index enumeration APIs (Collection.indexes() and Collection.indexInformation()), but since the return type of Collection.indexExists() is always a boolean, this option does not make sense for the Collection.indexExists() helper.
We have removed support for this option.
indexExists()
, indexes()
and indexInformation()
support cursor options in Typescript
These APIs have supported cursor options at runtime since the 4.x version of the driver, but our TypeScript definitions incorrectly omitted cursor options from these APIs.
Index information helpers have accurate Typescript return types
Collection.indexInformation()
, Collection.indexes()
and Db.indexInformation()
are helpers that return index information for a given collection or database. These helpers take an option, full
, that configures whether the return value contains full index descriptions or a compact summary:
collection.indexes({ full: true }); // returns an array of index descriptions
collection.indexes({ full: false }); // returns an object, mapping index names to index keys
However, the Typescript return type of these helpers was always Document
. Thanks to @prenaissance, these helpers now have accurate type information! The helpers return a new type, IndexDescriptionCompact | IndexDescriptionInfo[]
, which accurately reflects the return type of these helpers. The helpers also support type narrowing by providing a boolean literal as an option to the API:
collection.indexes(); // returns `IndexDescriptionCompact | IndexDescriptionInfo[]`
collection.indexes({ full: false }); // returns an `IndexDescriptionCompact`
collection.indexes({ full: true }); // returns an `IndexDescriptionInfo[]`
collection.indexInformation(); // returns `IndexDescriptionCompact | IndexDescriptionInfo[]`
collection.indexInformation({ full: false }); // returns an `IndexDescriptionCompact`
collection.indexInformation({ full: true }); // returns an `IndexDescriptionInfo[]`
db.indexInformation(); // returns `IndexDescriptionCompact | IndexDescriptionInfo[]`
db.indexInformation({ full: false }); // returns an `IndexDescriptionCompact`
db.indexInformation({ full: true }); // returns an `IndexDescriptionInfo[]`
AWS credentials with expirations no longer throw when using on-demand AWS KMS credentials
In addition to letting users provide KMS credentials manually, client-side encryption supports fetching AWS KMS credentials on-demand using the AWS SDK. However, AWS credential mechanisms that returned access keys with expiration timestamps caused the driver to throw an error.
The driver will no longer throw an error when receiving an expiration token from the AWS SDK.
ClusterTime
interface signature
optionality
The ClusterTime
interface incorrectly reported the signature
field as required; the server may omit it, so the TypeScript has been updated to reflect reality.
Summary
Features
- NODE-3639: add a general stage to the aggregation pipeline builder (#4079) (8fca1aa)
- NODE-5678: add options parsing support for timeoutMS and defaultTimeoutMS (#4068) (ddd1e81)
- NODE-5762: include cause and package name for all MongoMissingDependencyErrors (#4067) (62ea94b)
- NODE-5825: add minRoundTripTime to ServerDescription and change roundTripTime to a moving average ([#40...
v6.5.0
6.5.0 (2024-03-11)
The MongoDB Node.js team is pleased to announce version 6.5.0 of the mongodb
package!
Release Notes
Bulk Write Operations Generate Ids using pkFactory
When performing inserts, the driver automatically generates _id
s for each document if there is no _id
present. By default, the driver generates ObjectId
s. An option, pkFactory
, can be used to configure the driver to generate _id
s that are not object ids.
For a long time, only Collection.insert
and Collection.insertMany
actually used the pkFactory
, if configured. Notably, Collection.bulkWrite(), Collection.initializeOrderedBulkOp() and Collection.initializeUnorderedBulkOp() always generated ObjectIds, regardless of what was configured on the collection.
The driver now always generates _ids for inserted documents using the pkFactory.
Caution
If you are using a pkFactory
and performing bulk writes, you may have inserted data into your database that does not have _id
s generated by the pkFactory
.
Fixed applying read preference to commands depending on topology
When connecting to a secondary in a replica set with a direct connection, the driver now attaches a read preference of primaryPreferred to read commands.
Fixed memory leak in Connection layer
The Connection class has recently been refactored to operate on our socket operations using promises. An oversight in how we made async network operations interruptible created new promises for every operation. We've simplified the approach and corrected the leak.
Query SRV and TXT records in parallel
When connecting using a convenient SRV connection string (mongodb+srv://
), hostnames are obtained from an SRV DNS lookup and some configuration options are obtained from a TXT DNS query. Those DNS operations are now performed in parallel to reduce first-time connection latency.
Container and Kubernetes Awareness
The Node.js driver now keeps track of container metadata in the client.env.container
field of the handshake document.
If space allows, the following metadata will be included in client.env.container
:
env?: {
container?: {
orchestrator?: 'kubernetes' // if process.env.KUBERNETES_SERVICE_HOST is set
runtime?: 'docker' // if the '/.dockerenv' file exists
}
}
Note: If neither Kubernetes nor Docker is present, client.env
will not have the container
property.
Add property errorResponse
to MongoServerError
The MongoServerError maps keys from the error document returned by the server onto itself. There are some use cases where it is desirable to obtain the original error document in isolation. Now, the mongoServerError.errorResponse
property stores a reference to the error document returned by the server.
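For example:
try {
  await collection.insertOne({ _id: 1 });
  await collection.insertOne({ _id: 1 }); // duplicate key error
} catch (error) {
  if (error instanceof MongoServerError) {
    // the raw error document exactly as returned by the server
    console.log(error.errorResponse);
  }
}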
Deprecated unused CloseOptions
interface
The CloseOptions
interface was unintentionally made public and was only intended for use in the driver's internals. Due to recent refactoring (NODE-5915), this interface is no longer used in the driver. Since it was marked public, out of an abundance of caution we will not be removing it outside of a major version, but we have deprecated it and will be removing it in the next major version.
Features
- NODE-5968: container and Kubernetes awareness in client metadata (#4005) (28b7040)
- NODE-5988: Provide access to raw results doc on MongoServerError (#4016) (c023242)
- NODE-6008: deprecate CloseOptions interface (#4030) (f6cd8d9)
Bug Fixes
- NODE-5636: generate _ids using pkFactory in bulk write operations (#4025) (fbb5059)
- NODE-5981: read preference not applied to commands properly (#4010) (937c9c8)
- NODE-5985: throw Node.js' certificate expired error when TLS fails to connect instead of CERT_HAS_EXPIRED (#4014) (057c223)
- NODE-5993: memory leak in the Connection class (#4022) (69de253)
Performance Improvements
Documentation
We invite you to try the mongodb
library immediately, and report any issues to the NODE project.