MongoDB Reference Manual Master
Release 3.2.1
MongoDB, Inc.
MongoDB, Inc. 2008 - 2015. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States License.
Contents
2 Interfaces Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  19
2.1 mongo Shell Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  19
2.2 Database Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
2.3 Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
2.4 Aggregation Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
1.1 License . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   3
1.2 Editions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   3
1.3 Version and Revisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   4
1.4 Report an Issue or Make a Change Request . . . . . . . . . . . . . . . . . . . . . . . . . . . .   4
1.5 Contribute to the Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   5
5 Internal Metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
5.1 Config Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
5.2 The local Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 882
5.3 System Collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
7 Release Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 971
7.1 Current Stable Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 971
7.2 Previous Stable Releases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 999
This document contains all of the reference material from the MongoDB Manual, reflecting the 3.2.1 release. See
the full manual for complete documentation of MongoDB, its operation, and use.
CHAPTER 1
On this page
License (page 3)
Editions (page 3)
Version and Revisions (page 4)
Report an Issue or Make a Change Request (page 4)
Contribute to the Documentation (page 5)
The MongoDB Manual1 contains comprehensive documentation on MongoDB. This page describes the manual's
licensing, editions, and versions, and describes how to make a change request and how to contribute to the manual.
1.1 License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States License2
MongoDB, Inc. 2008-2016
1.2 Editions
In addition to the MongoDB Manual3 , you can also access this content in the following editions:
PDF Format4 (without reference).
HTML tar.gz5
ePub Format6
You also can access PDF files that contain subsets of the MongoDB Manual:
MongoDB Reference Manual7
1 http://docs.mongodb.org/manual/#
2 http://creativecommons.org/licenses/by-nc-sa/3.0/us/
3 http://docs.mongodb.org/manual/#
4 http://docs.mongodb.org/master/MongoDB-manual.pdf
5 http://docs.mongodb.org/master/manual.tar.gz
6 http://docs.mongodb.org/master/MongoDB-manual.epub
7 http://docs.mongodb.org/master/MongoDB-reference-manual.pdf
and stable version of the manual is always available at
2011-09-27: Document created with a (very) rough list of style guidelines, conventions, and questions.
2012-01-12: Document revised based on slight shifts in practice, and as part of an effort of making it easier for people
outside of the documentation team to contribute to documentation.
2012-03-21: Merged in content from the Jargon, and cleaned up style in light of recent experiences.
2012-08-10: Addition to the Referencing section.
2013-02-07: Migrated this document to the manual. Added map-reduce terminology convention. Other edits.
2013-11-15: Added new table of preferred terms.
2016-01-05: Standardizing on embedded document.
Naming Conventions
This section contains guidelines on naming files, sections, documents and other document elements.
File Naming Conventions:
For Sphinx, all files should have a .txt extension.
Separate words in file names with hyphens (i.e. -.)
For most documents, file names should have a terse one or two word name that describes the material covered in the document.
Allow the path of the file within the document tree to add some of the required context/categorization.
For example, it is acceptable to have https://docs.mongodb.org/manual/core/sharding.rst and
https://docs.mongodb.org/manual/administration/sharding.rst.
For tutorials, the full title of the document should be in the file name. For example:
https://docs.mongodb.org/manual/tutorial/replace-one-configuration-server-in-a-shar
Phrase headlines and titles so users can determine what questions the text will answer, and what material it
will address, without needing to read the content. This shortens the amount of time that people spend looking
for answers, and improves search/scanning, and possibly SEO.
Prefer titles and headers in the form of Using foo over How to Foo.
When using target references (i.e. :ref: references in documents), use names that include enough context to
be intelligible through all documentation. For example, use replica-set-secondary-only-node as
opposed to secondary-only-node. This makes the source more usable and easier to maintain.
Style Guide
This includes the local typesetting, English grammar, conventions, and preferences that all documents in the manual
should use. The goal here is to choose good standards that are clear and have a stylistic minimalism that does not
interfere with or distract from the content. A uniform style will improve the user experience and minimize the effect
of a multi-authored document.
Punctuation
Use the Oxford comma.
Oxford commas are the commas in a list of things (e.g. something, something else, and another thing) before
the conjunction (e.g. and or or).
Do not add two spaces after terminal punctuation, such as periods.
Place commas and periods inside quotation marks.
Headings Use title case for headings and document titles. Title case capitalizes the first letter of the first, last, and
all significant words.
Verbs Verb tense and mood preferences, with examples:
Avoid the first person. For example, do not say, "We will begin the backup process by locking the database," or
"I begin the backup process by locking my database instance."
Use the second person: "If you need to back up your database, start by locking the database first." In practice,
however, it's more concise to imply the second person using the imperative, as in "Before initiating a backup,
lock the database."
When indicated, use the imperative mood. For example: "Back up your databases often" and "To prevent data
loss, back up your databases."
The future perfect is also useful in some cases. For example, "Creating disk snapshots without locking the
database will lead to an invalid state."
Avoid helper verbs, as possible, to increase clarity and concision. For example, attempt to avoid "this does
foo" and "this will do foo" when possible. Use "does foo" over "will do foo" in situations where "this foos" is
unacceptable.
Referencing
To refer to future or planned functionality in MongoDB or a driver, always link to the Jira case. The Manual's
conf.py provides an :issue: role that links directly to a Jira case (e.g. :issue:`SERVER-9001`).
For non-object references (i.e. functions, operators, methods, database commands, settings) always reference
only the first occurrence of the reference in a section. You should always reference objects, except in section
headings.
Structure references with the why first; the link second.
General Formulations
Contractions are acceptable insofar as they are necessary to increase readability and flow. Avoid otherwise.
Make lists grammatically correct.
Do not use a period after every item unless the list item completes the unfinished sentence before the list.
Use appropriate commas and conjunctions in the list items.
Typically begin a bulleted list with an introductory sentence or clause, with a colon or comma.
The following terms are one word:
standalone
workflow
Use unavailable, offline, or unreachable to refer to a mongod instance that cannot be accessed. Do not
use the colloquialism down.
Always write out units (e.g. megabytes) rather than using abbreviations (e.g. MB.)
Structural Formulations
There should be at least two headings at every nesting level. Within an h2 block, there should be either: no
h3 blocks, 2 h3 blocks, or more than 2 h3 blocks.
Section headers are in title case (capitalize first, last, and all important words) and should effectively describe
the contents of the section. In a single document you should strive to have section titles that are not redundant
and grammatically consistent with each other.
Use paragraphs and paragraph breaks to increase clarity and flow. Avoid burying critical information in the
middle of long paragraphs. Err on the side of shorter paragraphs.
Prefer shorter sentences to longer sentences. Use complex formations only as a last resort, if at all (e.g. compound complex structures that require semi-colons).
Avoid paragraphs that consist of single sentences as they often represent a sentence that has unintentionally
become too complex or incomplete. However, sometimes such paragraphs are useful for emphasis, summary,
or introductions.
As a corollary, most sections should have multiple paragraphs.
For longer lists and more complex lists, use bulleted items rather than integrating them inline into a sentence.
Do not expect that the content of any example (inline or blocked) will be self explanatory. Even when it feels
redundant, make sure that the function and use of every example is clearly described.
ReStructured Text and Typesetting
Place spaces between nested parentheticals and elements in JavaScript examples. For example, prefer { [ a,
a, a ] } over {[a,a,a]}.
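As a sketch (variable names invented; runnable in Node.js or the mongo shell), both forms parse to the same value, so the preference is purely about readability:

```javascript
// Preferred spacing in documentation examples:
var spaced = { key: [ 1, 2, 3 ] };

// Dispreferred, though semantically identical:
var compact = {key:[1,2,3]};

// Both literals produce the same structure.
console.log(JSON.stringify(spaced) === JSON.stringify(compact)); // true
```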
For underlines associated with headers in RST, use:
= for heading level 1 or h1s. Use underlines and overlines for document titles.
- for heading level 2 or h2s.
~ for heading level 3 or h3s.
` for heading level 4 or h4s.
Use hyphens (-) to indicate items of a bulleted list.
Place footnotes and other references, if you use them, at the end of a section rather than the end of a file.
Use the footnote format that includes automatic numbering and a target name for ease of use. For instance a
footnote tag may look like: [#note]_ with the corresponding directive holding the body of the footnote that
resembles the following: .. [#note].
Do not include .. code-block:: [language] in footnotes.
As it makes sense, use the .. code-block:: [language] form to insert literal blocks into the text.
While the double colon, ::, is functional, the .. code-block:: [language] form makes the source
easier to read and understand.
For all mentions of referenced types (i.e. commands, operators, expressions, functions, statuses, etc.) use the
reference types to ensure uniform formatting and cross-referencing.
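Taken together, the typesetting conventions above might look like the following reStructuredText sketch (the section names, footnote tag, and command shown are illustrative only, not taken from the manual's source):

```rst
===============
Document Title
===============

Backup Strategies
-----------------

Snapshot Backups
~~~~~~~~~~~~~~~~

To create a snapshot, lock the database first. [#lock]_

.. code-block:: javascript

   db.fsyncLock()

.. [#lock] Locking prevents writes during the snapshot.
```

Note the overline and underline on the document title, the footnote with an automatic number and a target name, and the explicit .. code-block:: javascript form rather than the bare double-colon literal block.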
Preferred Term      Dispreferred Alternatives                                        Notes
document            record, object, row
instance            process (acceptable sometimes), node (never acceptable), server
field name          key, column
field/value                                                                          The name/value pair that describes a unit of data in MongoDB.
value               data
MongoDB             mongo, mongodb, cluster
embedded document   nested document                                                  An embedded or nested document within a document or an array.
mapReduce           mapreduce, map reduce, map/reduce
cluster             grid, shard cluster, set, deployment                             A sharded cluster.
sharded cluster     shard cluster, cluster, sharded system
replica set         set, replication deployment
deployment          cluster, system
Geo-Location
1. While MongoDB is capable of storing coordinates in embedded documents, in practice, users should only
store coordinates in arrays. (See: DOCS-4132 .)
MongoDB Documentation Practices and Processes
This document provides an overview of the practices and processes.
Commits
When relevant, include a Jira case identifier in a commit message. Reference documentation cases when applicable,
but feel free to reference other cases from jira.mongodb.org33 .
Err on the side of creating a larger number of discrete commits rather than bundling a large set of changes into one
commit.
32 https://jira.mongodb.org/browse/DOCS-41
33 http://jira.mongodb.org/
For the sake of consistency, remove trailing whitespaces in the source file.
Hard wrap files to between 72 and 80 characters per-line.
Standards and Practices
At least two people should vet all non-trivial changes to the documentation before publication. One of the
reviewers should have significant technical experience with the material covered in the documentation.
All development and editorial work should transpire on GitHub branches or forks that editors can then merge
into the publication branches.
Collaboration
Building the documentation is useful because Sphinx37 and docutils can catch numerous errors in the format and
syntax of the documentation. Additionally, having access to the documentation as it will appear to users provides
a more effective basis for the review process. Besides Sphinx, Pygments, and Python-Docutils, the documentation
repository contains all requirements for building the documentation resource.
Talk to someone on the documentation team if you are having problems running builds yourself.
Publication
The makefile for this repository contains targets that automate the publication process. Use make html to publish
a test build of the documentation in the build/ directory of your repository. Use make publish to build the full
contents of the manual from the current branch in the ../public-docs/ directory relative to the docs repository.
Other targets include:
man - builds UNIX manual pages for all MongoDB utilities.
push - builds and deploys the contents of ../public-docs/.
pdfs - builds a PDF version of the manual (requires LaTeX dependencies).
Branches
This section provides an overview of the git branches in the MongoDB documentation repository and their use.
34 https://jira.mongodb.org/browse/DOCS
35 https://github.com/
36 https://github.com/mongodb/docs
37 http://sphinx.pocoo.org/
At the present time, future work transpires in the master branch, with the main publication being the current
branch. As the documentation stabilizes, the documentation team will begin to maintain branches of the documentation
for specific MongoDB releases.
Migration from Legacy Documentation
The MongoDB.org Wiki contains a wealth of information. As the transition to the Manual (i.e. this project and
resource) continues, it's critical that no information disappears or goes missing. The following process outlines how
to migrate a wiki page to the manual:
1. Read the relevant sections of the Manual, and see what the new documentation has to offer on a specific topic.
In this process you should follow cross references and gain an understanding of both the underlying information
and how the parts of the new content relate to one another.
2. Read the wiki page you wish to redirect, and take note of all of the factual assertions and examples presented by
the wiki page.
3. Test the factual assertions of the wiki page to the greatest extent possible. Ensure that example output is accurate.
In the case of commands and reference material, make sure that documented options are accurate.
4. Make corrections to the manual page or pages to reflect any missing pieces of information.
The target of the redirect need not contain every piece of information on the wiki page, if the manual as a
whole does, and relevant section(s) with the information from the wiki page are accessible from the target of the
redirection.
5. As necessary, get these changes reviewed by another writer and/or someone familiar with the area of the information
in question.
At this point, update the relevant Jira case with the target that you've chosen for the redirect, and make the ticket
unassigned.
6. When someone has reviewed the changes and published those changes to Manual, you, or preferably someone
else on the team, should make a final pass at both pages with fresh eyes and then make the redirect.
Steps 1-5 should ensure that no information is lost in the migration, so that the final review in step 6 is trivial
to complete.
Review Process
Types of Review The content in the Manual undergoes many types of review, including the following:
Initial Technical Review Review by an engineer familiar with MongoDB and the topic area of the documentation.
This review focuses on technical content, and correctness of the procedures and facts presented, but can improve any
aspect of the documentation that may still be lacking. When both the initial technical review and the content review
are complete, the piece may be published.
Content Review Textual review by another writer to ensure stylistic consistency with the rest of the manual. Depending on the content, this may precede or follow the initial technical review. When both the initial technical review
and the content review are complete, the piece may be published.
Consistency Review This occurs post-publication and is content focused. The goals of consistency reviews are to
increase the internal consistency of the documentation as a whole. Insert relevant cross-references, update the style as
needed, and provide background fact-checking.
Consistency reviews should be as systematic as possible, and we should avoid encouraging stylistic and information
drift by editing only small sections at a time.
Subsequent Technical Review If the documentation needs to be updated following a change in functionality of the
server or following the resolution of a user issue, changes may be significant enough to warrant additional technical
review. These reviews follow the same form as the initial technical review, but are often less involved and cover a
smaller area.
Review Methods If you're not a usual contributor to the documentation and would like to review something, you
can submit reviews in any of the following ways:
If you're reviewing an open pull request in GitHub, the best way to comment is on the overview diff, which
you can find by clicking on the diff button in the upper left portion of the screen. You can also use the
following URL to reach this interface:
https://github.com/mongodb/docs/pull/[pull-request-id]/files
Replace [pull-request-id] with the identifier of the pull request. Make all comments inline, using
GitHub's comment system.
You may also provide comments directly on commits, or on the pull request itself, but these commit comments
are archived in less coherent ways and generate less useful emails, while comments on the pull request lead to
less specific changes to the document.
Leave feedback on Jira cases in the DOCS38 project. These are better for more general changes that aren't
necessarily tied to a specific line, or that affect multiple files.
Create a fork of the repository in your GitHub account, make any required changes and then create a pull request
with your changes.
If you insert lines that begin with any of the following annotations:
.. TODO:
TODO:
.. TODO
TODO
followed by your comments, it will be easier for the original writer to locate your comments. The two dots ..
format is a comment in reStructured Text, which will hide your comments from Sphinx and publication if you're
worried about that.
This format is often easier for reviewers with larger portions of content to review.
MongoDB Manual Organization
This document provides an overview of the global organization of the documentation resource. Refer to the notes
below if you are having trouble understanding the reasoning behind a file's current location, or if you want to add new
documentation but aren't sure how to integrate it into the existing resource.
If you have questions, don't hesitate to open a ticket in the Documentation Jira Project39 or contact the documentation
team40 .
38 http://jira.mongodb.org/browse/DOCS
39 https://jira.mongodb.org/browse/DOCS
40 [email protected]
Global Organization
Indexes and Experience The documentation project has two index files:
https://docs.mongodb.org/manual/contents.txt and https://docs.mongodb.org/manual/index.txt.
The contents file provides the documentation's tree structure, which Sphinx uses to create the left-pane navigational
structure, to power the Next and Previous page functionality, and to provide all overarching outlines of the
resource. The index file is not included in the contents file (and thus builds will produce a warning here) and is
the page that users first land on when visiting the resource.
Having separate contents and index files provides a bit more flexibility with the organization of the resource while
also making it possible to customize the primary user experience.
Topical Organization The placement of files in the repository depends on the type of documentation rather than the
topic of the content. Like the difference between contents.txt and index.txt, decoupling the organization
of the files from the organization of the information makes the documentation more flexible and better able to
address changes in the product and in users' needs.
Files in the source/ directory represent the tip of a logical tree of documents, while directories are containers of
types of content. The administration and applications directories, however, are legacy artifacts and with a
few exceptions contain sub-navigation pages.
With several exceptions in the reference/ directory, there is only one level of sub-directories in the source/
directory.
Tools
The organization of the site, like that of all Sphinx sites, derives from the toctree structure. However, in order to
annotate the table of contents and provide additional flexibility, the MongoDB documentation generates toctree
structures using data from YAML files stored in the source/includes/ directory. These files start with ref-toc
or toc and generate output in the source/includes/toc/ directory. Briefly, this system has the following behavior:
files that start with ref-toc refer to the documentation of API objects (i.e. commands, operators and methods),
and the build system generates files that hold toctree directives as well as files that hold tables that list objects
and a brief description.
files that start with toc refer to all other documentation, and the build system generates files that hold toctree
directives as well as files that hold definition lists that contain links to the documents and short descriptions of
the content.
file names that have spec following toc or ref-toc will generate aggregated tables or definition lists and
allow ad-hoc combinations of documents for landing pages and quick reference guides.
MongoDB Documentation Build System
This document contains more direct instructions for building the MongoDB documentation.
Getting Started
Install Dependencies The MongoDB Documentation project depends on the following tools:
Python
Git
Inkscape (Image generation.)
Feel free to use pip rather than easy_install to install Python packages.
To generate the images used in the documentation, download and install Inkscape42 .
Optional
To generate PDFs for the full production build, install a TeX distribution. If you do not have a LaTeX installation,
use MacTeX43 . This is only required to build PDFs.
Arch Linux Install packages from the system repositories with the following command:
pacman -S inkscape python2-pip
Optional
To generate PDFs for the full production build, install the following packages from the system repository:
pacman -S texlive-bin texlive-core texlive-latexextra
Debian/Ubuntu Install the required system packages with the following command:
apt-get install inkscape python-pip
Optional
To generate PDFs for the full production build, install the following packages from the system repository:
apt-get install texlive-latex-recommended
The MongoDB documentation build system is entirely accessible via make targets. For example, to build an HTML
version of the documentation issue the following command:
make html
You can find the build output in build/<branch>/html, where <branch> is the name of the current branch.
In addition to the html target, the build system provides the following targets:
publish Builds and integrates all output for the production build. Build output is in
build/public/<branch>/. When you run publish in the master branch, the build will generate
some output in build/public/.
push; stage Uploads the production build to the production or staging web servers. Depends on publish. Requires access to the production or staging environment.
push-all; stage-all Uploads the entire content of build/public/ to the web servers. Depends on
publish. Not used in common practice.
push-with-delete; stage-with-delete Modifies the action of push and stage to remove remote files
that don't exist in the local build. Use with caution.
html; latex; dirhtml; epub; texinfo; man; json These are standard targets derived from the default
Sphinx Makefile, with adjusted dependencies. Additionally, for all of these targets you can append -nitpick
to increase Sphinx's verbosity, or -clean to remove all Sphinx build artifacts.
latex performs several additional post-processing steps on .tex output generated by Sphinx. This target will
also compile PDFs using pdflatex.
html and man also generate a .tar.gz file of the build outputs for inclusion in the final releases.
If you have any questions, please feel free to open a Jira Case44 .
44 https://jira.mongodb.org/browse/DOCS
CHAPTER 2
Interfaces Reference
JavaScript in MongoDB
Although these methods use JavaScript, most interactions with MongoDB do not use JavaScript but use an
idiomatic driver in the language of the interacting application.
2.1.1 Collection
Collection Methods
Name                                        Description
db.collection.aggregate() (page 20)         Provides access to the aggregation pipeline.
db.collection.bulkWrite() (page 24)         Provides bulk write operation functionality.
db.collection.count() (page 32)             Wraps count (page 306) to return a count of the number of documents in a collection.
db.collection.copyTo() (page 35)            Deprecated. Wraps eval (page 357) to copy data between collections.
db.collection.createIndex() (page 36)       Builds an index on a collection.
db.collection.dataSize() (page 39)          Returns the size of the collection. Wraps the size (page 474) field.
Name
db.collection.deleteOne() (page 39)
db.collection.deleteMany() (page 41)
db.collection.distinct() (page 43)
db.collection.drop() (page 45)
db.collection.dropIndex() (page 46)
db.collection.dropIndexes() (page 47)
db.collection.ensureIndex() (page 47)
db.collection.explain() (page 48)
db.collection.find() (page 51)
db.collection.findAndModify() (page 57)
db.collection.findOne() (page 61)
db.collection.findOneAndDelete() (page 63)
db.collection.findOneAndReplace() (page 66)
db.collection.findOneAndUpdate() (page 69)
db.collection.getIndexes() (page 72)
db.collection.getShardDistribution() (page 73)
db.collection.getShardVersion() (page 74)
db.collection.group() (page 75)
db.collection.insert() (page 78)
db.collection.insertOne() (page 82)
db.collection.insertMany() (page 84)
db.collection.isCapped() (page 89)
db.collection.mapReduce() (page 89)
db.collection.reIndex() (page 97)
db.collection.replaceOne() (page 98)
db.collection.remove() (page 100)
db.collection.renameCollection() (page 103)
db.collection.save() (page 104)
db.collection.stats() (page 106)
db.collection.storageSize() (page 116)
db.collection.totalSize() (page 116)
db.collection.totalIndexSize() (page 116)
db.collection.update() (page 116)
db.collection.updateOne() (page 124)
db.collection.updateMany() (page 128)
db.collection.validate() (page 131)
db.collection.aggregate()
On this page
Definition (page 20)
Behavior (page 22)
Examples (page 22)
Definition
db.collection.aggregate(pipeline, options)
Calculates aggregate values for the data in a collection.
20
param array pipeline A sequence of data aggregation operations or stages. See the aggregation
pipeline operators (page 622) for details.
Changed in version 2.6: The method can still accept the pipeline stages as separate arguments
instead of as elements in an array; however, if you do not specify the pipeline as an array,
you cannot specify the options parameter.
param document options Optional. Additional options that aggregate() (page 20) passes to
the aggregate (page 302) command.
New in version 2.6: Available only if you specify the pipeline as an array.
The options document can contain the following fields and values:
field boolean explain Optional. Specifies to return the information on the processing of the pipeline.
See Return Information on Aggregation Pipeline Operation (page 23) for an example.
New in version 2.6.
field boolean allowDiskUse Optional. Enables writing to temporary files. When set to true, aggregation operations can write data to the _tmp subdirectory in the dbPath (page 907) directory. See Perform Large Sort Operation with External Sort (page 23) for an example.
New in version 2.6.
field document cursor Optional. Specifies the initial batch size for the cursor. The value of the
cursor field is a document with the field batchSize. See Specify an Initial Batch Size
(page 23) for syntax and example.
New in version 2.6.
field boolean bypassDocumentValidation Optional.
(page 648) aggregation operator.
See also:
For more information, see https://docs.mongodb.org/manual/core/aggregation-pipeline, Aggregation Reference (page 738), https://docs.mongodb.org/manual/core/aggregation-pipeline-limits,
and aggregate (page 302).
Examples The following examples use the collection orders, which contains the following documents:
{ _id: 1, cust_id: "abc1", ord_date: ISODate("2012-11-02T17:04:11.102Z"), status: "A", amount: 50 }
{ _id: 2, cust_id: "xyz1", ord_date: ISODate("2013-10-01T17:04:11.102Z"), status: "A", amount: 100 }
{ _id: 3, cust_id: "xyz1", ord_date: ISODate("2013-10-12T17:04:11.102Z"), status: "D", amount: 25 }
{ _id: 4, cust_id: "xyz1", ord_date: ISODate("2013-10-11T17:04:11.102Z"), status: "D", amount: 125 }
{ _id: 5, cust_id: "abc1", ord_date: ISODate("2013-11-12T17:04:11.102Z"), status: "A", amount: 25 }
Group by and Calculate a Sum The following aggregation operation selects documents with status equal to "A", groups the matching documents by the cust_id field, calculates the total for each cust_id from the sum of the amount field, and sorts the results by the total field in descending order:
db.orders.aggregate([
   { $match: { status: "A" } },
   { $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
   { $sort: { total: -1 } }
])
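For intuition, the result of this pipeline can be mimicked in plain JavaScript using the sample documents above (a sketch only; the server does not execute aggregations this way):

```javascript
// Sketch: mimicking the $match/$group/$sum/$sort pipeline in plain
// JavaScript, for intuition only.
var orders = [
  { _id: 1, cust_id: "abc1", status: "A", amount: 50 },
  { _id: 2, cust_id: "xyz1", status: "A", amount: 100 },
  { _id: 3, cust_id: "xyz1", status: "D", amount: 25 },
  { _id: 4, cust_id: "xyz1", status: "D", amount: 125 },
  { _id: 5, cust_id: "abc1", status: "A", amount: 25 }
];

var totals = {};
orders.filter(function (o) { return o.status === "A"; })   // $match
      .forEach(function (o) {                              // $group + $sum
        totals[o.cust_id] = (totals[o.cust_id] || 0) + o.amount;
      });

var results = Object.keys(totals)
  .map(function (id) { return { _id: id, total: totals[id] }; })
  .sort(function (a, b) { return b.total - a.total; });    // $sort, descending
```

For the sample data, this yields xyz1 with a total of 100 followed by abc1 with a total of 75, matching what the aggregation pipeline returns.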
The mongo (page 794) shell iterates the returned cursor automatically to print the results. See https://docs.mongodb.org/manual/tutorial/iterate-a-cursor for handling cursors manually in the mongo (page 794) shell.
Return Information on Aggregation Pipeline Operation The following aggregation operation sets the option
explain to true to return information about the aggregation operation.
db.orders.aggregate(
[
{ $match: { status: "A" } },
{ $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
{ $sort: { total: -1 } }
],
{
explain: true
}
)
The operation returns a cursor with the document that contains detailed information regarding the processing of the
aggregation pipeline. For example, the document may show, among other details, which index, if any, the operation
used. 1 If the orders collection is a sharded collection, the document would also show the division of labor between
the shards and the merge operation, and for targeted queries, the targeted shards.
Note: The intended readers of the explain output document are humans, and not machines, and the output format
is subject to change between releases.
The mongo (page 794) shell iterates the returned cursor automatically to print the results. See https://docs.mongodb.org/manual/tutorial/iterate-a-cursor for handling cursors manually in the mongo (page 794) shell.
Perform Large Sort Operation with External Sort Aggregation pipeline stages have a maximum memory use limit. To handle large datasets, set the allowDiskUse option to true to enable writing data to temporary files, as in the following example:
var results = db.stocks.aggregate(
[
{ $project : { cusip: 1, date: 1, price: 1, _id: 0 } },
{ $sort : { cusip : 1, date: 1 } }
],
{
allowDiskUse: true
}
)
Specify an Initial Batch Size To specify an initial batch size for the cursor, use the following syntax for the cursor
option:
cursor: { batchSize: <int> }
For example, the following aggregation operation specifies the initial batch size of 0 for the cursor:
1 index-filters can affect the choice of index used. See index-filters for details.
db.orders.aggregate(
   [
      { $match: { status: "A" } },
      { $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
      { $sort: { total: -1 } },
      { $limit: 2 }
   ],
   {
      cursor: { batchSize: 0 }
   }
)
A batchSize of 0 means an empty first batch and is useful for quickly returning a cursor or failure message
without doing significant server-side work. Specify subsequent batch sizes to OP_GET_MORE operations as with
other MongoDB cursors.
The mongo (page 794) shell iterates the returned cursor automatically to print the results. See https://docs.mongodb.org/manual/tutorial/iterate-a-cursor for handling cursors manually in the mongo (page 794) shell.
Override readConcern The following operation on a replica set specifies a https://docs.mongodb.org/manual/reference/read-concern of "majority" to read the most recent copy of the data confirmed as having been written to a majority of the nodes.
Note:
To use a read concern level of "majority", you must use the WiredTiger storage engine and start the mongod (page 762) instances with the --enableMajorityReadConcern (page 773) command line option (or the replication.enableMajorityReadConcern (page 914) setting if using a configuration file).
Only replica sets using protocol version 1 support "majority" read concern. Replica sets running protocol version 0 do not support "majority" read concern.
To use a https://docs.mongodb.org/manual/reference/read-concern level of "majority", you cannot include the $out (page 648) stage.
Regardless of the read concern level, the most recent data on a node may not reflect the most recent version of the data in the system.
db.restaurants.aggregate(
[ { $match: { rating: { $lt: 5 } } } ],
{ readConcern: { level: "majority" } }
)
db.collection.bulkWrite()
On this page
Definition (page 24)
Behavior (page 25)
Examples (page 28)
Definition
db.collection.bulkWrite()
New in version 3.2.
Performs multiple write operations with controls for order of execution.
bulkWrite() (page 24) has the following syntax:
db.collection.bulkWrite(
[ <operation 1>, <operation 2>, ... ],
{
writeConcern : <document>,
ordered : <boolean>
}
)
insertOne insertOne inserts a single document into the collection. See db.collection.insertOne().
db.collection.bulkWrite( [
   { insertOne : { "document" : <document> } }
] )
updateOne and updateMany updateOne updates a single document in the collection that matches the filter. If multiple documents match, updateOne will update the first matching document only. See db.collection.updateOne() (page 124).
db.collection.bulkWrite( [
{ updateOne :
{
"filter" : <document>,
"update" : <document>,
"upsert" : <boolean>
}
}
] )
updateMany updates all documents in the collection that match the filter. See db.collection.updateMany().
db.collection.bulkWrite( [
{ updateMany :
{
"filter" : <document>,
"update" : <document>,
"upsert" : <boolean>
}
}
] )
Use query selectors (page 519) such as those used with find() (page 51) for the filter field.
Use Update Operators (page 586) such as $set (page 592), $unset (page 594), or $rename (page 590) for the
update field.
By default, upsert is false.
replaceOne replaceOne replaces a single document in the collection that matches the filter. If multiple documents match, replaceOne will replace the first matching document only. See db.collection.replaceOne() (page 98).
db.collection.bulkWrite([
{ replaceOne :
{
"filter" : <document>,
"replacement" : <document>,
"upsert" : <boolean>
}
}
] )
Use query selectors (page 519) such as those used with find() (page 51) for the filter field.
The replacement field cannot contain update operators (page 586).
By default, upsert is false.
deleteOne and deleteMany deleteOne deletes a single document in the collection that matches the filter. If multiple documents match, deleteOne will delete the first matching document only. See db.collection.deleteOne() (page 39).
db.collection.bulkWrite([
{ deleteOne : { "filter" : <document> } }
] )
deleteMany deletes all documents in the collection that match the filter. See db.collection.deleteMany()
(page 41).
db.collection.bulkWrite([
{ deleteMany : { "filter" : <document> } }
] )
Use query selectors (page 519) such as those used with find() (page 51) for the filter field.
_id Field If the document does not specify an _id field, then mongod (page 762) adds the _id field and assigns a unique https://docs.mongodb.org/manual/reference/object-id for the document before inserting or upserting it. Most drivers create an ObjectId and insert the _id field, but the mongod (page 762) will create and populate the _id if the driver or application does not.
If the document contains an _id field, the _id value must be unique within the collection to avoid duplicate key error.
Update or replace operations cannot specify an _id value that differs from the original document.
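As an illustrative sketch of this client-side behavior (the id generator below is a simplified stand-in for the real ObjectId algorithm, not a driver's actual implementation):

```javascript
// Sketch: how a driver might assign a missing _id before sending an
// insert. fakeObjectId() is a simplified illustration, not the real
// ObjectId algorithm (which encodes timestamp, machine, pid, counter).
function fakeObjectId() {
  // 8 hex chars of seconds-since-epoch + 16 hex chars of randomness
  var ts = Math.floor(Date.now() / 1000).toString(16).padStart(8, "0");
  var rand = "";
  for (var i = 0; i < 16; i++) {
    rand += Math.floor(Math.random() * 16).toString(16);
  }
  return ts + rand; // 24 hex characters, like an ObjectId string
}

function ensureId(doc) {
  if (!("_id" in doc)) {
    doc._id = fakeObjectId();
  }
  return doc;
}
```

A document that already carries an _id passes through unchanged; only documents without one receive a generated value.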
Execution of Operations The ordered parameter specifies whether bulkWrite() (page 24) will execute operations in order or not. By default, operations are executed in order.
The following code represents a bulkWrite() (page 24) with six operations.
db.collection.bulkWrite(
[
{ insertOne : <document> },
{ updateOne : <document> },
{ updateMany : <document> },
{ replaceOne : <document> },
{ deleteOne : <document> },
{ deleteMany : <document> }
]
)
In the default ordered : true state, each operation will be executed in order, from the first operation
insertOne to the last operation deleteMany.
If ordered is set to false, operations may be reordered by mongod (page 762) to increase performance. Applications
should not depend on order of operation execution.
The following code represents an unordered bulkWrite() (page 24) with six operations:
db.collection.bulkWrite(
[
{ insertOne : <document> },
{ updateOne : <document> },
{ updateMany : <document> },
{ replaceOne : <document> },
{ deleteOne : <document> },
{ deleteMany : <document> }
],
{ ordered : false }
)
With ordered : false, the results of the operation may vary. For example, the deleteOne or deleteMany may remove more or fewer documents depending on whether they run before or after the insertOne, updateOne, updateMany, or replaceOne operations.
Each group of operations can have at most 1000 operations (page 937). If a group exceeds this limit
(page 937), MongoDB will divide the group into smaller groups of 1000 or less. For example, if the queue consists of 2000 operations, MongoDB creates 2 groups, each with 1000 operations.
The sizes and grouping mechanics are internal performance details and are subject to change in future versions.
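The grouping described above can be sketched as follows (illustrative only; the server's actual grouping mechanics are internal and subject to change):

```javascript
// Sketch: splitting a queue of bulk operations into groups of at most
// 1000, as described above. Illustrative only.
function groupOps(ops, maxGroupSize) {
  maxGroupSize = maxGroupSize || 1000;
  var groups = [];
  for (var i = 0; i < ops.length; i += maxGroupSize) {
    groups.push(ops.slice(i, i + maxGroupSize));
  }
  return groups;
}
```

A queue of 2000 operations yields 2 groups of 1000 each, matching the example in the text.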
Executing an ordered (page 211) list of operations on a sharded collection will generally be slower than executing
an unordered (page 212) list since with an ordered list, each operation must wait for the previous operation to finish.
Capped Collections
bulkWrite() (page 24) write operations have restrictions when used on a capped collection.
updateOne and updateMany throw a WriteError if the update criteria increases the size of the document
being modified.
replaceOne throws a WriteError if the replacement document has a larger size than the original document.
deleteOne and deleteMany throw a WriteError if used on a capped collection.
Error Handling bulkWrite() (page 24) throws a BulkWriteError exception on errors.
Excluding https://docs.mongodb.org/manual/reference/write-concern errors, ordered operations stop after an error, while unordered operations continue to process any remaining write operations in the queue.
Write concern errors are displayed in the writeConcernErrors field, while all other errors are displayed in the writeErrors field. If an error is encountered, the number of successful write operations is displayed instead of the inserted _id values. Ordered operations display the single error encountered, while unordered operations display each error in an array.
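The difference between ordered and unordered error handling can be sketched with a toy model (not the server's implementation):

```javascript
// Toy model of ordered vs. unordered error handling: ordered execution
// stops at the first failing operation; unordered execution records
// the error and continues with the remaining operations.
function runBulk(ops, ordered) {
  var result = { nApplied: 0, writeErrors: [] };
  for (var i = 0; i < ops.length; i++) {
    try {
      ops[i]();               // each op is a function that may throw
      result.nApplied++;
    } catch (e) {
      result.writeErrors.push({ index: i, errmsg: e.message });
      if (ordered) break;     // ordered: stop after the first error
    }
  }
  return result;
}
```

With an error in the middle of three operations, the ordered run applies only the first operation, while the unordered run applies both surrounding operations.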
Examples
Bulk Write Operations The characters collection contains the following documents:
{ "_id" : 1, "char" : "Brisbane", "class" : "monk", "lvl" : 4 },
{ "_id" : 2, "char" : "Eldon", "class" : "alchemist", "lvl" : 3 },
{ "_id" : 3, "char" : "Meldane", "class" : "ranger", "lvl" : 3 }
The following bulkWrite() (page 24) performs multiple operations on the collection:
try {
db.characters.bulkWrite(
[
{ insertOne :
{
"document" :
{
"_id" : 4, "char" : "Dithras", "class" : "barbarian", "lvl" : 4
}
}
},
{ insertOne :
{
"document" :
{
"_id" : 5, "char" : "Taeln", "class" : "fighter", "lvl" : 3
}
}
},
{ updateOne :
{
"filter" : { "char" : "Eldon" },
"update" : { $set : { "status" : "Critical Injury" } }
}
},
{ deleteOne :
{ "filter" : { "char" : "Brisbane"} }
},
{ replaceOne :
{
"filter" : { "char" : "Meldane" },
"replacement" : { "char" : "Tanys", "class" : "oracle", "lvl" : 4 }
}
}
]
);
}
catch (e) {
print(e);
}
If the _id value for the second of the insertOne operations were a duplicate of an existing _id, the following
exception would be thrown:
BulkWriteError({
"writeErrors" : [
{
"index" : 0,
"code" : 11000,
"errmsg" : "E11000 duplicate key error collection: guidebook.characters index: _id_ dup key:
"op" : {
"_id" : 5,
"char" : "Taeln"
}
}
],
"writeConcernErrors" : [ ],
"nInserted" : 1,
"nUpserted" : 0,
"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"upserted" : [ ]
})
Since ordered was true by default, only the first operation completes successfully. The rest are not executed. Running the bulkWrite() (page 24) with ordered : false would allow the remaining operations to complete
despite the error.
Unordered Bulk Write The characters collection contains the following documents:
{ "_id" : 1, "char" : "Brisbane", "class" : "monk", "lvl" : 4 },
{ "_id" : 2, "char" : "Eldon", "class" : "alchemist", "lvl" : 3 },
{ "_id" : 3, "char" : "Meldane", "class" : "ranger", "lvl" : 3 }
The following bulkWrite() (page 24) performs multiple unordered operations on the characters collection.
Note that one of the insertOne stages has a duplicate _id value:
try {
db.characters.bulkWrite(
[
{ insertOne :
{
"document" :
{
"_id" : 4, "char" : "Dithras", "class" : "barbarian", "lvl" : 4
}
}
},
{ insertOne :
{
"document" :
{
"_id" : 4, "char" : "Taeln", "class" : "fighter", "lvl" : 3
}
}
},
{ updateOne :
{
"filter" : { "char" : "Eldon" },
"update" : { $set : { "status" : "Critical Injury" } }
}
},
{ deleteOne :
{ "filter" : { "char" : "Brisbane"} }
},
{ replaceOne :
{
"filter" : { "char" : "Meldane" },
"replacement" : { "char" : "Tanys", "class" : "oracle", "lvl" : 4 }
}
}
],
{ ordered : false }
);
}
catch (e) {
print(e);
}
The bulkWrite() (page 24) throws the following exception:
BulkWriteError({
"writeErrors" : [
{
"index" : 0,
"code" : 11000,
"errmsg" : "E11000 duplicate key error collection: guidebook.characters index: _id_ dup key:
"op" : {
"_id" : 4,
"char" : "Taeln"
}
}
],
"writeConcernErrors" : [ ],
"nInserted" : 1,
"nUpserted" : 0,
"nMatched" : 2,
"nModified" : 2,
"nRemoved" : 1,
"upserted" : [ ]
})
Since this was an unordered operation, the writes remaining in the queue were processed despite the exception.
Bulk Write with Write Concern The enemies collection contains the following documents:
{ "_id" : 1, "char" : "goblin", "rating" : 1, "encounter" : 0.24 },
{ "_id" : 2, "char" : "hobgoblin", "rating" : 1.5, "encounter" : 0.30 },
{ "_id" : 3, "char" : "ogre", "rating" : 3, "encounter" : 0.2 },
{ "_id" : 4, "char" : "ogre berserker", "rating" : 3.5, "encounter" : 0.12 }
The following bulkWrite() (page 24) performs multiple operations on the collection using a write concern value
of "majority" and timeout value of 100 milliseconds:
try {
db.enemies.bulkWrite(
[
{ updateMany :
{
"filter" : { "rating" : { $gte : 3} },
"update" : { $inc : { "encounter" : 0.1 } }
},
},
{ updateMany :
{
"filter" : { "rating" : { $lt : 2} },
"update" : { $inc : { "encounter" : -0.25 } }
},
},
{ deleteMany : { "filter" : { "encounter" : { $lt : 0 } } } },
{ insertOne :
{
"document" :
{
"_id" :5, "char" : "ogrekin" , "rating" : 2, "encounter" : 0.31
}
}
}
],
{ writeConcern : { w : "majority", wtimeout : 100 } }
);
}
catch (e) {
print(e);
}
If the total time required for all required nodes in the replica set to acknowledge the write operation is greater than
wtimeout, the following writeConcernError is displayed when the wtimeout period has passed.
BulkWriteError({
"writeErrors" : [ ],
"writeConcernErrors" : [
{
"code" : 64,
"errInfo" : {
"wtimeout" : true
},
"errmsg" : "waiting for replication timed out"
}
],
"nInserted" : 1,
"nUpserted" : 0,
"nMatched" : 4,
"nModified" : 4,
"nRemoved" : 1,
"upserted" : [ ]
})
The result set shows that the operations executed, since write concern errors do not indicate that any write operations failed.
db.collection.count()
On this page
Definition (page 32)
Behavior (page 33)
Examples (page 34)
Definition
db.collection.count(query, options)
Returns the count of documents that would match a find() (page 51) query. The db.collection.count() (page 32) method does not perform the find() (page 51) operation but instead counts and returns the number of results that match a query.
param document query The query selection criteria.
param document options Optional. Extra options for modifying the count.
The options document contains the following fields:
field integer limit Optional. The maximum number of documents to count.
field integer skip Optional. The number of documents to skip before counting.
field string, document hint Optional. An index name hint or specification for the query.
New in version 2.6.
field integer maxTimeMS Optional. The maximum amount of time to allow the query to run.
field string readConcern Optional. Specifies the read concern.
To use a read concern level of "majority", you must use the WiredTiger storage engine
and start the mongod (page 762) instances with the --enableMajorityReadConcern
(page 773) command line option (or the replication.enableMajorityReadConcern
(page 914) setting if using a configuration file).
Only replica sets using protocol version 1 support "majority" read concern. Replica
sets running protocol version 0 do not support "majority" read concern.
To use a read concern level of "majority", you must specify a nonempty query condition.
New in version 3.2.
count() (page 32) is equivalent to the db.collection.find(query).count() construct.
See also:
cursor.count() (page 136)
Behavior
Sharded Clusters On a sharded cluster, db.collection.count() (page 32) can result in an inaccurate count
if orphaned documents exist or if a chunk migration is in progress.
To avoid these situations, on a sharded cluster, use the $group (page 636) stage of the
db.collection.aggregate() (page 20) method to $sum (page 721) the documents. For example, the
following operation counts the documents in a collection:
db.collection.aggregate(
[
{ $group: { _id: null, count: { $sum: 1 } } }
]
)
To get a count of documents that match a query condition, include the $match (page 627) stage as well:
db.collection.aggregate(
[
{ $match: <query condition> },
{ $group: { _id: null, count: { $sum: 1 } } }
]
)
When performing a count, MongoDB can return the count using only the index if:
the query can use an index,
the query only contains conditions on the keys of the index, and
the query predicates access a single contiguous range of index keys.
For example, the following operations can return the count using only the index:
db.collection.find( { a: 5, b: 5 } ).count()
db.collection.find( { a: { $gt: 5 } } ).count()
db.collection.find( { a: 5, b: { $gt: 10 } } ).count()
If, however, the query can use an index but the query predicates do not access a single contiguous range of index keys
or the query also contains conditions on fields outside the index, then in addition to using the index, MongoDB must
also read the documents to return the count.
db.collection.find( { a: 5, b: { $in: [ 1, 2, 3 ] } } ).count()
db.collection.find( { a: { $gt: 5 }, b: 5 } ).count()
db.collection.find( { a: 5, b: 5, c: 5 } ).count()
In such cases, during the initial read of the documents, MongoDB pages the documents into memory such that subsequent calls of the same count operation will have better performance.
Unexpected Shutdown and Count For MongoDB instances using the WiredTiger storage engine, after an unclean shutdown, statistics on size and count may be off by up to 1000 documents as reported by collStats (page 472), dbStats (page 480), and count (page 306). To restore the correct statistics for the collection, run validate (page 484) on the collection.
Examples
Count all Documents in a Collection To count the number of all documents in the orders collection, use the
following operation:
db.orders.count()
Count all Documents that Match a Query Count the number of the documents in the orders collection with the field ord_dt greater than new Date('01/01/2012'):
db.orders.count( { ord_dt: { $gt: new Date('01/01/2012') } } )
db.collection.copyTo()
On this page
Definition (page 35)
Behavior (page 35)
Example (page 35)
Definition
db.collection.copyTo(newCollection)
Deprecated since version 3.0.
Copies all documents from collection into newCollection using server-side JavaScript. If newCollection does not exist, MongoDB creates it.
If authorization is enabled, you must have access to all actions on all resources in order to run
db.collection.copyTo() (page 35). Providing such access is not recommended, but if your organization requires a user to run db.collection.copyTo() (page 35), create a role that grants anyAction
on resource-anyresource. Do not assign this role to any other user.
param string newCollection The name of the collection to write data to.
Warning: When using db.collection.copyTo() (page 35) check field types to ensure that the
operation does not remove type information from documents during the translation from BSON to JSON.
The db.collection.copyTo() (page 35) method uses the eval (page 357) command internally. As
a result, the db.collection.copyTo() (page 35) operation takes a global lock that blocks all other
read and write operations until the db.collection.copyTo() (page 35) completes.
copyTo() (page 35) returns the number of documents copied. If the copy fails, it throws an exception.
Behavior Because copyTo() (page 35) uses eval (page 357) internally, the copy operations will block all other
operations on the mongod (page 762) instance.
Example The following operation copies all documents from the source collection into the target collection.
db.source.copyTo(target)
db.collection.createIndex()
On this page
Definition
db.collection.createIndex(keys, options)
Creates indexes on collections.
Changed in version 3.2: Starting in MongoDB 3.2, MongoDB disallows the creation of version 0 (page 987)
indexes. To upgrade existing version 0 indexes, see Version 0 Indexes (page 987).
param document keys A document that contains the field and value pairs where the field is the
index key and the value describes the type of index for that field. For an ascending index on a
field, specify a value of 1; for descending index, specify a value of -1.
MongoDB supports several different index types including text, geospatial, and hashed
indexes. See https://docs.mongodb.org/manual/core/index-types for more
information.
param document options Optional. A document that contains a set of options that controls the
creation of the index. See Options (page 36) for details.
Options The options document contains a set of options that controls the creation of the index. Different index
types can have additional options specific for that type.
Options for All Index Types The following options are available for all index types unless otherwise specified:
Changed in version 3.0: The dropDups option is no longer available.
param boolean background Optional. Builds the index in the background so that building an index
does not block other database activities. Specify true to build in the background. The default
value is false.
param boolean unique Optional. Creates a unique index so that the collection will not accept insertion
of documents where the index key or keys match an existing value in the index. Specify true to
create a unique index. The default value is false.
The option is unavailable for hashed indexes.
param string name Optional. The name of the index. If unspecified, MongoDB generates an index
name by concatenating the names of the indexed fields and the sort order.
Whether user specified or MongoDB generated, index names including their full namespace (i.e.
database.collection) cannot be longer than the Index Name Limit (page 933).
param document partialFilterExpression Optional. If specified, the index only references documents that match the filter expression. See https://docs.mongodb.org/manual/core/index-partial for more information.
A filter expression can include:
equality expressions (i.e. field: value or using the $eq operator),
$exists: true expression,
$gt (page 521), $gte (page 522), $lt (page 522), $lte (page 523) expressions,
$type (page 532) expressions,
$and (page 527) operator at the top-level only
You can specify a partialFilterExpression option for all MongoDB index types.
New in version 3.2.
param boolean sparse Optional. If true, the index only references documents with the specified field.
These indexes use less space but behave differently in some situations (particularly sorts). The default value is false. See https://docs.mongodb.org/manual/core/index-sparse
for more information.
Changed in version 3.2: Starting in MongoDB 3.2, MongoDB provides the option to create partial
indexes. Partial indexes offer a superset of the functionality of sparse indexes. If you are using
MongoDB 3.2 or later, partial indexes should be preferred over sparse indexes.
Changed in version 2.6: 2dsphere indexes are sparse by default and ignore this option. For a
compound index that includes 2dsphere index key(s) along with keys of other types, only the
2dsphere index fields determine whether the index references a document.
2d, geoHaystack, and text indexes behave similarly to the 2dsphere indexes.
param integer expireAfterSeconds Optional. Specifies a value, in seconds, as a TTL to control how long MongoDB retains documents in this collection. See https://docs.mongodb.org/manual/tutorial/expire-data for more information on this functionality. This applies only to TTL indexes.
param document storageEngine Optional. Allows users to specify configuration to the storage engine
on a per-index basis when creating an index. The value of the storageEngine option should take
the following form:
{ <storage-engine-name>: <options> }
Storage engine configurations specified when creating indexes are validated and logged to the oplog during replication to support replica sets with members that use different storage engines.
New in version 3.0.
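The default index name generation described for the name option (concatenating the indexed field names and sort orders) can be sketched as follows, assuming simple 1/-1 keys:

```javascript
// Sketch: deriving the default index name from the keys document,
// e.g. { orderDate: 1, zipcode: -1 } -> "orderDate_1_zipcode_-1".
function defaultIndexName(keys) {
  return Object.keys(keys)
    .map(function (field) { return field + "_" + keys[field]; })
    .join("_");
}
```

Special index types (such as text) substitute the type string for the sort value, which this sketch does not cover.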
Options for text Indexes The following options are available for text indexes only:
param document weights Optional. For text indexes, a document that contains field and weight pairs. The weight is an integer ranging from 1 to 99,999 and denotes the significance of the field relative to the other indexed fields in terms of the score. You can specify weights for some or all the indexed fields. See https://docs.mongodb.org/manual/tutorial/control-results-of-text-search to adjust the scores. The default value is 1.
param string default_language Optional. For text indexes, the language that determines the list of stop words and the rules for the stemmer and tokenizer. See text-search-languages for the available languages and https://docs.mongodb.org/manual/tutorial/specify-language-for-text-index for more information and examples. The default value is english.
param string language_override Optional. For text indexes, the name of the field, in the collection's documents, that contains the override language for the document. The default value is language. See specify-language-field-text-index-example for an example.
param integer textIndexVersion Optional. For text indexes, the text index version number. Version
can be either 1 or 2.
In MongoDB 2.6, the default version is 2. MongoDB 2.4 can only support version 1.
New in version 2.6.
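As a deliberately simplified illustration of how the weights option influences the text score (the server's real scoring also accounts for term frequency and other factors, so this toy model is for intuition only):

```javascript
// Toy model: each matched field contributes its weight (default 1)
// to the score. The real text score computation is more involved.
function toyTextScore(matchedFields, weights) {
  return matchedFields.reduce(function (score, field) {
    return score + (weights[field] || 1);
  }, 0);
}
```

With weights of { title: 10 }, a match in title alone outscores a match in an unweighted field by a factor of ten.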
Options for 2dsphere Indexes The following option is available for 2dsphere indexes only:
param integer 2dsphereIndexVersion Optional. For 2dsphere indexes, the 2dsphere index version number. Version can be either 1 or 2.
In MongoDB 2.6, the default version is 2. MongoDB 2.4 can only support version 1.
New in version 2.6.
Options for 2d Indexes The following options are available for 2d indexes only:
param integer bits Optional. For 2d indexes, the precision of the stored geohash value of the location data. The bits value ranges from 1 to 32 inclusive. The default value is 26.
param number min Optional. For 2d indexes, the lower inclusive boundary for the longitude and latitude values. The default value is -180.0.
param number max Optional. For 2d indexes, the upper inclusive boundary for the longitude and latitude values. The default value is 180.0.
Options for geoHaystack Indexes The following option is available for geoHaystack indexes only:
param number bucketSize For geoHaystack indexes, specify the number of units within which to
group the location values; i.e. group in the same bucket those location values that are within the
specified number of units to each other.
The value must be greater than 0.
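Conceptually, bucketing maps location values that lie within the same bucketSize-wide cell to the same bucket, roughly as in this sketch (illustrative only; not the actual geoHaystack index format):

```javascript
// Sketch: coordinates within the same bucketSize-wide cell map to the
// same bucket key. Illustrative only.
function bucketKey(lng, lat, bucketSize) {
  return Math.floor(lng / bucketSize) + "_" + Math.floor(lat / bucketSize);
}
```

Two points less than bucketSize units apart in each dimension can share a bucket, so nearby locations are grouped together.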
Behaviors The createIndex() (page 36) method has the behaviors described here.
To add or change index options you must drop the index using the dropIndex() (page 46) method and issue
another createIndex() (page 36) operation with the new options.
If you create an index with one set of options, and then issue the createIndex() (page 36) method with the
same index fields and different options without first dropping the index, createIndex() (page 36) will not
rebuild the existing index with the new options.
If you call multiple createIndex() (page 36) methods with the same index specification at the same time,
only the first operation will succeed, all other operations will have no effect.
Non-background indexing operations will block all other operations on a database.
MongoDB will not create an index (page 36) on a collection if the index entry for an existing document
exceeds the Maximum Index Key Length. Previous versions of MongoDB would create the index but not
index such documents.
Changed in version 2.6.
Examples
Create an Ascending Index on a Single Field The following example creates an ascending index on the field
orderDate.
db.collection.createIndex( { orderDate: 1 } )
If the keys document specifies more than one field, then createIndex() (page 36) creates a compound index.
Create an Index on Multiple Fields The following example creates a compound index on the orderDate field (in ascending order) and the zipcode field (in descending order):
db.collection.createIndex( { orderDate: 1, zipcode: -1 } )
Additional Information
Use db.collection.createIndex() (page 36) rather than db.collection.ensureIndex()
(page 47) to create indexes.
The https://docs.mongodb.org/manual/indexes section of this manual for full documentation of
indexes and indexing in MongoDB.
db.collection.getIndexes() (page 72) to view the specifications of existing indexes for a collection.
https://docs.mongodb.org/manual/core/index-text for details on creating text indexes.
index-feature-geospatial and index-geohaystack-index for geospatial queries.
index-feature-ttl for expiration of data.
db.collection.dataSize()
db.collection.dataSize()
Returns The size of the collection. This method provides a wrapper around the size (page 474)
output of the collStats (page 472) (i.e. db.collection.stats() (page 106)) command.
db.collection.deleteOne()
On this page
Definition (page 39)
Behavior (page 40)
Examples (page 40)
Definition
db.collection.deleteOne()
Removes a single document from a collection.
db.collection.deleteOne(
<filter>,
{
writeConcern: <document>
}
)
param document filter Specifies deletion criteria using query operators (page 519).
Specify an empty document { } to delete the first document returned in the collection.
param document writeConcern Optional. A document expressing the write concern. Omit
to use the default write concern.
Returns
A document containing:
A boolean acknowledged as true if the operation ran with write concern or false if
write concern was disabled
deletedCount containing the number of deleted documents
Behavior
Deletion Order deleteOne (page 39) deletes the first document that matches the filter. Use a field that is part of
a unique index such as _id for precise deletions.
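The behavior above can be sketched in plain JavaScript. This is a hypothetical in-memory model (the deleteOne helper and the sample orders array are illustrative stand-ins, not the server implementation):

```javascript
// Hypothetical model of deleteOne(): remove only the FIRST document
// that satisfies the filter predicate, then report the count.
function deleteOne(collection, predicate) {
  const idx = collection.findIndex(predicate);
  if (idx === -1) {
    return { acknowledged: true, deletedCount: 0 };
  }
  collection.splice(idx, 1); // delete exactly one document
  return { acknowledged: true, deletedCount: 1 };
}

const orders = [
  { _id: 1, client: "Crude Traders Inc." },
  { _id: 2, client: "Crude Traders Inc." }
];

const result = deleteOne(orders, (doc) => doc.client === "Crude Traders Inc.");
// result.deletedCount is 1; only the _id: 2 document remains.
```

Matching on a uniquely indexed field such as _id makes the "first match" unambiguous, which is why this manual recommends it for precise deletions.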
Capped Collections deleteOne() (page 39) throws a WriteError exception if used on a capped collection.
To remove documents from a capped collection, use db.collection.drop() (page 45) instead.
Examples
Delete a Single Document The orders collection has documents with the following structure:
{
_id: ObjectId("563237a41a4d68582c2509da"),
stock: "Brent Crude Futures",
qty: 250,
type: "buy-limit",
limit: 48.90,
creationts: ISODate("2015-11-01T12:30:15Z"),
expiryts: ISODate("2015-11-01T12:35:15Z"),
client: "Crude Traders Inc."
}
The following operation deletes the order with _id equal to ObjectId("563237a41a4d68582c2509da"):
try {
db.orders.deleteOne( { "_id" : ObjectId("563237a41a4d68582c2509da") } );
}
catch (e) {
print(e);
}
The following operation deletes the first document with expiryts less than
ISODate("2015-11-01T12:40:15Z"):
try {
db.orders.deleteOne( { "expiryts" : { $lt: ISODate("2015-11-01T12:40:15Z") } } );
}
catch (e) {
print(e);
}
deleteOne() with Write Concern Given a three member replica set, the following operation specifies a w of
majority and wtimeout of 100:
try {
db.orders.deleteOne(
{ "_id" : ObjectId("563237a41a4d68582c2509da") },
{ w : "majority", wtimeout : 100 }
);
}
catch (e) {
print (e);
}
If the acknowledgement takes longer than the wtimeout limit, the following exception is thrown:
WriteConcernError({
"code" : 64,
"errInfo" : {
"wtimeout" : true
},
"errmsg" : "waiting for replication timed out"
})
See also:
To delete multiple documents, see db.collection.deleteMany() (page 41)
db.collection.deleteMany()
On this page
Definition (page 41)
Behavior (page 42)
Examples (page 42)
Definition
db.collection.deleteMany()
Removes all documents that match the filter from a collection.
db.collection.deleteMany(
<filter>,
{
writeConcern: <document>
}
)
param document filter Specifies deletion criteria using query operators (page 519).
To delete all documents in a collection, pass in an empty document ({ }).
param document writeConcern Optional. A document expressing the write concern. Omit
to use the default write concern.
Returns
A document containing:
A boolean acknowledged as true if the operation ran with write concern or false if
write concern was disabled
deletedCount containing the number of deleted documents
Behavior
Capped Collections deleteMany() (page 41) throws a WriteError exception if used on a capped collection.
To remove all documents from a capped collection, use db.collection.drop() (page 45) instead.
Delete a Single Document To delete a single document, use db.collection.deleteOne() (page 39) instead.
Alternatively, use a field that is part of a unique index such as _id.
Examples
Delete Multiple Documents The orders collection has documents with the following structure:
{
_id: ObjectId("563237a41a4d68582c2509da"),
stock: "Brent Crude Futures",
qty: 250,
type: "buy-limit",
limit: 48.90,
creationts: ISODate("2015-11-01T12:30:15Z"),
expiryts: ISODate("2015-11-01T12:35:15Z"),
client: "Crude Traders Inc."
}
try {
db.orders.deleteMany( { "client" : "Crude Traders Inc." } );
}
catch (e) {
print (e);
}
try {
db.orders.deleteMany( { "stock" : "Brent Crude Futures", "limit" : { $gt : 48.88 } } );
}
catch (e) {
print (e);
}
deleteMany() with Write Concern Given a three member replica set, the following operation specifies a w of
majority and wtimeout of 100:
try {
db.orders.deleteMany(
{ "client" : "Crude Traders Inc." },
{ w : "majority", wtimeout : 100 }
);
}
catch (e) {
print (e);
}
If the acknowledgement takes longer than the wtimeout limit, the following exception is thrown:
WriteConcernError({
"code" : 64,
"errInfo" : {
"wtimeout" : true
},
"errmsg" : "waiting for replication timed out"
})
db.collection.distinct()
On this page
Definition (page 43)
Behavior (page 44)
Examples (page 44)
Definition
db.collection.distinct(field, query)
Finds the distinct values for a specified field across a single collection and returns the results in an array.
param string field The field for which to return distinct values.
param document query A query that specifies the documents from which to retrieve the distinct
values.
The db.collection.distinct() (page 43) method provides a wrapper around the distinct
(page 309) command. Results must not be larger than the maximum BSON size (page 932).
Behavior
Array Fields If the value of the specified field is an array, db.collection.distinct() (page 43) considers each element of the array as a separate value.
For instance, if a field has as its value [ 1, [1], 1 ], then db.collection.distinct() (page 43) considers 1, [1], and 1 as separate values.
For an example, see Return Distinct Values for an Array Field (page 44).
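A plain JavaScript sketch of this behavior may help. The distinct function below is a hypothetical model over an in-memory array, not the server command; note that only top-level array elements are split out, so a nested array such as [1] stays a single value:

```javascript
// Hypothetical model of distinct(field): a top-level array value
// contributes each of its elements as a separate candidate value;
// duplicates are collapsed by structural equality.
function distinct(collection, field) {
  const values = [];
  for (const doc of collection) {
    const value = doc[field];
    const candidates = Array.isArray(value) ? value : [value];
    for (const candidate of candidates) {
      const key = JSON.stringify(candidate);
      if (!values.some((v) => JSON.stringify(v) === key)) {
        values.push(candidate);
      }
    }
  }
  return values;
}

const docs = [
  { _id: 1, sizes: ["S", "M"] },
  { _id: 2, sizes: ["M", "L"] },
  { _id: 3, sizes: "S" }
];

console.log(distinct(docs, "sizes")); // [ "S", "M", "L" ]
```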
Index Use When possible, db.collection.distinct() (page 43) operations can use indexes.
Indexes can also cover db.collection.distinct() (page 43) operations. See covered-queries for more information on queries covered by indexes.
Examples The examples use the inventory collection that contains the following documents:
{ "_id": 1, "dept": "A", "item": { "sku": "111", "color": "red" }, "sizes": [ "S", "M" ] }
{ "_id": 2, "dept": "A", "item": { "sku": "111", "color": "blue" }, "sizes": [ "M", "L" ] }
{ "_id": 3, "dept": "B", "item": { "sku": "222", "color": "blue" }, "sizes": "S" }
{ "_id": 4, "dept": "A", "item": { "sku": "333", "color": "black" }, "sizes": [ "S" ] }
Return Distinct Values for a Field The following example returns the distinct values for the field dept from all
documents in the inventory collection:
db.inventory.distinct( "dept" )
Return Distinct Values for an Embedded Field The following example returns the distinct values for the field
sku, embedded in the item field, from all documents in the inventory collection:
db.inventory.distinct( "item.sku" )
See also:
document-dot-notation for information on accessing fields within embedded documents
Return Distinct Values for an Array Field The following example returns the distinct values for the field sizes
from all documents in the inventory collection:
db.inventory.distinct( "sizes" )
For information on distinct() (page 43) and array fields, see the Behavior (page 44) section.
Specify Query with distinct The following example returns the distinct values for the field sku, embedded in
the item field, from the documents whose dept is equal to "A":
db.inventory.distinct( "item.sku", { dept: "A" } )
db.collection.drop()
On this page
Definition (page 45)
Behavior (page 45)
Example (page 45)
Definition
db.collection.drop()
Removes a collection from the database. The method also removes any indexes associated with the dropped
collection. The method provides a wrapper around the drop (page 438) command.
db.collection.drop() (page 45) has the form:
db.collection.drop()
db.collection.drop() (page 45) takes no arguments and will produce an error if called with any arguments.
Returns
true when successfully drops a collection.
false when collection to drop does not exist.
Behavior This method obtains a write lock on the affected database and will block other operations until it has
completed.
Example The following operation drops the students collection in the current database.
db.students.drop()
db.collection.dropIndex()
On this page
Definition (page 46)
Example (page 46)
Definition
db.collection.dropIndex(index)
Drops or removes the specified index from a collection. The db.collection.dropIndex() (page 46)
method provides a wrapper around the dropIndexes (page 449) command.
Note: You cannot drop the default index on the _id field.
The db.collection.dropIndex() (page 46) method takes the following parameter:
param string, document index Specifies the index to drop. You can specify the index either by the
index name or by the index specification document. 2
To drop a text index, specify the index name.
To get the index name or the index specification document for the db.collection.dropIndex()
(page 46) method, use the db.collection.getIndexes() (page 72) method.
Example Consider a pets collection. Calling the getIndexes() (page 72) method on the pets collection
returns the following indexes:
[
{
"v" : 1,
"key" : { "_id" : 1 },
"ns" : "test.pets",
"name" : "_id_"
},
{
"v" : 1,
"key" : { "cat" : -1 },
"ns" : "test.pets",
"name" : "catIdx"
},
{
"v" : 1,
"key" : { "cat" : 1, "dog" : -1 },
"ns" : "test.pets",
"name" : "cat_1_dog_-1"
}
]
The single field index on the field cat has the user-specified name of catIdx 3 and the index specification document
of { "cat" : -1 }.
To drop the index catIdx, you can use either the index name:
2 When using a mongo (page 794) shell version earlier than 2.2.2, if you specified a name during the index creation, you must use the name to
drop the index.
3 During index creation, if the user does not specify an index name, the system generates the name by concatenating the index key field and
value with an underscore, e.g. cat_1.
db.pets.dropIndex( "catIdx" )
or you can use the index specification document { "cat" : -1 }:
db.pets.dropIndex( { "cat" : -1 } )
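The default-name rule described in footnote 3 can be sketched as a small helper. The defaultIndexName function is hypothetical, for illustration only; the server generates the name itself:

```javascript
// Hypothetical sketch of the default index-name rule: concatenate each
// key field and its direction value with underscores.
function defaultIndexName(keySpec) {
  return Object.entries(keySpec)
    .map(([field, direction]) => `${field}_${direction}`)
    .join("_");
}

console.log(defaultIndexName({ cat: 1 }));          // "cat_1"
console.log(defaultIndexName({ cat: 1, dog: -1 })); // "cat_1_dog_-1"
```

This reproduces the generated name cat_1_dog_-1 shown in the getIndexes() listing above.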
db.collection.dropIndexes()
db.collection.dropIndexes()
Drops all indexes other than the required index on the _id field. Only call dropIndexes() (page 47) as a
method on a collection object.
db.collection.ensureIndex()
On this page
Definition (page 47)
Additional Information (page 47)
Definition
db.collection.ensureIndex(keys, options)
Deprecated since version 3.0.0: db.collection.ensureIndex() (page 47) is now an alias for
db.collection.createIndex() (page 36).
Creates an index on the specified field if the index does not already exist.
Additional Information
Use db.collection.createIndex() (page 36) rather than db.collection.ensureIndex()
(page 47) to create new indexes.
The https://docs.mongodb.org/manual/indexes section of this manual for full documentation of
indexes and indexing in MongoDB.
db.collection.getIndexes() (page 72) to view the specifications of existing indexes for a collection.
db.collection.explain()
On this page
Description
db.collection.explain()
Changed in version 3.2: Adds support for db.collection.distinct() (page 43)
New in version 3.0.
Returns information on the query plan for the following methods: aggregate() (page 20); count()
(page 32); distinct() (page 43); find() (page 51); group() (page 75); remove() (page 100); and
update() (page 116).
To use db.collection.explain() (page 48), append to db.collection.explain() (page 48) the
method(s) available to explain:
db.collection.explain().<method(...)>
For example,
db.products.explain().remove( { category: "apparel" }, { justOne: true } )
executionStats Mode MongoDB runs the query optimizer to choose the winning plan and executes the winning
plan to completion. db.collection.explain() (page 48) returns the queryPlanner (page 939) and
executionStats (page 941) information for the evaluated method. However, executionStats (page 941)
does not provide query execution information for the rejected plans.
allPlansExecution Mode MongoDB runs the query optimizer to choose the winning plan and executes
the winning plan to completion. In "allPlansExecution" mode, MongoDB returns statistics describing the
execution of the winning plan as well as statistics for the other candidate plans captured during plan selection.
db.collection.explain() (page 48) returns the queryPlanner (page 939) and executionStats
(page 941) information for the evaluated method. The executionStats (page 941) includes the completed query
execution information for the winning plan. If the query optimizer considered more than one plan, executionStats
(page 941) information also includes the partial execution information captured during the plan selection phase for
both the winning and rejected candidate plans.
For write operations, db.collection.explain() (page 48) returns information about the update or delete operations that would be performed, but does not apply the modifications to the database.
explain() Mechanics The db.collection.explain() (page 48) method wraps the explain (page 466)
command and is the preferred way to run explain (page 466).
db.collection.explain().find() is similar to db.collection.find().explain() (page 139)
with the following key differences:
The db.collection.explain().find() construct allows for the additional chaining of query modifiers. For a list of query modifiers, see db.collection.explain().find().help() (page 49).
The db.collection.explain().find() returns a cursor, which requires a call to .next(), or its
alias .finish(), to return the explain() results. If run interactively in the mongo (page 794) shell,
the mongo (page 794) shell automatically calls .finish() to return the results. For scripts, however, you
must explicitly call .next(), or .finish(), to return the results. For a list of cursor-related methods, see
db.collection.explain().find().help() (page 49).
db.collection.explain().aggregate() is equivalent to passing the explain (page 23) option to the
db.collection.aggregate() (page 20) method.
help() To see the list of operations supported by db.collection.explain() (page 48), run:
db.collection.explain().help()
db.collection.explain().find() returns a cursor, which allows for the chaining of query modifiers. To
see the list of query modifiers supported by db.collection.explain().find() (page 48), as well as cursor-related methods, run:
db.collection.explain().find().help()
You can chain multiple modifiers to db.collection.explain().find(). For an example, see Explain find()
with Modifiers (page 50).
Examples
queryPlanner Mode By default, db.collection.explain() (page 48) runs in "queryPlanner" verbosity mode.
The following example runs db.collection.explain() (page 48) in queryPlanner (page 48) verbosity
mode to return the query planning information for the specified count() (page 32) operation:
db.products.explain().count( { quantity: { $gt: 50 } } )
executionStats Mode The following example runs db.collection.explain() (page 48) in executionStats (page 48) verbosity mode to return the query planning and execution information for the specified find()
(page 51) operation:
db.products.explain("executionStats").find(
{ quantity: { $gt: 50 }, category: "apparel" }
)
Explain find() with Modifiers The db.collection.explain().find() construct allows for the chaining of
query modifiers. For example, the following operation provides information on the find() (page 51) method with
sort() (page 155) and hint() (page 141) query modifiers.
db.products.explain("executionStats").find(
{ quantity: { $gt: 50 }, category: "apparel" }
).sort( { quantity: -1 } ).hint( { category: 1, quantity: -1 } )
For a list of query modifiers available, run in the mongo (page 794) shell:
db.collection.explain().find().help()
Output db.collection.explain() (page 48) operations can return information regarding:
queryPlanner (page 939), which details the plan selected by the query optimizer and lists the rejected plans;
executionStats (page 940), which details the execution of the winning plan and the rejected plans; and
serverInfo (page 942), which provides information on the MongoDB instance.
The verbosity mode (i.e. queryPlanner, executionStats, allPlansExecution) determines whether the
results include executionStats (page 940) and whether executionStats (page 940) includes data captured during plan
selection.
For details on the output, see Explain Results (page 938).
For a mixed version sharded cluster with version 3.0 mongos (page 784) and at least one 2.6 mongod (page 762)
shard, when you run db.collection.explain() (page 48) in a version 3.0 mongo (page 794) shell,
db.collection.explain() (page 48) will retry with the $explain operator to return results in the 2.6 format.
db.collection.find()
On this page
Definition (page 51)
Examples (page 52)
Definition
db.collection.find(query, projection)
Selects documents in a collection and returns a cursor to the selected documents.
param document query Optional. Specifies selection criteria using query operators (page 519). To
return all documents in a collection, omit this parameter or pass an empty document ({}).
param document projection Optional. Specifies the fields to return using projection operators
(page 580). To return all fields in the matching document, omit this parameter.
Returns
A cursor to the documents that match the query criteria. When the find() (page 51) method
returns documents, the method is actually returning a cursor to the documents.
If find() (page 51) includes a projection argument, the matching documents contain only
the projection fields and the _id field. You can optionally exclude the _id field.
Executing find() (page 51) directly in the mongo (page 794) shell automatically iterates the
cursor to display up to the first 20 documents. Type it to continue iteration.
To access the returned documents with a driver, use the appropriate cursor handling mechanism
for the driver language.
The projection parameter takes a document of the following form:
{ field1: <boolean>, field2: <boolean> ... }
Examples
Find All Documents in a Collection The find() (page 51) method with no parameters returns all documents
from a collection and returns all fields for the documents. For example, the following operation returns all documents
in the bios collection:
db.bios.find()
Find Documents that Match Query Criteria To find documents that match a set of selection criteria, call find()
with the <criteria> parameter. The following operation returns all the documents from the collection products
where qty is greater than 25:
db.products.find( { qty: { $gt: 25 } } )
Query for Equality The following operation returns documents in the bios collection where _id equals 5:
db.bios.find( { _id: 5 } )
Query Using Operators The following operation returns documents in the bios collection where _id equals
either 5 or ObjectId("507c35dd8fada716c89d0013"):
db.bios.find(
   {
      _id: { $in: [ 5, ObjectId("507c35dd8fada716c89d0013") ] }
   }
)
Query for Ranges Combine comparison operators to specify ranges. The following operation returns documents
with field between value1 and value2:
db.collection.find( { field: { $gt: value1, $lt: value2 } } );
Query a Field that Contains an Array If a field contains an array and your query has multiple conditional operators,
the field as a whole will match if either a single array element meets the conditions or a combination of array elements
meet the conditions.
Given a collection students that contains the following documents:
{ "_id" : 1, "score" : [ -1, 3 ] }
{ "_id" : 2, "score" : [ 1, 5 ] }
{ "_id" : 3, "score" : [ 5, 5 ] }
The following query uses $gt and $lt on the score field; a document matches if its score array contains, in
some combination of elements, values satisfying both conditions:
db.students.find( { score: { $gt: 0, $lt: 2 } } )
In the document with _id equal to 1, the score: [ -1, 3 ] meets the conditions because the element -1
meets the $lt: 2 condition and the element 3 meets the $gt: 0 condition.
In the document with _id equal to 2, the score: [ 1, 5 ] meets the conditions because the element 1 meets
both the $lt: 2 condition and the $gt: 0 condition.
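The element-wise matching described above can be modeled in plain JavaScript. The matchesRange helper is a hypothetical sketch of how a range query such as { score: { $gt: 0, $lt: 2 } } evaluates against an array-valued field:

```javascript
// Hypothetical model of evaluating { score: { $gt: 0, $lt: 2 } } against
// an array-valued score field: the document matches when each condition
// is satisfied by SOME element, not necessarily the same element.
function matchesRange(scores, gt, lt) {
  const satisfiesGt = scores.some((s) => s > gt);
  const satisfiesLt = scores.some((s) => s < lt);
  return satisfiesGt && satisfiesLt;
}

console.log(matchesRange([-1, 3], 0, 2)); // true: 3 > 0 and -1 < 2
console.log(matchesRange([1, 5], 0, 2));  // true: 1 satisfies both conditions
console.log(matchesRange([5, 5], 0, 2));  // false: no element is < 2
```

To require that a single element satisfy all conditions, use $elemMatch, as shown in the next examples.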
See also:
specify-multiple-criteria-for-array-elements
Query Arrays
Query for an Array Element The following operation returns documents in the bios collection where the
array field contribs contains the element "UNIX":
db.bios.find( { contribs: "UNIX" } )
Query an Array of Documents The following operation returns documents in the bios collection where
awards array contains an embedded document element that contains the award field equal to "Turing Award"
and the year field greater than 1980:
db.bios.find(
{
awards: {
$elemMatch: {
award: "Turing Award",
year: { $gt: 1980 }
}
}
}
)
Query Exact Matches on Embedded Documents The following operation returns documents in the bios
collection where the embedded document name is exactly { first: "Yukihiro", last: "Matsumoto" },
including the field order:
db.bios.find(
    { name: { first: "Yukihiro", last: "Matsumoto" } }
)
The name field must match the embedded document exactly. The query does not match documents with the following
name fields:
{
first: "Yukihiro",
aka: "Matz",
last: "Matsumoto"
}
{
last: "Matsumoto",
first: "Yukihiro"
}
Query Fields of an Embedded Document The following operation returns documents in the bios collection
where the embedded document name contains a field first with the value "Yukihiro" and a field last with the
value "Matsumoto". The query uses dot notation to access fields in an embedded document:
db.bios.find(
{
"name.first": "Yukihiro",
"name.last": "Matsumoto"
}
)
The query matches the document where the name field contains an embedded document with the field first with
the value "Yukihiro" and a field last with the value "Matsumoto". For instance, the query would match
documents with name fields that held either of the following values:
{
first: "Yukihiro",
aka: "Matz",
last: "Matsumoto"
}
{
last: "Matsumoto",
first: "Yukihiro"
}
Projections The projection parameter specifies which fields to return. The parameter contains either include or
exclude specifications, not both, unless the exclude is for the _id field.
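A minimal JavaScript sketch of applying an inclusion projection to a single document may help. The applyInclusionProjection function is a hypothetical helper, not the server's implementation:

```javascript
// Hypothetical sketch of applying an inclusion projection such as
// { item: 1, qty: 1 }: keep the listed fields, and keep _id unless the
// projection explicitly sets _id: 0.
function applyInclusionProjection(doc, projection) {
  const out = {};
  if (projection._id !== 0 && "_id" in doc) {
    out._id = doc._id; // _id is returned by default
  }
  for (const [field, flag] of Object.entries(projection)) {
    if (field !== "_id" && flag === 1 && field in doc) {
      out[field] = doc[field];
    }
  }
  return out;
}

const doc = { _id: 7, item: "pencil", qty: 30, type: "no.2" };

console.log(applyInclusionProjection(doc, { item: 1, qty: 1 }));
// → { _id: 7, item: "pencil", qty: 30 }
console.log(applyInclusionProjection(doc, { item: 1, _id: 0 }));
// → { item: "pencil" }
```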
Specify the Fields to Return The following operation returns all the documents from the products collection
where qty is greater than 25 and returns only the _id, item and qty fields:
db.products.find( { qty: { $gt: 25 } }, { item: 1, qty: 1 } )
The following operation finds all documents in the bios collection and returns only the name field, contribs
field and _id field:
db.bios.find( { }, { name: 1, contribs: 1 } )
Explicitly Excluded Fields The following operation queries the bios collection and returns all fields except
the first field in the name embedded document and the birth field:
db.bios.find(
{ contribs: 'OOP' },
{ 'name.first': 0, birth: 0 }
)
Explicitly Exclude the _id Field The following operation excludes the _id and qty fields from the result set:
db.products.find( { qty: { $gt: 25 } }, { _id: 0, qty: 0 } )
The documents in the result set contain all fields except the _id and qty fields:
{ "item" : "pencil", "type" : "no.2" }
{ "item" : "bottle", "type" : "blue" }
{ "item" : "paper" }
The following operation finds documents in the bios collection and returns only the name field and the
contribs field:
db.bios.find(
{ },
{ name: 1, contribs: 1, _id: 0 }
)
On Arrays and Embedded Documents The following operation queries the bios collection and returns the
last field in the name embedded document and the first two elements in the contribs array:
db.bios.find(
{ },
{
_id: 0,
'name.last': 1,
contribs: { $slice: 2 }
}
)
Iterate the Returned Cursor The find() (page 51) method returns a cursor to the results.
In the mongo (page 794) shell, if the returned cursor is not assigned to a variable using the var keyword, the
cursor is automatically iterated to access up to the first 20 documents that match the query. You can set the
DBQuery.shellBatchSize variable to change the number of automatically iterated documents.
To manually iterate over the results, assign the returned cursor to a variable with the var keyword, as shown in the
following sections.
With Variable Name The following example uses the variable myCursor to iterate over the cursor and print the
matching documents:
var myCursor = db.bios.find( );
myCursor
With next() Method The following example uses the cursor method next() (page 149) to access the documents:
var myCursor = db.bios.find( );
var myDocument = myCursor.hasNext() ? myCursor.next() : null;
if (myDocument) {
    var myName = myDocument.name;
    print(tojson(myName));
}
To print, you can also use the printjson() method instead of print(tojson()):
if (myDocument) {
    var myName = myDocument.name;
    printjson(myName);
}
With forEach() Method The following example uses the cursor method forEach() (page 140) to iterate the
cursor and access the documents:
var myCursor = db.bios.find( );
myCursor.forEach(printjson);
Modify the Cursor Behavior The mongo (page 794) shell and the drivers provide several cursor methods that
call on the cursor returned by the find() (page 51) method to modify its behavior.
Order Documents in the Result Set The sort() (page 155) method orders the documents in the result set. The
following operation returns documents in the bios collection sorted in ascending order by the name field:
db.bios.find().sort( { name: 1 } )
The following statements chain the cursor methods limit() (page 143) and sort() (page 155):
db.bios.find().sort( { name: 1 } ).limit( 5 )
db.bios.find().limit( 5 ).sort( { name: 1 } )
The two statements are equivalent; i.e. the order in which you chain the limit() (page 143) and the sort()
(page 155) methods is not significant. Both statements return the first five documents, as determined by the ascending
sort order on name.
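Why chaining order does not matter can be sketched with a toy cursor in plain JavaScript: the modifiers only record options, and the sort is applied before the limit when the query actually runs. The makeCursor function is a hypothetical model, not the shell's cursor:

```javascript
// Hypothetical toy cursor: sort() and limit() only record options; the
// sort is applied before the limit when toArray() runs, so the chaining
// order of the two modifiers is irrelevant.
function makeCursor(docs) {
  const opts = { limit: Infinity, sortField: null };
  const cursor = {
    sort(spec) { opts.sortField = Object.keys(spec)[0]; return cursor; },
    limit(n) { opts.limit = n; return cursor; },
    toArray() {
      const sorted = [...docs].sort((a, b) =>
        a[opts.sortField] < b[opts.sortField] ? -1 : 1
      );
      return sorted.slice(0, opts.limit);
    }
  };
  return cursor;
}

const bios = [{ name: "C" }, { name: "A" }, { name: "B" }];
const first = makeCursor(bios).sort({ name: 1 }).limit(2).toArray();
const second = makeCursor(bios).limit(2).sort({ name: 1 }).toArray();
// Both return [ { name: "A" }, { name: "B" } ]
```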
db.collection.findAndModify()
On this page
Definition
db.collection.findAndModify(document)
Modifies and returns a single document. By default, the returned document does not include the modifications
made on the update. To return the document with the modifications made on the update, use the new option. The
findAndModify() (page 57) method is a shell helper around the findAndModify (page 346) command.
The findAndModify() (page 57) method has the following form:
db.collection.findAndModify({
query: <document>,
sort: <document>,
remove: <boolean>,
update: <document>,
new: <boolean>,
fields: <document>,
upsert: <boolean>,
bypassDocumentValidation: <boolean>,
writeConcern: <document>
});
The db.collection.findAndModify() (page 57) method takes a document parameter with the following embedded document fields:
param document query Optional. The selection criteria for the modification. The query field employs the same query selectors (page 519) as used in the db.collection.find() (page 51)
method. Although the query may match multiple documents, findAndModify() (page 57)
will only select one document to modify.
param document sort Optional. Determines which document the operation modifies if the query
selects multiple documents. findAndModify() (page 57) modifies the first document in the
sort order specified by this argument.
param boolean remove Must specify either the remove or the update field. Removes the document specified in the query field. Set this to true to remove the selected document. The
default is false.
param document update Must specify either the remove or the update field. Performs an update of the selected document. The update field employs the same update operators (page 587)
or field: value specifications to modify the selected document.
param boolean new Optional. When true, returns the modified document rather than the original.
The findAndModify() (page 57) method ignores the new option for remove operations.
Returns For remove operations, the removed document if the query matches a document; otherwise, null. For
update operations, the pre-modification document (or, with new: true, the modified document) if the query
matches a document; otherwise, null.
Changed in version 3.0: In previous versions, if for the update, sort is specified, and upsert: true, and the
new option is not set or new: false, db.collection.findAndModify() (page 57) returns an empty
document {} instead of null.
Behavior
Upsert and Unique Index When findAndModify() (page 57) includes the upsert: true option and the
query field(s) is not uniquely indexed, the method could insert a document multiple times in certain circumstances.
In the following example, no document with the name Andy exists, and multiple clients issue the following command:
db.people.findAndModify({
query: { name: "Andy" },
sort: { rating: 1 },
update: { $inc: { score: 1 } },
upsert: true
})
Then, if these clients' findAndModify() (page 57) methods finish the query phase before any command starts
the modify phase, and there is no unique index on the name field, the commands may all perform an upsert, creating
multiple duplicate documents.
To prevent the creation of multiple duplicate documents, create a unique index on the name field. With the unique
index in place, the multiple methods will exhibit one of the following behaviors:
Exactly one findAndModify() (page 57) successfully inserts a new document.
Zero or more findAndModify() (page 57) methods update the newly inserted document.
Zero or more findAndModify() (page 57) methods fail when they attempt to insert a duplicate. If the
method fails due to a unique index constraint violation, you can retry the method. Absent a delete of the
document, the retry should not fail.
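The race described above can be simulated in plain JavaScript. The upsertWithoutUniqueIndex function is a hypothetical model of clients that all finish the query phase before any begins the modify phase:

```javascript
// Hypothetical simulation of the upsert race: every client finishes the
// query phase (no match found) before any client starts the modify
// phase, so each one inserts and duplicates appear.
function upsertWithoutUniqueIndex(collection, queries) {
  // Query phase for all clients, against the same initial state...
  const misses = queries.filter(
    (q) => !collection.some((doc) => doc.name === q.name)
  );
  // ...then each client's modify phase inserts on its recorded miss.
  for (const q of misses) {
    collection.push({ name: q.name, score: 1 });
  }
}

const people = [];
upsertWithoutUniqueIndex(people, [{ name: "Andy" }, { name: "Andy" }]);
// people now holds two "Andy" documents, the duplicates that a unique
// index on name would have prevented.
```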
Sharded Collections When using findAndModify (page 346) in a sharded environment, the query must contain the shard key for all operations against the sharded cluster for the sharded collections.
findAndModify (page 346) operations issued against mongos (page 784) instances for non-sharded collections
function normally.
Document Validation The db.collection.findAndModify() (page 57) method adds support for the
bypassDocumentValidation option, which lets you bypass document validation (page 977) when inserting
or updating documents in a collection with validation rules.
Comparisons with the update Method When updating a document, findAndModify() (page 57) and the
update() (page 116) method operate differently:
By default, both operations modify a single document. However, the update() (page 116) method with its
multi option can modify more than one document.
If multiple documents match the update criteria, for findAndModify() (page 57), you can specify a sort
to provide some measure of control on which document to update.
With the default behavior of the update() (page 116) method, you cannot specify which single document to
update when multiple documents match.
By default, findAndModify() (page 57) returns the pre-modified version of the document. To obtain the
updated document, use the new option.
The update() (page 116) method returns a WriteResult (page 288) object that contains the status of the
operation. To return the updated document, use the find() (page 51) method. However, other updates may
have modified the document between your update and the document retrieval. Also, if the update modified only
a single document but multiple documents matched, you will need to use additional logic to identify the updated
document.
When modifying a single document, both findAndModify() (page 57) and the update() (page 116)
method atomically update the document. See
https://docs.mongodb.org/manual/core/write-operations-atomicity for more details
about interactions and order of operations of these methods.
Examples
Update and Return The following method updates and returns an existing document in the people collection where
the document matches the query criteria:
db.people.findAndModify({
query: { name: "Tom", state: "active", rating: { $gt: 10 } },
sort: { rating: 1 },
update: { $inc: { score: 1 } }
})
To return the modified document, add the new:true option to the method.
If no document matched the query condition, the method returns null.
Upsert The following method includes the upsert: true option for the update operation to either update a
matching document or, if no matching document exists, create a new document:
db.people.findAndModify({
query: { name: "Gus", state: "active", rating: 100 },
sort: { rating: 1 },
update: { $inc: { score: 1 } },
upsert: true
})
Because the method does not include the new: true option, the method returns null after inserting the new
document:
null
Return New Document The following method includes both the upsert: true option and the new:true option. The method either updates a matching document and returns the updated document or, if no matching document
exists, inserts a document and returns the newly inserted document in the value field.
In the following example, no document in the people collection matches the query condition:
db.people.findAndModify({
query: { name: "Pascal", state: "active", rating: 25 },
sort: { rating: 1 },
update: { $inc: { score: 1 } },
upsert: true,
new: true
})
Sort and Remove By including a sort specification on the rating field, the following example removes from
the people collection a single document with the state value of active and the lowest rating among the
matching documents:
db.people.findAndModify(
{
query: { state: "active" },
sort: { rating: 1 },
remove: true
}
)
See also:
https://docs.mongodb.org/manual/tutorial/perform-findAndModify-quorum-reads
db.collection.findOne()
On this page
Definition (page 61)
Examples (page 62)
Definition
db.collection.findOne(query, projection)
Returns one document that satisfies the specified query criteria. If multiple documents satisfy the query, this
method returns the first document according to the natural order which reflects the order of documents on the
disk. In capped collections, natural order is the same as insertion order. If no document satisfies the query, the
method returns null.
param document query Optional. Specifies query selection criteria using query operators
(page 519).
param document projection Optional. Specifies the fields to return using projection operators
(page 580). Omit this parameter to return all fields in the matching document.
The projection parameter takes a document of the following form:
{ field1: <boolean>, field2: <boolean> ... }
With a Query Specification The following operation returns the first matching document from the bios
collection where either the field first in the embedded document name starts with the letter G or where
the field birth is less than new Date('01/01/1945'):
db.bios.findOne(
{
$or: [
{ 'name.first' : /^G/ },
{ birth: { $lt: new Date('01/01/1945') } }
]
}
)
With a Projection The projection parameter specifies which fields to return. The parameter contains either
include or exclude specifications, not both, unless the exclude is for the _id field.
Specify the Fields to Return The following operation finds a document in the bios collection and returns
only the name, contribs and _id fields:
db.bios.findOne(
{ },
{ name: 1, contribs: 1 }
)
Return All but the Excluded Fields The following operation returns a document in the bios collection
where the contribs field contains the element OOP and returns all fields except the _id field, the first field in
the name embedded document, and the birth field:
db.bios.findOne(
{ contribs: 'OOP' },
{ _id: 0, 'name.first': 0, birth: 0 }
)
The findOne Result Document You cannot apply cursor methods to the result of findOne() (page 61) because
a single document is returned. You have access to the document directly:
var myDocument = db.bios.findOne();
if (myDocument) {
var myName = myDocument.name;
print (tojson(myName));
}
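The semantics can be sketched in plain JavaScript (a hypothetical in-memory model, not the server implementation) to show why findOne() yields a document, or null, rather than a cursor:

```javascript
// Hypothetical in-memory model of findOne() semantics. A real collection
// stores BSON on the server; this sketch only illustrates why findOne()
// returns the first matching document (or null), never a cursor.
function findOne(docs, predicate) {
  for (const doc of docs) {          // scan in natural (stored) order
    if (predicate(doc)) return doc;  // first match wins
  }
  return null;                       // no match: null, not an empty cursor
}

const bios = [
  { _id: 1, name: { first: "Grace" } },
  { _id: 2, name: { first: "Alan" } }
];

const hit  = findOne(bios, d => /^G/.test(d.name.first));
const miss = findOne(bios, d => d.name.first === "Ada");
console.log(hit._id);  // 1
console.log(miss);     // null
```

Because the result is a plain document, you can read fields from it directly, exactly as the shell example above does with myDocument.name.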
db.collection.findOneAndDelete()
On this page
Definition (page 63)
Behavior (page 64)
Definition
db.collection.findOneAndDelete(filter, options)
New in version 3.2.
Deletes a single document based on the filter and sort criteria, returning the deleted document.
The findOneAndDelete() (page 63) method has the following form:
db.collection.findOneAndDelete(
<filter>,
{
projection: <document>,
sort: <document>,
maxTimeMS: <number>,
}
)
param document filter The selection criteria for the deletion. The same query selectors (page 519)
as in the find() (page 51) method are available.
Specify an empty document { } to delete the first document returned in the collection.
param document projection Optional. A subset of fields to return.
To return all fields in the returned document, omit this parameter.
param document sort Optional. Specifies a sorting order for the documents matched by the
filter.
param number maxTimeMS Optional. Specifies a time limit in milliseconds within which the
operation must complete. Throws an error if the limit is exceeded.
Examples
Delete A Document The scores collection contains documents similar to the following:
{ _id: 6305, name : "A. MacDyver", "assignment" : 5, "points" : 10 }
{ _id: 6308, name : "B. Batlock", "assignment" : 3, "points" : 15 }
{ _id: 6312, name : "M. Tagnum", "assignment" : 5, "points" : 30 }
{ _id: 6319, name : "R. Stiles", "assignment" : 2, "points" : 12 }
{ _id: 6322, name : "A. MacDyver", "assignment" : 2, "points" : 14 }
{ _id: 6234, name : "R. Coolsaet", "assignment" : 2, "points" : 12 }
The following operation finds and deletes the first document where name : "M. Tagnum":
db.scores.findOneAndDelete(
{ "name" : "M. Tagnum" }
)
The operation returns the original document that has been deleted:
{ _id: 6312, name: "M. Tagnum", "assignment" : 5, "points" : 30 }
Sort And Delete A Document The scores collection contains documents similar to the following:
{ _id: 6305, name : "A. MacDyver", "assignment" : 5, "points" : 10 }
{ _id: 6308, name : "B. Batlock", "assignment" : 3, "points" : 15 }
{ _id: 6312, name : "M. Tagnum", "assignment" : 5, "points" : 30 }
{ _id: 6319, name : "R. Stiles", "assignment" : 2, "points" : 12 }
{ _id: 6322, name : "A. MacDyver", "assignment" : 2, "points" : 14 }
{ _id: 6234, name : "R. Coolsaet", "assignment" : 2, "points" : 12 }
The following operation first finds all documents where name : "A. MacDyver". It then sorts by points
ascending before deleting the document with the lowest points value:
db.scores.findOneAndDelete(
{ "name" : "A. MacDyver" },
{ sort : { "points" : 1 } }
)
The operation returns the original document that has been deleted:
{ _id: 6322, name: "A. MacDyver", "assignment" : 2, "points" : 14 }
Projecting the Deleted Document The following operation uses projection to only return the _id and
assignment fields in the returned document:
db.scores.findOneAndDelete(
{ "name" : "A. MacDyver" },
{ sort : { "points" : 1 }, projection: { "assignment" : 1 } }
)
The operation returns the original document with the assignment and _id fields:
{ _id: 6322, "assignment" : 2 }
Delete Document with Time Limit The following operation sets a 5ms time limit to complete the deletion:
try {
db.scores.findOneAndDelete(
{ "name" : "A. MacDyver" },
{ sort : { "points" : 1 }, maxTimeMS : 5 }
);
}
catch(e){
print(e);
}
Error: findAndModifyFailed failed: { "ok" : 0, "errmsg" : "operation exceeded time limit", "code" : 5
db.collection.findOneAndReplace()
On this page
Definition (page 66)
Behavior (page 67)
Definition
db.collection.findOneAndReplace(filter, replacement, options)
New in version 3.2.
Modifies and replaces a single document based on the filter and sort criteria.
The findOneAndReplace() (page 66) method has the following form:
db.collection.findOneAndReplace(
<filter>,
<replacement>,
{
projection: <document>,
sort: <document>,
maxTimeMS: <number>,
upsert: <boolean>,
returnNewDocument: <boolean>
}
)
Behavior findOneAndReplace() (page 66) replaces the first matching document in the collection that matches
the filter. The sort parameter can be used to influence which document is modified.
The projection parameter takes a document in the following form:
{ field1 : < boolean >, field2 : < boolean> ... }
Examples
Replace A Document The scores collection contains documents similar to the following:
{ "_id" : 1521, "team" : "Fearful Mallards", "score" : 25000 }
{ "_id" : 2231, "team" : "Tactful Mooses", "score" : 23500 }
{ "_id" : 4511, "team" : "Aquatic Ponies", "score" : 19250 }
{ "_id" : 5331, "team" : "Cuddly Zebras", "score" : 15235 }
{ "_id" : 3412, "team" : "Garrulous Bears", "score" : 22300 }
The following operation finds the first document with score less than 20000 and replaces it:
db.scores.findOneAndReplace(
{ "score" : { $lt : 20000 } },
{ "team" : "Observant Badgers", "score" : 20000 }
)
The operation returns the original document that has been replaced:
{ "_id" : 4511, "team" : "Aquatic Ponies", "score" : 19250 }
If returnNewDocument was true, the operation would return the replacement document instead.
Sort and Replace A Document The scores collection contains documents similar to the following:
{ "_id" : 1521, "team" : "Fearful Mallards", "score" : 25000 }
{ "_id" : 2231, "team" : "Tactful Mooses", "score" : 23500 }
{ "_id" : 4511, "team" : "Aquatic Ponies", "score" : 19250 }
{ "_id" : 5331, "team" : "Cuddly Zebras", "score" : 15235 }
{ "_id" : 3412, "team" : "Garrulous Bears", "score" : 22300 }
Sorting by score changes the result of the operation. The following operation sorts the result of the filter by
score ascending, and replaces the lowest scoring document:
db.scores.findOneAndReplace(
{ "score" : { $lt : 20000 } },
{ "team" : "Observant Badgers", "score" : 20000 },
{ sort: { "score" : 1 } }
)
The operation returns the original document that has been replaced:
{ "_id" : 5331, "team" : "Cuddly Zebras", "score" : 15235 }
See Replace A Document (page 67) for the non-sorted result of this command.
Project the Returned Document The scores collection contains documents similar to the following:
{ "_id" : 1521, "team" : "Fearful Mallards", "score" : 25000 }
{ "_id" : 2231, "team" : "Tactful Mooses", "score" : 23500 }
{ "_id" : 4511, "team" : "Aquatic Ponies", "score" : 19250 }
{ "_id" : 5331, "team" : "Cuddly Zebras", "score" : 15235 }
{ "_id" : 3412, "team" : "Garrulous Bears", "score" : 22300 }
The following operation uses projection to only display the team field in the returned document:
db.scores.findOneAndReplace(
{ "score" : { $lt : 22250 } },
{ "team" : "Therapeutic Hamsters", "score" : 22250 },
{ sort : { "score" : 1 }, projection: { "_id" : 0, "team" : 1 } }
)
The operation returns the original document with only the team field:
{ "team" : "Aquatic Ponies"}
Replace Document with Time Limit The following operation sets a 5ms time limit to complete:
try {
db.scores.findOneAndReplace(
{ "score" : { $gt : 25000 } },
{ "team" : "Emphatic Rhinos", "score" : 25010 },
{ maxTimeMS: 5 }
);
}
catch(e){
print(e);
}
Error: findAndModifyFailed failed: { "ok" : 0, "errmsg" : "operation exceeded time limit", "code" : 5
Replace Document with Upsert The following operation uses the upsert field to insert the replacement document
if nothing matches the filter:
try {
db.scores.findOneAndReplace(
{ "team" : "Fortified Lobsters" },
{ "_id" : 6019, "team" : "Fortified Lobsters" , "score" : 32000},
{ upsert : true, returnNewDocument: true }
);
}
catch (e){
print(e);
}
{
"_id" : 6019,
"team" : "Fortified Lobsters",
"score" : 32000
}
If returnNewDocument was false, the operation would return null as there is no original document to return.
db.collection.findOneAndUpdate()
On this page
Definition (page 69)
Behavior (page 70)
Definition
db.collection.findOneAndUpdate(filter, update, options)
New in version 3.2.
Updates a single document based on the filter and sort criteria.
The findOneAndUpdate() (page 69) method has the following form:
db.collection.findOneAndUpdate(
<filter>,
<update>,
{
projection: <document>,
sort: <document>,
maxTimeMS: <number>,
upsert: <boolean>,
returnNewDocument: <boolean>
}
)
param number maxTimeMS Optional. Specifies a time limit in milliseconds within which the
operation must complete. Throws an error if the limit is exceeded.
param boolean upsert Optional. When true, findOneAndUpdate() (page 69) creates a new
document if no document matches the filter. If a document matches the filter, the method
performs an update.
The new document is created using the equality conditions from the filter with the modifications from the update document.
Comparison conditions like $gt (page 521) or $lt (page 522) are ignored.
Returns null after inserting the new document, unless returnNewDocument is true.
Defaults to false.
param boolean returnNewDocument Optional. When true, returns the replacement document
instead of the original document.
Defaults to false.
Returns Returns either the original document or, if returnNewDocument: true, the updated
document.
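The way an upsert composes the new document can be sketched in plain JavaScript. This is an illustrative model only, covering just $set and $inc; the composeUpsert helper is hypothetical, not part of any driver:

```javascript
// Hypothetical sketch of how an upsert composes the inserted document:
// equality conditions from the filter are kept, comparison operators
// (e.g. { $gt: ... }) are ignored, then update operators are applied.
function composeUpsert(filter, update) {
  const doc = {};
  for (const [field, cond] of Object.entries(filter)) {
    // keep only simple equality conditions; skip operator documents
    if (typeof cond !== "object" || cond === null) doc[field] = cond;
  }
  if (update.$set) Object.assign(doc, update.$set);
  if (update.$inc) {
    for (const [field, n] of Object.entries(update.$inc)) {
      doc[field] = (doc[field] || 0) + n;   // missing field starts at 0
    }
  }
  return doc;
}

const doc = composeUpsert(
  { name: "A.B. Abracus", points: { $gt: 15 } },  // $gt is ignored
  { $set: { assignment: 5 }, $inc: { points: 5 } }
);
console.log(doc);  // { name: "A.B. Abracus", assignment: 5, points: 5 }
```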
Behavior findOneAndUpdate() (page 69) updates the first matching document in the collection that matches
the filter. The sort parameter can be used to influence which document is updated.
The projection parameter takes a document in the following form:
{ field1 : < boolean >, field2 : < boolean> ... }
Examples
Update A Document The scores collection contains documents similar to the following:
{ _id: 6305, name : "A. MacDyver", "assignment" : 5, "points" : 10 }
{ _id: 6308, name : "B. Batlock", "assignment" : 3, "points" : 15 }
{ _id: 6312, name : "M. Tagnum", "assignment" : 5, "points" : 30 }
{ _id: 6319, name : "R. Stiles", "assignment" : 2, "points" : 12 }
{ _id: 6322, name : "A. MacDyver", "assignment" : 2, "points" : 14 }
{ _id: 6234, name : "R. Coolsaet", "assignment" : 2, "points" : 12 }
The following operation finds and updates the first document where name : "R. Stiles":
db.scores.findOneAndUpdate(
{ "name" : "R. Stiles" },
{ $inc: { "points" : 5 } }
)
The operation returns the original document before the update:
{ _id: 6319, name: "R. Stiles", "assignment" : 2, "points" : 12 }
If returnNewDocument was true, the operation would return the updated document instead.
Sort And Update A Document The scores collection contains documents similar to the following:
{ _id: 6305, name : "A. MacDyver", "assignment" : 5, "points" : 10 }
{ _id: 6308, name : "B. Batlock", "assignment" : 3, "points" : 15 }
{ _id: 6312, name : "M. Tagnum", "assignment" : 5, "points" : 30 }
{ _id: 6319, name : "R. Stiles", "assignment" : 2, "points" : 12 }
{ _id: 6322, name : "A. MacDyver", "assignment" : 2, "points" : 14 }
{ _id: 6234, name : "R. Coolsaet", "assignment" : 2, "points" : 12 }
The following operation updates a document where name : "A. MacDyver". The operation sorts the matching
documents by points ascending to update the matching document with the least points.
db.scores.findOneAndUpdate(
{ "name" : "A. MacDyver" },
{ $inc : { "points" : 5 } },
{ sort : { "points" : 1 } }
)
The operation returns the original document before the update:
{ _id: 6322, name: "A. MacDyver", "assignment" : 2, "points" : 14 }
Project the Returned Document The following operation uses projection to only display the _id, points, and
assignment fields in the returned document:
db.scores.findOneAndUpdate(
{ "name" : "A. MacDyver" },
{ $inc : { "points" : 5 } },
{ sort : { "points" : 1 }, projection: { "assignment" : 1, "points" : 1 } }
)
The operation returns the original document with the _id, assignment, and points fields:
{ "_id" : 6322, "assignment" : 2, "points" : 14 }
Update Document with Time Limit The following operation sets a 5ms time limit to complete the update:
try {
db.scores.findOneAndUpdate(
{ "name" : "A. MacDyver" },
{ $inc : { "points" : 5 } },
{ sort: { "points" : 1 }, maxTimeMS : 5 }
);
}
catch(e){
print(e);
}
Error: findAndModifyFailed failed: { "ok" : 0, "errmsg" : "operation exceeded time limit", "code" : 5
Update Document with Upsert The following operation uses the upsert field to insert the update document if
nothing matches the filter:
try {
db.scores.findOneAndUpdate(
{ "name" : "A.B. Abracus" },
{ $set: { "name" : "A.B. Abracus", "assignment" : 5}, $inc : { "points" : 5 } },
{ sort: { "points" : 1 }, upsert: true, returnNewDocument : true }
);
}
catch (e){
print(e);
}
If returnNewDocument was false, the operation would return null as there is no original document to return.
db.collection.getIndexes()
On this page
Definition (page 72)
Considerations (page 72)
Output (page 72)
Definition
db.collection.getIndexes()
Returns an array that holds a list of documents that identify and describe the existing indexes on the collection.
You must call db.collection.getIndexes() (page 72) on a collection. For example:
db.collection.getIndexes()
Change collection to the name of the collection for which to return index information.
Considerations Changed in version 3.0.0.
For MongoDB 3.0 deployments using the WiredTiger storage engine, if you run
db.collection.getIndexes() (page 72) from a version of the mongo (page 794) shell before 3.0 or a
version of the driver prior to the 3.0 compatible version (page 1037), db.collection.getIndexes() (page 72)
will return no data, even if there are existing indexes. For more information, see WiredTiger and Driver Version
Compatibility (page 1033).
Output db.collection.getIndexes() (page 72) returns an array of documents that hold index information
for the collection. Index information includes the keys and options used to create the index. For information on the
keys and index options, see db.collection.createIndex() (page 36).
db.collection.getShardDistribution()
On this page
Definition (page 73)
Output (page 73)
Example Output (page 74)
Definition
db.collection.getShardDistribution()
Prints the data distribution statistics for a sharded collection.
You must call the
getShardDistribution() (page 73) method on a sharded collection, as in the following example:
db.myShardedCollection.getShardDistribution()
In the following example, the collection has two shards. The output displays both the individual shard distribution information as well as the total shard distribution:
Shard <shard-a> at <host-a>
data : <size-a> docs : <count-a> chunks : <number of chunks-a>
estimated data per chunk : <size-a>/<number of chunks-a>
estimated docs per chunk : <count-a>/<number of chunks-a>
Shard <shard-b> at <host-b>
data : <size-b> docs : <count-b> chunks : <number of chunks-b>
estimated data per chunk : <size-b>/<number of chunks-b>
estimated docs per chunk : <count-b>/<number of chunks-b>
Totals
data : <stats.size> docs : <stats.count> chunks : <calc total chunks>
Shard <shard-a> contains <estDataPercent-a>% data, <estDocPercent-a>% docs in cluster, avg obj
Shard <shard-b> contains <estDataPercent-b>% data, <estDocPercent-b>% docs in cluster, avg obj
See also:
https://docs.mongodb.org/manual/sharding
Output The output information displays:
<shard-x> is a string that holds the shard name.
<host-x> is a string that holds the host name(s).
<size-x> is a number that includes the size of the data, including the unit of measure (e.g. b, Mb).
<count-x> is a number that reports the number of documents in the shard.
<number of chunks-x> is a number that reports the number of chunks in the shard.
<size-x>/<number of chunks-x> is a calculated value that reflects the estimated data size per chunk
for the shard, including the unit of measure (e.g. b, Mb).
<count-x>/<number of chunks-x> is a calculated value that reflects the estimated number of documents per chunk for the shard.
<stats.size> is a value that reports the total size of the data in the sharded collection, including the unit of
measure.
<stats.count> is a value that reports the total number of documents in the sharded collection.
<calc total chunks> is a calculated number that reports the number of chunks from all shards, for example:
<calc total chunks> = <number of chunks-a> + <number of chunks-b>
<estDataPercent-x> is a calculated value that reflects, for each shard, the data size as the percentage of
the collection's total data size, for example:
<estDataPercent-x> = <size-x>/<stats.size>
<estDocPercent-x> is a calculated value that reflects, for each shard, the number of documents as the
percentage of the total number of documents for the collection, for example:
<estDocPercent-x> = <count-x>/<stats.count>
stats.shards[ <shard-x> ].avgObjSize is a number that reflects the average object size, including
the unit of measure, for the shard.
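The calculated values above can be sketched in plain JavaScript. The byte counts below are made up for illustration; this is not the shell's implementation:

```javascript
// Illustrative model of the getShardDistribution() calculations,
// using hypothetical per-shard stats (sizes in bytes) as input.
const shards = {
  "shard-a": { size: 40000000, count: 1000003, chunks: 2 },
  "shard-b": { size: 40000000, count:  999999, chunks: 3 }
};

// Totals across all shards (stats.size, stats.count, calc total chunks).
const totalSize   = Object.values(shards).reduce((s, x) => s + x.size, 0);
const totalCount  = Object.values(shards).reduce((s, x) => s + x.count, 0);
const totalChunks = Object.values(shards).reduce((s, x) => s + x.chunks, 0);

for (const [name, s] of Object.entries(shards)) {
  const estDataPercent = (100 * s.size / totalSize).toFixed(2);   // size-x / stats.size
  const estDocPercent  = (100 * s.count / totalCount).toFixed(2); // count-x / stats.count
  const dataPerChunk   = Math.round(s.size / s.chunks);           // estimated data per chunk
  const docsPerChunk   = Math.round(s.count / s.chunks);          // estimated docs per chunk
  console.log(`${name}: ${estDataPercent}% data, ${estDocPercent}% docs, ` +
              `${dataPerChunk} bytes/chunk, ${docsPerChunk} docs/chunk`);
}
console.log(`Totals: ${totalSize} bytes, ${totalCount} docs, ${totalChunks} chunks`);
```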
Example Output For example, the following is a sample output for the distribution of a sharded collection:
Shard shard-a at shard-a/MyMachine.local:30000,MyMachine.local:30001,MyMachine.local:30002
data : 38.14Mb docs : 1000003 chunks : 2
estimated data per chunk : 19.07Mb
estimated docs per chunk : 500001
Shard shard-b at shard-b/MyMachine.local:30100,MyMachine.local:30101,MyMachine.local:30102
data : 38.14Mb docs : 999999 chunks : 3
estimated data per chunk : 12.71Mb
estimated docs per chunk : 333333
Totals
data : 76.29Mb docs : 2000002 chunks : 5
Shard shard-a contains 50% data, 50% docs in cluster, avg obj size on shard : 40b
Shard shard-b contains 49.99% data, 49.99% docs in cluster, avg obj size on shard : 40b
db.collection.getShardVersion()
db.collection.getShardVersion()
This method returns information regarding the state of data in a sharded cluster that is useful when diagnosing
underlying issues with a sharded cluster.
For internal and diagnostic use only.
db.collection.group()
On this page
Definition (page 75)
Behavior (page 75)
Examples (page 76)
Recommended Alternatives
Because db.collection.group() (page 75) uses JavaScript, it is subject to a number of performance limitations. For most cases the $group (page 636) operator in the aggregation pipeline provides a suitable
alternative with fewer restrictions.
Definition
db.collection.group({ key, reduce, initial [, keyf] [, cond] [, finalize] })
Groups documents in a collection by the specified keys and performs simple aggregation functions such as
computing counts and sums. The method is analogous to a SELECT <...> GROUP BY statement in SQL.
The group() (page 75) method returns an array.
The db.collection.group() (page 75) accepts a single document that contains the following:
field document key The field or fields to group. Returns a key object for use as the grouping key.
field function reduce An aggregation function that operates on the documents during the grouping
operation. These functions may return a sum or a count. The function takes two arguments: the
current document and an aggregation result document for that group.
field document initial Initializes the aggregation result document.
field function keyf Optional. Alternative to the key field. Specifies a function that creates a key
object for use as the grouping key. Use keyf instead of key to group by calculated fields
rather than existing document fields.
field document cond The selection criteria to determine which documents in the collection to process. If you omit the cond field, db.collection.group() (page 75) processes all the
documents in the collection for the group operation.
field function finalize Optional.
A function that runs each item in the result set before
db.collection.group() (page 75) returns the final value. This function can either modify the result document or replace the result document as a whole.
The db.collection.group() (page 75) method is a shell wrapper for the group (page 312) command.
However, the db.collection.group() (page 75) method takes the keyf field and the reduce field
whereas the group (page 312) command takes the $keyf field and the $reduce field.
Behavior
Limits and Restrictions The db.collection.group() (page 75) method does not work with sharded clusters.
Use the aggregation framework or map-reduce in sharded environments.
The result set must fit within the maximum BSON document size (page 932).
In version 2.2, the returned array can contain at most 20,000 elements; i.e. at most 20,000 unique groupings. For group
by operations that results in more than 20,000 unique groupings, use mapReduce (page 316). Previous versions had
a limit of 10,000 elements.
Prior to 2.4, the db.collection.group() (page 75) method took the mongod (page 762) instances JavaScript
lock, which blocked all other JavaScript execution.
mongo Shell JavaScript Functions/Properties Changed in version 2.4: In MongoDB 2.4, map-reduce
operations (page 316), the group (page 312) command, and $where (page 550) operator expressions cannot
access certain global functions or properties, such as db, that are available in the mongo (page 794) shell.
When upgrading to MongoDB 2.4, you will need to refactor your code if your map-reduce operations
(page 316), group (page 312) commands, or $where (page 550) operator expressions include any global shell
functions or properties that are no longer available, such as db.
The following JavaScript functions and properties are available to map-reduce operations (page 316), the
group (page 312) command, and $where (page 550) operator expressions in MongoDB 2.4:
Available Properties: args, MaxKey, MinKey
Available Functions: assert(), BinData(), DBPointer(), DBRef(), doassert(), emit(), gc(), HexData(),
hex_md5(), isNumber(), isObject(), ISODate(), isString(), Map(), MD5(), NumberInt(), NumberLong(),
ObjectId(), print(), printjson(), printjsononeline(), sleep(), Timestamp(), tojson(), tojsononeline(),
tojsonObject(), UUID(), version()
Examples The following examples assume an orders collection with documents of the following prototype:
{
_id: ObjectId("5085a95c8fada716c89d0021"),
ord_dt: ISODate("2012-07-01T04:00:00Z"),
ship_dt: ISODate("2012-07-02T04:00:00Z"),
item: { sku: "abc123",
price: 1.99,
uom: "pcs",
qty: 25 }
}
Group by Two Fields The following example groups by the ord_dt and item.sku fields those documents that
have ord_dt greater than 01/01/2012:
db.orders.group(
{
key: { ord_dt: 1, 'item.sku': 1 },
cond: { ord_dt: { $gt: new Date( '01/01/2012' ) } },
reduce: function ( curr, result ) { },
initial: { }
}
)
The result is an array of documents that contain the group by fields:
[
{ "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "abc123" },
{ "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "abc456" },
{ "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "bcd123" },
{ "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "efg456" },
{ "ord_dt" : ISODate("2012-06-01T04:00:00Z"), "item.sku" : "abc123" },
{ "ord_dt" : ISODate("2012-06-01T04:00:00Z"), "item.sku" : "efg456" },
{ "ord_dt" : ISODate("2012-06-01T04:00:00Z"), "item.sku" : "ijk123" },
{ "ord_dt" : ISODate("2012-05-01T04:00:00Z"), "item.sku" : "abc123" },
{ "ord_dt" : ISODate("2012-05-01T04:00:00Z"), "item.sku" : "abc456" },
{ "ord_dt" : ISODate("2012-06-08T04:00:00Z"), "item.sku" : "abc123" },
{ "ord_dt" : ISODate("2012-06-08T04:00:00Z"), "item.sku" : "abc456" }
]
Calculate the Sum The following example groups by the ord_dt and item.sku fields, those documents that
have ord_dt greater than 01/01/2012, and calculates the sum of the qty field for each grouping:
db.orders.group(
{
key: { ord_dt: 1, 'item.sku': 1 },
cond: { ord_dt: { $gt: new Date( '01/01/2012' ) } },
reduce: function( curr, result ) {
result.total += curr.item.qty;
},
initial: { total : 0 }
}
)
The result is an array of documents that contain the group by fields and the calculated aggregation field:
[
{ "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "abc123", "total" : 25 },
{ "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "abc456", "total" : 25 },
{ "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "bcd123", "total" : 10 },
{ "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "efg456", "total" : 10 },
{ "ord_dt" : ISODate("2012-06-01T04:00:00Z"), "item.sku" : "abc123", "total" : 25 },
{ "ord_dt" : ISODate("2012-06-01T04:00:00Z"), "item.sku" : "efg456", "total" : 15 },
{ "ord_dt" : ISODate("2012-06-01T04:00:00Z"), "item.sku" : "ijk123", "total" : 20 },
{ "ord_dt" : ISODate("2012-05-01T04:00:00Z"), "item.sku" : "abc123", "total" : 45 },
{ "ord_dt" : ISODate("2012-05-01T04:00:00Z"), "item.sku" : "abc456", "total" : 25 },
{ "ord_dt" : ISODate("2012-06-08T04:00:00Z"), "item.sku" : "abc123", "total" : 25 },
{ "ord_dt" : ISODate("2012-06-08T04:00:00Z"), "item.sku" : "abc456", "total" : 25 }
]
Calculate Sum, Count, and Average The following example groups by the calculated day_of_week field, those
documents that have ord_dt greater than 01/01/2012, and calculates the sum, count, and average of the qty field
for each grouping:
db.orders.group(
{
keyf: function(doc) {
return { day_of_week: doc.ord_dt.getDay() };
},
cond: { ord_dt: { $gt: new Date( '01/01/2012' ) } },
reduce: function( curr, result ) {
result.total += curr.item.qty;
result.count++;
},
initial: { total : 0, count: 0 },
finalize: function(result) {
var weekdays = [
"Sunday", "Monday", "Tuesday",
"Wednesday", "Thursday",
"Friday", "Saturday"
];
result.day_of_week = weekdays[result.day_of_week];
result.avg = Math.round(result.total / result.count);
}
}
)
The result is an array of documents that contain the group by fields and the calculated aggregation field:
[
{ "day_of_week" : "Sunday", "total" : 70, "count" : 4, "avg" : 18 },
{ "day_of_week" : "Friday", "total" : 110, "count" : 6, "avg" : 18 },
{ "day_of_week" : "Tuesday", "total" : 70, "count" : 3, "avg" : 23 }
]
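To illustrate what the keyf, reduce, initial, and finalize callbacks do, the same day-of-week grouping can be sketched over a plain in-memory array. This is illustrative only, not how the server executes group():

```javascript
// Plain-JavaScript model of group()'s keyf/reduce/initial/finalize steps,
// run over a small in-memory array instead of a collection.
const orders = [
  { ord_dt: new Date("2012-07-01T04:00:00Z"), item: { qty: 25 } },  // Sunday
  { ord_dt: new Date("2012-07-06T04:00:00Z"), item: { qty: 15 } },  // Friday
  { ord_dt: new Date("2012-07-01T04:00:00Z"), item: { qty: 10 } }   // Sunday
];

const groups = {};                       // keyed by the computed grouping key
for (const doc of orders) {
  const key = doc.ord_dt.getUTCDay();    // keyf: group by day of week
  if (!(key in groups)) {                // initial: the starting result doc
    groups[key] = { day_of_week: key, total: 0, count: 0 };
  }
  const result = groups[key];
  result.total += doc.item.qty;          // reduce: accumulate into result
  result.count++;
}
for (const result of Object.values(groups)) {  // finalize: post-process each group
  result.avg = Math.round(result.total / result.count);
}
console.log(Object.values(groups));
```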
See also:
https://docs.mongodb.org/manual/aggregation
db.collection.insert()
On this page
Definition
db.collection.insert()
Inserts a document or documents into a collection.
The insert() (page 78) method has the following syntax:
Changed in version 2.6.
db.collection.insert(
<document or array of documents>,
{
writeConcern: <document>,
ordered: <boolean>
}
)
param document, array document A document or array of documents to insert into the collection.
param document writeConcern Optional. A document expressing the write concern. Omit
to use the default write concern. See Write Concern (page 79).
New in version 2.6.
param boolean ordered Optional. If true, perform an ordered insert of the documents in the
array, and if an error occurs with one of the documents, MongoDB will return without processing
the remaining documents in the array.
If false, perform an unordered insert, and if an error occurs with one of the documents, continue
processing the remaining documents in the array.
Defaults to true.
New in version 2.6.
Changed in version 2.6: The insert() (page 78) returns an object that contains the status of the operation.
Returns
A WriteResult (page 81) object for single inserts.
A BulkWriteResult (page 81) object for bulk inserts.
Behaviors
Write Concern Changed in version 2.6.
The insert() (page 78) method uses the insert (page 336) command, which uses the default write concern.
To specify a different write concern, include the write concern in the options parameter.
Create Collection If the collection does not exist, then the insert() (page 78) method will create the collection.
_id Field If the document does not specify an _id field, then MongoDB will add the _id field and assign a unique
https://docs.mongodb.org/manual/reference/object-id for the document before inserting. Most
drivers create an ObjectId and insert the _id field, but the mongod (page 762) will create and populate the _id if the
driver or application does not.
If the document contains an _id field, the _id value must be unique within the collection to avoid duplicate key error.
Examples The following examples insert documents into the products collection. If the collection does not exist,
the insert() (page 78) method creates the collection.
Insert a Document without Specifying an _id Field In the following example, the document passed to the
insert() (page 78) method does not contain the _id field:
db.products.insert( { item: "card", qty: 15 } )
During the insert, mongod (page 762) will create the _id field and assign it a unique
https://docs.mongodb.org/manual/reference/object-id value, as verified by the inserted
document:
{ "_id" : ObjectId("5063114bd386d8fadbd6b004"), "item" : "card", "qty" : 15 }
The ObjectId values are specific to the machine and time when the operation is run. As such, your values may
differ from those in the example.
Insert a Document Specifying an _id Field In the following example, the document passed to the insert()
(page 78) method includes the _id field. The value of _id must be unique within the collection to avoid duplicate
key error.
db.products.insert( { _id: 10, item: "box", qty: 20 } )
Insert Multiple Documents The following example performs a bulk insert of three documents by passing an array
of documents to the insert() (page 78) method. By default, MongoDB performs an ordered insert. With ordered
inserts, if an error occurs during an insert of one of the documents, MongoDB returns on error without processing the
remaining documents in the array.
The documents in the array do not need to have the same fields. For instance, the first document in the array has an
_id field and a type field. Because the second and third documents do not contain an _id field, mongod (page 762)
will create the _id field for the second and third documents during the insert:
db.products.insert(
[
{ _id: 11, item: "pencil", qty: 50, type: "no.2" },
{ item: "pen", qty: 20 },
{ item: "eraser", qty: 25 }
]
)
Perform an Unordered Insert The following example performs an unordered insert of three documents. With
unordered inserts, if an error occurs during an insert of one of the documents, MongoDB continues to insert the
remaining documents in the array.
db.products.insert(
[
{ _id: 20, item: "lamp", qty: 50, type: "desk" },
{ _id: 21, item: "lamp", qty: 20, type: "floor" },
{ _id: 22, item: "bulk", qty: 100 }
],
{ ordered: false }
)
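The difference between ordered and unordered inserts can be sketched in plain JavaScript (a hypothetical simulation, not the server's bulk insert implementation):

```javascript
// Hypothetical sketch of ordered vs. unordered insert semantics:
// ordered stops at the first failing document, unordered keeps going.
function bulkInsert(docs, failsOn, ordered) {
  const inserted = [], errors = [];
  for (const doc of docs) {
    if (doc._id === failsOn) {         // simulate a duplicate key error
      errors.push({ _id: doc._id, code: 11000 });
      if (ordered) break;              // ordered: abort remaining documents
      continue;                        // unordered: skip and continue
    }
    inserted.push(doc);
  }
  return { nInserted: inserted.length, errors };
}

const docs = [{ _id: 1 }, { _id: 2 }, { _id: 3 }];
console.log(bulkInsert(docs, 2, true).nInserted);   // 1  (stops after the error)
console.log(bulkInsert(docs, 2, false).nInserted);  // 2  (skips only _id: 2)
```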
Override Default Write Concern The following operation to a replica set specifies a write concern of "w:
majority" with a wtimeout of 5000 milliseconds such that the method returns after the write propagates to a
majority of the voting replica set members or the method times out after 5 seconds.
Changed in version 3.0: In previous versions, majority referred to the majority of all members of the replica set.
db.products.insert(
{ item: "envelopes", qty : 100, type: "Clasp" },
{ writeConcern: { w: "majority", wtimeout: 5000 } }
)
Write Concern Errors If the insert() (page 78) method encounters write concern errors, the results include the
WriteResult.writeConcernError (page 289) field:
WriteResult({
"nInserted" : 1,
"writeConcernError" : {
"code" : 64,
"errmsg" : "waiting for replication timed out at shard-a"
}
})
Errors Unrelated to Write Concern If the insert() (page 78) method encounters a non-write concern error, the
results include the WriteResult.writeError (page 289) field:
WriteResult({
"nInserted" : 0,
"writeError" : {
"code" : 11000,
"errmsg" : "insertDocument :: caused by :: 11000 E11000 duplicate key error index: test.foo.$_i
}
})
db.collection.insertOne()
On this page
Definition (page 82)
Behaviors (page 82)
Examples (page 83)
Definition
db.collection.insertOne()
New in version 3.2.
Inserts a document into a collection.
The insertOne() (page 82) method has the following syntax:
db.collection.insertOne(
<document>,
{
writeConcern: <document>
}
)
If the collection does not exist, then the insertOne() (page 82) method creates the collection.
_id Field If the document does not specify an _id field, then mongod (page 762) will add the _id field and
assign a unique https://docs.mongodb.org/manual/reference/object-id for the document before
inserting. Most drivers create an ObjectId and insert the _id field, but the mongod (page 762) will create and populate
the _id if the driver or application does not.
If the document contains an _id field, the _id value must be unique within the collection to avoid duplicate key error.
Explainability insertOne() (page 82) is not compatible with db.collection.explain() (page 48).
Use insert() (page 78) instead.
Error Handling On error, insertOne() (page 82) throws either a writeError or writeConcernError
exception.
Examples
Insert a Document without Specifying an _id Field In the following example, the document passed to the
insertOne() (page 82) method does not contain the _id field:
try {
db.products.insertOne( { item: "card", qty: 15 } );
}
catch (e) {
print (e);
}
Because the document did not include _id, mongod (page 762) creates and adds the _id field and assigns it a
unique https://docs.mongodb.org/manual/reference/object-id value.
The ObjectId values are specific to the machine and time when the operation is run. As such, your values may
differ from those in the example.
Insert a Document Specifying an _id Field In the following example, the document passed to the insertOne()
(page 82) method includes the _id field. The value of _id must be unique within the collection to avoid duplicate
key error.
try {
db.products.insertOne( { _id: 10, item: "box", qty: 20 } );
}
catch (e) {
print (e);
}
Inserting a duplicate value for any key that is part of a unique index, such as _id, throws an exception. The following
attempts to insert a document with an _id value that already exists:
try {
db.products.insertOne( { _id: 10, "item" : "packing peanuts", "qty" : 200 } );
}
catch (e) {
print (e);
}
Since _id: 10 already exists in the collection, the operation throws the following WriteError:
WriteError({
"index" : 0,
"code" : 11000,
"errmsg" : "E11000 duplicate key error collection: inventory.products index: _id_ dup key: { : 1.0
"op" : {
"_id" : 10,
"item" : "packing peanuts",
"qty" : 200
}
})
Increasing Write Concern Given a three member replica set, the following operation specifies a w of majority and a
wtimeout of 100 milliseconds:
try {
db.products.insertOne(
{ "item": "envelopes", "qty": 100, type: "Self-Sealing" },
{ writeConcern: { w : "majority", wtimeout : 100 } }
)
}
catch (e) {
print (e);
}
If the acknowledgement takes longer than the wtimeout limit, the following exception is thrown:
WriteConcernError({
"code" : 64,
"errInfo" : {
"wtimeout" : true
},
"errmsg" : "waiting for replication timed out"
})
See also:
To insert multiple documents, see db.collection.insertMany() (page 84).
db.collection.insertMany()
On this page
Definition (page 84)
Behaviors (page 85)
Examples (page 86)
Definition
db.collection.insertMany()
New in version 3.2.
Inserts multiple documents into a collection.
The insertMany() (page 84) method has the following syntax:
db.collection.insertMany(
[ <document 1> , <document 2>, ... ],
{
writeConcern: <document>,
ordered: <boolean>
}
)
Insert Several Documents without Specifying an _id Field The following example uses insertMany() (page 84) to
insert documents that do not contain the _id field:
try {
db.products.insertMany( [
{ item: "card", qty: 15 },
{ item: "envelope", qty: 20 },
{ item: "stamps" , qty: 30 }
] );
}
catch (e) {
print (e);
}
Because the documents did not include _id, mongod (page 762) creates and adds the _id field for each document
and assigns it a unique https://docs.mongodb.org/manual/reference/object-id value.
The ObjectId values are specific to the machine and time when the operation is run. As such, your values may
differ from those in the example.
Insert Several Document Specifying an _id Field The following example/operation uses insertMany()
(page 84) to insert documents that include the _id field. The value of _id must be unique within the collection
to avoid a duplicate key error.
try {
db.products.insertMany( [
{ _id: 10, item: "large box", qty: 20 },
{ _id: 11, item: "small box", qty: 55 },
{ _id: 12, item: "medium box", qty: 30 }
] );
}
catch (e) {
print (e);
}
Inserting a duplicate value for any key that is part of a unique index, such as _id, throws an exception. The following
attempts to insert a document with an _id value that already exists:
try {
db.products.insertMany( [
{ _id: 13, item: "envelopes", qty: 60 },
{ _id: 13, item: "stamps", qty: 110 },
{ _id: 14, item: "packing tape", qty: 38 }
] );
}
catch (e) {
print (e);
}
Since _id: 13 is a duplicate, the operation throws the following BulkWriteError:
BulkWriteError({
"writeErrors" : [
{
"index" : 0,
"code" : 11000,
"errmsg" : "E11000 duplicate key error collection: restaurant.test index: _id_ dup key: { :
"op" : {
"_id" : 13,
"item" : "envelopes",
"qty" : 60
}
}
],
"writeConcernErrors" : [ ],
"nInserted" : 0,
"nUpserted" : 0,
"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"upserted" : [ ]
})
Note that one document was inserted: the first document with _id: 13 inserts successfully, but the second insert
fails. This also stops the remaining documents in the queue from being inserted.
With ordered set to false, the insert operation would continue with any remaining documents.
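The ordered/unordered distinction can be sketched in plain JavaScript, using a Set of seen _id values as a stand-in for the unique _id index; bulkInsert is an illustrative helper, not a MongoDB API:

```javascript
// Simulate inserting docs against a unique _id constraint.
// ordered: stop at the first duplicate-key error.
// unordered: record the error, skip the document, and keep going.
function bulkInsert(docs, ordered) {
  var seen = new Set();
  var result = { nInserted: 0, writeErrors: [] };
  for (var i = 0; i < docs.length; i++) {
    if (seen.has(docs[i]._id)) {
      result.writeErrors.push({ index: i, code: 11000 }); // duplicate key
      if (ordered) break; // ordered: halt the whole batch here
      continue;           // unordered: continue with remaining documents
    }
    seen.add(docs[i]._id);
    result.nInserted++;
  }
  return result;
}

var docs = [ { _id: 13 }, { _id: 13 }, { _id: 14 } ];
var orderedResult = bulkInsert(docs, true);    // stops after the duplicate
var unorderedResult = bulkInsert(docs, false); // skips the duplicate only
```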
Unordered Inserts The following attempts to insert multiple documents with an _id field and ordered: false. The
array of documents contains two documents with duplicate _id fields:
try {
db.products.insertMany( [
{ _id: 10, item: "large box", qty: 20 },
{ _id: 11, item: "small box", qty: 55 },
{ _id: 11, item: "medium box", qty: 30 },
{ _id: 12, item: "envelope", qty: 100},
{ _id: 13, item: "stamps", qty: 125 },
{ _id: 13, item: "tape", qty: 20},
{ _id: 14, item: "bubble wrap", qty: 30}
], { ordered: false } );
}
catch (e) {
print (e);
}
BulkWriteError({
"writeErrors" : [
{
"index" : 2,
"code" : 11000,
"errmsg" : "E11000 duplicate key error collection: inventory.products index: _id_ dup key: {
"op" : {
"_id" : 11,
"item" : "medium box",
"qty" : 30
}
},
{
"index" : 5,
"code" : 11000,
"errmsg" : "E11000 duplicate key error collection: inventory.products index: _id_ dup key: {
"op" : {
"_id" : 13,
"item" : "tape",
"qty" : 20
}
}
],
"writeConcernErrors" : [ ],
"nInserted" : 5,
"nUpserted" : 0,
"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"upserted" : [ ]
})
While the documents with item: "medium box" and item: "tape" failed to insert due to duplicate _id
values, nInserted shows that the remaining 5 documents were inserted.
Using Write Concern Given a three member replica set, the following operation specifies a w of majority and
wtimeout of 100:
try {
db.products.insertMany(
[
{ _id: 10, item: "large box", qty: 20 },
{ _id: 11, item: "small box", qty: 55 },
{ _id: 12, item: "medium box", qty: 30 }
],
{ writeConcern: { w: "majority", wtimeout: 100 } }
);
}
catch (e) {
print (e);
}
If the primary and at least one secondary acknowledge each write operation within 100 milliseconds, it returns:
{
"acknowledged" : true,
"insertedIds" : [
ObjectId("562a94d381cb9f1cd6eb0e1a"),
ObjectId("562a94d381cb9f1cd6eb0e1b"),
ObjectId("562a94d381cb9f1cd6eb0e1c")
]
}
If the total time required for all required nodes in the replica set to acknowledge the write operation is greater than
wtimeout, the following writeConcernError is thrown when the wtimeout period has passed:
WriteConcernError({
"code" : 64,
"errInfo" : {
"wtimeout" : true
},
"errmsg" : "waiting for replication timed out"
})
db.collection.isCapped()
db.collection.isCapped()
Returns Returns true if the collection is a capped collection, otherwise returns false.
See also:
https://docs.mongodb.org/manual/core/capped-collections
db.collection.mapReduce()
On this page
db.collection.mapReduce(
<map>,
<reduce>,
{
out: <collection>,
query: <document>,
sort: <document>,
limit: <number>,
finalize: <function>,
scope: <document>,
jsMode: <boolean>,
verbose: <boolean>,
bypassDocumentValidation: <boolean>
}
)
param document options A document that specifies additional parameters to db.collection.mapReduce() (page 89).
field boolean jsMode Specifies whether to convert intermediate data into BSON format between the
execution of the map and reduce functions. Defaults to false.
If false:
Internally, MongoDB converts the JavaScript objects emitted by the map function to BSON
objects. These BSON objects are then converted back to JavaScript objects when calling the
reduce function.
The map-reduce operation places the intermediate BSON objects in temporary, on-disk storage. This allows the map-reduce operation to execute over arbitrarily large data sets.
If true:
Internally, the JavaScript objects emitted during map function remain as JavaScript objects.
There is no need to convert the objects for the reduce function, which can result in faster
execution.
You can only use jsMode for result sets with fewer than 500,000 distinct key arguments
to the mapper's emit() function.
The jsMode defaults to false.
field boolean verbose Specifies whether to include the timing information in the result. verbose defaults to true.
Note: Changed in version 2.4.
In MongoDB 2.4, map-reduce operations (page 316), the group (page 312) command, and $where
(page 550) operator expressions cannot access certain global functions or properties, such as db, that are available in the mongo (page 794) shell.
When upgrading to MongoDB 2.4, you will need to refactor your code if your map-reduce operations
(page 316), group (page 312) commands, or $where (page 550) operator expressions include any global shell
functions or properties that are no longer available, such as db.
The following JavaScript functions and properties are available to map-reduce operations (page 316),
the group (page 312) command, and $where (page 550) operator expressions in MongoDB 2.4:
Available Properties: args, MaxKey, MinKey
Available Functions: assert(), BinData(), DBPointer(), DBRef(), doassert(), emit(), gc(), HexData(), hex_md5(),
isNumber(), isObject(), ISODate(), isString(), Map(), MD5(), NumberInt(), NumberLong(), ObjectId(), print(),
printjson(), printjsononeline(), sleep(), Timestamp(), tojson(), tojsononeline(), tojsonObject(), UUID(), version()
Requirements for the map Function The map function is responsible for transforming each input document into
zero or more documents. It can access the variables defined in the scope parameter, and has the following prototype:
function() {
...
emit(key, value);
}
The following map function may call emit(key,value) multiple times depending on the number of elements in
the input document's items field:
function() {
this.items.forEach(function(item){ emit(item.sku, 1); });
}
Requirements for the reduce Function The reduce function has the following prototype:
function(key, values) {
...
return result;
}
Because it is possible to invoke the reduce function more than once for the same key, the following properties need
to be true:
the type of the return object must be identical to the type of the value emitted by the map function.
the reduce function must be associative. The following statement must be true:
reduce(key, [ C, reduce(key, [ A, B ]) ] ) == reduce( key, [ C, A, B ] )
the reduce function must be idempotent. Ensure that the following statement is true:
reduce( key, [ reduce(key, valuesArray) ] ) == reduce( key, valuesArray )
the reduce function should be commutative: that is, the order of the elements in the valuesArray should
not affect the output of the reduce function, so that the following statement is true:
reduce( key, [ A, B ] ) == reduce( key, [ B, A ] )
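These three properties can be checked mechanically. The following sketch verifies them for a simple summing reduce function in plain JavaScript (the function and values are illustrative):

```javascript
// A summing reducer, like the total-price example later in this section.
function reduce(key, values) {
  return values.reduce(function(a, b) { return a + b; }, 0);
}

var key = "k", A = 1, B = 2, C = 3;

// associative: reduce(key, [ C, reduce(key, [ A, B ]) ]) == reduce(key, [ C, A, B ])
var associative = reduce(key, [ C, reduce(key, [ A, B ]) ]) === reduce(key, [ C, A, B ]);

// idempotent: reduce(key, [ reduce(key, valuesArray) ]) == reduce(key, valuesArray)
var idempotent = reduce(key, [ reduce(key, [ A, B, C ]) ]) === reduce(key, [ A, B, C ]);

// commutative: element order in the values array does not affect the result
var commutative = reduce(key, [ A, B ]) === reduce(key, [ B, A ]);
```

A reducer that, say, concatenated strings or subtracted values would fail one of these checks and produce key-dependent, nondeterministic results.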
out Options You can specify the following options for the out parameter:
Output to a Collection This option outputs to a new collection, and is not available on secondary members of replica
sets.
out: <collectionName>
Output to a Collection with an Action This option is only available when passing a collection that already exists
to out. It is not available on secondary members of replica sets.
out: { <action>: <collectionName>
[, db: <dbName>]
[, sharded: <boolean> ]
[, nonAtomic: <boolean> ] }
When you output to a collection with an action, the out has the following parameters:
<action>: Specify one of the following actions:
replace
Replace the contents of the <collectionName> if the collection with the <collectionName> exists.
merge
Merge the new result with the existing result if the output collection already exists. If an existing document
has the same key as the new result, overwrite that existing document.
reduce
Merge the new result with the existing result if the output collection already exists. If an existing document
has the same key as the new result, apply the reduce function to both the new and the existing documents
and overwrite the existing document with the result.
db:
Optional. The name of the database that you want the map-reduce operation to write its output to. By default this
will be the same database as the input collection.
sharded:
Optional. If true and you have enabled sharding on output database, the map-reduce operation will shard the
output collection using the _id field as the shard key.
nonAtomic:
New in version 2.2.
Optional. Specify output operation as non-atomic. This applies only to the merge and reduce output modes,
which may take minutes to execute.
By default nonAtomic is false, and the map-reduce operation locks the database during post-processing.
If nonAtomic is true, the post-processing step prevents MongoDB from locking the database: during this
time, other clients will be able to read intermediate states of the output collection.
Output Inline Perform the map-reduce operation in memory and return the result. This option is the only available
option for out on secondary members of replica sets.
out: { inline: 1 }
The result must fit within the maximum size of a BSON document (page 932).
Requirements for the finalize Function The finalize function has the following prototype:
function(key, reducedValue) {
...
return modifiedObject;
}
The finalize function receives as its arguments a key value and the reducedValue from the reduce function.
Be aware that:
The finalize function should not access the database for any reason.
The finalize function should be pure, or have no impact outside of the function (i.e. side effects.)
The finalize function can access the variables defined in the scope parameter.
Map-Reduce Examples Consider the following map-reduce operations on a collection orders that contains documents of the following prototype:
{
_id: ObjectId("50a8240b927d5d8b5891743c"),
cust_id: "abc123",
ord_date: new Date("Oct 04, 2012"),
status: 'A',
price: 25,
items: [ { sku: "mmm", qty: 5, price: 2.5 },
{ sku: "nnn", qty: 5, price: 2.5 } ]
}
Return the Total Price Per Customer Perform the map-reduce operation on the orders collection to group by
the cust_id, and calculate the sum of the price for each cust_id:
1. Define the map function to process each input document:
In the function, this refers to the document that the map-reduce operation is processing.
The function maps the price to the cust_id for each document and emits the cust_id and price
pair.
var mapFunction1 = function() {
emit(this.cust_id, this.price);
};
2. Define the corresponding reduce function with two arguments keyCustId and valuesPrices:
The valuesPrices is an array whose elements are the price values emitted by the map function and
grouped by keyCustId.
The function reduces the valuesPrices array to the sum of its elements.
var reduceFunction1 = function(keyCustId, valuesPrices) {
return Array.sum(valuesPrices);
};
3. Perform the map-reduce on all documents in the orders collection using the mapFunction1 map function
and the reduceFunction1 reduce function.
db.orders.mapReduce(
mapFunction1,
reduceFunction1,
{ out: "map_reduce_example" }
)
Calculate Order and Total Quantity with Average Quantity Per Item In this example, you will perform a map-reduce
operation on the orders collection for all documents that have an ord_date value greater than 01/01/2012.
The operation groups by the item.sku field, and calculates the number of orders and the total quantity ordered for
each sku. The operation concludes by calculating the average quantity per order for each sku value:
1. Define the map function to process each input document:
In the function, this refers to the document that the map-reduce operation is processing.
For each item, the function associates the sku with a new object value that contains the count of 1 and the
item qty for the order and emits the sku and value pair.
var mapFunction2 = function() {
for (var idx = 0; idx < this.items.length; idx++) {
var key = this.items[idx].sku;
var value = { count: 1, qty: this.items[idx].qty };
emit(key, value);
}
};
2. Define the corresponding reduce function with two arguments keySKU and countObjVals:
countObjVals is an array whose elements are the objects mapped to the grouped keySKU values
passed by map function to the reducer function.
The function reduces the countObjVals array to a single object reducedValue that contains the
count and the qty fields.
In reducedVal, the count field contains the sum of the count fields from the individual array elements, and the qty field contains the sum of the qty fields from the individual array elements.
var reduceFunction2 = function(keySKU, countObjVals) {
reducedVal = { count: 0, qty: 0 };
for (var idx = 0; idx < countObjVals.length; idx++) {
reducedVal.count += countObjVals[idx].count;
reducedVal.qty += countObjVals[idx].qty;
}
return reducedVal;
};
3. Define a finalize function with two arguments key and reducedVal. The function modifies the
reducedVal object to add a computed field named avg and returns the modified object:
var finalizeFunction2 = function (key, reducedVal) {
reducedVal.avg = reducedVal.qty/reducedVal.count;
return reducedVal;
};
4. Perform the map-reduce operation on the orders collection using the mapFunction2, reduceFunction2,
and finalizeFunction2 functions:
db.orders.mapReduce(
mapFunction2,
reduceFunction2,
{
out: { merge: "map_reduce_example" },
query: { ord_date:
{ $gt: new Date('01/01/2012') }
},
finalize: finalizeFunction2
}
)
This operation uses the query field to select only those documents with ord_date greater than new
Date('01/01/2012'). Then it outputs the results to a collection map_reduce_example. If the
map_reduce_example collection already exists, the operation will merge the existing contents with the
results of this map-reduce operation.
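The map, reduce, and finalize stages can be emulated end-to-end in plain JavaScript to check the logic outside the shell. In this sketch, the sample orders array, the emit collector, and the grouping loop are illustrative stand-ins for the collection and the server's shuffle phase; the map stage shown is an assumption consistent with the key/value shapes that reduceFunction2 expects:

```javascript
// Sample input documents (illustrative).
var orders = [
  { cust_id: "abc123", items: [ { sku: "mmm", qty: 5 }, { sku: "nnn", qty: 5 } ] },
  { cust_id: "xyz789", items: [ { sku: "mmm", qty: 10 } ] }
];

var emits = [];
function emit(key, value) { emits.push({ key: key, value: value }); }

// map: one (sku, { count, qty }) pair per line item.
var mapFunction2 = function() {
  for (var idx = 0; idx < this.items.length; idx++) {
    emit(this.items[idx].sku, { count: 1, qty: this.items[idx].qty });
  }
};

// reduce: sum counts and quantities per sku.
var reduceFunction2 = function(keySKU, countObjVals) {
  var reducedVal = { count: 0, qty: 0 };
  for (var idx = 0; idx < countObjVals.length; idx++) {
    reducedVal.count += countObjVals[idx].count;
    reducedVal.qty += countObjVals[idx].qty;
  }
  return reducedVal;
};

// finalize: add the average quantity per order.
var finalizeFunction2 = function(key, reducedVal) {
  reducedVal.avg = reducedVal.qty / reducedVal.count;
  return reducedVal;
};

// Run map with `this` bound to each document, then group, reduce, finalize.
orders.forEach(function(doc) { mapFunction2.call(doc); });
var groups = {};
emits.forEach(function(e) { (groups[e.key] = groups[e.key] || []).push(e.value); });
var results = {};
Object.keys(groups).forEach(function(key) {
  results[key] = finalizeFunction2(key, reduceFunction2(key, groups[key]));
});
// results.mmm is { count: 2, qty: 15, avg: 7.5 }; results.nnn is { count: 1, qty: 5, avg: 5 }
```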
Output The output of the db.collection.mapReduce() (page 89) method is identical to that of the
mapReduce (page 316) command. See the Output (page 324) section of the mapReduce (page 316) command
for information on the db.collection.mapReduce() (page 89) output.
Additional Information
https://docs.mongodb.org/manual/tutorial/troubleshoot-map-function
https://docs.mongodb.org/manual/tutorial/troubleshoot-reduce-function
mapReduce (page 316) command
https://docs.mongodb.org/manual/aggregation
Map-Reduce
https://docs.mongodb.org/manual/tutorial/perform-incremental-map-reduce
db.collection.reIndex()
On this page
Behavior (page 97)
db.collection.reIndex()
The db.collection.reIndex() (page 97) method drops all indexes on a collection and recreates them. This
operation may be expensive for collections that have a large amount of data and/or a large number of indexes.
Call this method, which takes no arguments, on a collection object. For example:
db.collection.reIndex()
db.collection.replaceOne()
On this page
Definition (page 98)
Behavior (page 98)
Examples (page 99)
Definition
db.collection.replaceOne(filter, replacement, options)
New in version 3.2.
Replaces a single document within the collection based on the filter.
The replaceOne() (page 98) method has the following form:
db.collection.replaceOne(
<filter>,
<replacement>,
{
upsert: <boolean>,
writeConcern: <document>
}
)
Capped Collections replaceOne() (page 98) throws a WriteError if the replacement document has a larger
size in bytes than the original document.
Examples
Replace The restaurant collection contains the following documents:
{ "_id" : 1, "name" : "Central Perk Cafe", "Borough" : "Manhattan" },
{ "_id" : 2, "name" : "Rock A Feller Bar and Grill", "Borough" : "Queens", "violations" : "2" },
{ "_id" : 3, "name" : "Empire State Pub", "Borough" : "Brooklyn", "violations" : "0" }
try {
db.restaurant.replaceOne(
{ "name" : "Central Perk Cafe" },
{ "name" : "Central Pork Cafe", "Borough" : "Manhattan" }
);
}
catch (e){
print(e);
}
Setting upsert: true would insert the document if no match was found. See Replace with Upsert (page 99).
Replace with Upsert The restaurant collection contains the following documents:
{ "_id" : 1, "name" : "Central Perk Cafe", "Borough" : "Manhattan", "violations" : 3 },
{ "_id" : 2, "name" : "Rock A Feller Bar and Grill", "Borough" : "Queens", "violations" : "2" },
{ "_id" : 3, "name" : "Empire State Pub", "Borough" : "Brooklyn", "violations" : "0" }
try {
db.restaurant.replaceOne(
{ "name" : "Pizza Rat's Pizzaria" },
{ "_id" : 4, "name" : "Pizza Rat's Pizzaria", "Borough" : "Manhattan", "violations" : 8 },
{ upsert: true }
)
}
catch (e){
print(e);
}
Since upsert: true, the document is inserted based on the replacement document. The operation returns:
{
"acknowledged" : true,
"matchedCount" : 0,
"modifiedCount" : 0,
"upsertedId" : 4
}
The collection now contains the following documents:
{ "_id" : 1, "name" : "Central Perk Cafe", "Borough" : "Manhattan", "violations" : 3 },
{ "_id" : 2, "name" : "Rock A Feller Bar and Grill", "Borough" : "Queens", "violations" : "2" },
{ "_id" : 3, "name" : "Empire State Pub", "Borough" : "Brooklyn", "violations" : "0" },
{ "_id" : 4, "name" : "Pizza Rat's Pizzaria", "Borough" : "Manhattan", "violations" : 8 }
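The replace-or-upsert behavior can be sketched against an in-memory array; replaceOne here is an illustrative helper operating on plain objects, not the MongoDB API:

```javascript
// Replace the first document matching a simple equality filter.
// On a match, the existing _id is preserved (the _id is immutable on replace).
// With upsert, an unmatched filter inserts the replacement document instead.
function replaceOne(docs, filter, replacement, upsert) {
  for (var i = 0; i < docs.length; i++) {
    var doc = docs[i];
    var matches = Object.keys(filter).every(function(k) {
      return doc[k] === filter[k];
    });
    if (matches) {
      replacement._id = doc._id; // keep the original _id
      docs[i] = replacement;
      return { matchedCount: 1, modifiedCount: 1 };
    }
  }
  if (upsert) {
    docs.push(replacement);
    return { matchedCount: 0, modifiedCount: 0, upsertedId: replacement._id };
  }
  return { matchedCount: 0, modifiedCount: 0 };
}

var restaurants = [ { _id: 1, name: "Central Perk Cafe", Borough: "Manhattan" } ];
var replaced = replaceOne(restaurants, { name: "Central Perk Cafe" },
                          { name: "Central Pork Cafe", Borough: "Manhattan" }, false);
var upserted = replaceOne(restaurants, { name: "Pizza Rat's Pizzaria" },
                          { _id: 4, name: "Pizza Rat's Pizzaria", violations: 8 }, true);
```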
Replace with Write Concern Given a three member replica set, the following operation specifies a w of majority and a wtimeout of 100 milliseconds:
try {
db.restaurant.replaceOne(
{ "name" : "Pizza Rat's Pizzaria" },
{ "name" : "Pizza Rat's Pub", "Borough" : "Manhattan", "violations" : 3 },
{ writeConcern: { w: "majority", wtimeout: 100 } }
);
}
catch (e) {
print(e);
}
If the acknowledgement takes longer than the wtimeout limit, the following exception is thrown:
WriteConcernError({
"code" : 64,
"errInfo" : {
"wtimeout" : true
},
"errmsg" : "waiting for replication timed out"
})
db.collection.remove()
On this page
Definition
db.collection.remove()
Removes documents from a collection.
The db.collection.remove() (page 100) method can have one of two syntaxes. The remove()
(page 100) method can take a query document and an optional justOne boolean:
db.collection.remove(
<query>,
<justOne>
)
Or the method can take a query document and an optional remove options document:
New in version 2.6.
db.collection.remove(
<query>,
{
justOne: <boolean>,
writeConcern: <document>
}
)
param document query Specifies deletion criteria using query operators (page 519). To delete all
documents in a collection, pass an empty document ({}).
Changed in version 2.6: In previous versions, the method invoked with no query parameter
deleted all documents in a collection.
param boolean justOne Optional. To limit the deletion to just one document, set to true. Omit to
use the default value of false and delete all documents matching the deletion criteria.
param document writeConcern Optional. A document expressing the write concern. Omit
to use the default write concern. See Write Concern (page 101).
New in version 2.6.
Changed in version 2.6: The remove() (page 100) returns an object that contains the status of the operation.
Returns A WriteResult (page 102) object that contains the status of the operation.
Behavior
Write Concern Changed in version 2.6.
The remove() (page 100) method uses the delete (page 343) command, which uses the default write
concern. To specify a different write concern, include the write concern in the options parameter.
Query Considerations By default, remove() (page 100) removes all documents that match the query expression. Specify the justOne option to limit the operation to removing a single document. To delete a single document
sorted by a specified order, use the findAndModify() (page 61) method.
When removing multiple documents, the remove operation may interleave with other read and/or write operations to
the collection. For unsharded collections, you can override this behavior with the $isolated (page 621) operator,
which isolates the remove operation and disallows yielding during the operation. This ensures that no client can see
the affected documents until they are all processed or an error stops the remove operation.
See Isolate Remove Operations (page 102) for an example.
Capped Collections You cannot use the remove() (page 100) method with a capped collection.
Sharded Collections All remove() (page 100) operations for a sharded collection that specify the justOne
option must include the shard key or the _id field in the query specification. remove() (page 100) operations
specifying justOne in a sharded collection without the shard key or the _id field return an error.
Examples The following are examples of the remove() (page 100) method.
Remove All Documents from a Collection To remove all documents in a collection, call the remove (page 100)
method with an empty query document {}. The following operation deletes all documents from the bios
collection:
db.bios.remove( { } )
Override Default Write Concern The following operation to a replica set removes all the documents from the
collection products where qty is greater than 20 and specifies a write concern of "w: majority" with
a wtimeout of 5000 milliseconds such that the method returns after the write propagates to a majority of the voting
replica set members or the method times out after 5 seconds.
Changed in version 3.0: In previous versions, majority referred to the majority of all members of the replica set.
db.products.remove(
{ qty: { $gt: 20 } },
{ writeConcern: { w: "majority", wtimeout: 5000 } }
)
Remove a Single Document that Matches a Condition To remove the first document that matches the deletion criteria,
call the remove (page 100) method with the query criteria and the justOne parameter set to true or 1.
The following operation removes the first document from the collection products where qty is greater than 20:
db.products.remove( { qty: { $gt: 20 } }, true )
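The justOne semantics can be sketched in plain JavaScript; the in-memory array and the remove helper below are illustrative stand-ins, not MongoDB APIs:

```javascript
// Delete matching documents; with justOne, stop after the first match.
// Returns the surviving documents and an nRemoved count, echoing WriteResult.
function remove(docs, predicate, justOne) {
  var kept = [];
  var nRemoved = 0;
  for (var i = 0; i < docs.length; i++) {
    if (predicate(docs[i]) && (!justOne || nRemoved === 0)) {
      nRemoved++;            // matched and still allowed to remove
    } else {
      kept.push(docs[i]);    // unmatched, or justOne quota already used
    }
  }
  return { kept: kept, nRemoved: nRemoved };
}

var products = [ { qty: 25 }, { qty: 10 }, { qty: 30 } ];
var single = remove(products, function(d) { return d.qty > 20; }, true);  // removes 1
var all = remove(products, function(d) { return d.qty > 20; }, false);    // removes 2
```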
Successful Results The remove() (page 100) returns a WriteResult (page 288) object that contains the status of the operation. Upon success, the WriteResult (page 288) object contains information on the number of
documents removed:
WriteResult({ "nRemoved" : 4 })
See also:
WriteResult.nRemoved (page 289)
Write Concern Errors If the remove() (page 100) method encounters write concern errors, the results include
the WriteResult.writeConcernError (page 289) field:
WriteResult({
"nRemoved" : 21,
"writeConcernError" : {
"code" : 64,
"errInfo" : {
"wtimeout" : true
},
"errmsg" : "waiting for replication timed out"
}
})
See also:
WriteResult.hasWriteConcernError() (page 290)
Errors Unrelated to Write Concern If the remove() (page 100) method encounters a non-write concern error,
the results include the WriteResult.writeError (page 289) field:
WriteResult({
"nRemoved" : 0,
"writeError" : {
"code" : 2,
"errmsg" : "unknown top level operator: $invalidFieldName"
}
})
See also:
WriteResult.hasWriteError() (page 289)
db.collection.renameCollection()
On this page
Definition (page 103)
Example (page 104)
Limitations (page 104)
Definition
db.collection.renameCollection(target, dropTarget)
Renames a collection. Provides a wrapper for the renameCollection (page 430) database command.
param string target The new name of the collection. Enclose the string in quotes.
param boolean dropTarget Optional. If true, mongod (page 762) drops the target of
renameCollection (page 430) prior to renaming the collection. The default value is
false.
Example Call the db.collection.renameCollection() (page 103) method on a collection object. For
example:
db.rrecord.renameCollection("record")
This operation will rename the rrecord collection to record. If the target name (i.e. record) is the name of an
existing collection, then the operation will fail.
Limitations The method has the following limitations:
db.collection.renameCollection() (page 103) cannot move a collection between databases. Use
renameCollection (page 430) for these rename operations.
db.collection.renameCollection() (page 103) is not supported on sharded collections.
The db.collection.renameCollection() (page 103) method operates within a collection by changing the
metadata associated with a given collection.
Refer to the documentation renameCollection (page 430) for additional warnings and messages.
Warning: The db.collection.renameCollection() (page 103) method and renameCollection
(page 430) command will invalidate open cursors which interrupts queries that are currently returning data.
db.collection.save()
On this page
Definition
db.collection.save()
Updates an existing document or inserts a new document, depending on its document parameter.
The save() (page 104) method has the following form:
Changed in version 2.6.
db.collection.save(
<document>,
{
writeConcern: <document>
}
)
Update If the document contains an _id field, then the save() (page 104) method is equivalent to an update with
the upsert option (page 118) set to true and the query predicate on the _id field.
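The insert-or-upsert decision can be sketched against a plain object keyed by _id; the collection map, the counter standing in for ObjectId generation, and the save helper are all illustrative, not MongoDB APIs:

```javascript
// In-memory stand-in for a collection, keyed by _id.
var collection = {};
var nextId = 1; // stand-in for shell-side ObjectId generation

function save(doc) {
  if (doc._id === undefined) {
    // No _id: behaves like an insert; the shell would generate an ObjectId.
    doc._id = nextId++;
  }
  // _id present (or just assigned): upsert by _id — replace if it exists,
  // insert otherwise. Either way the stored document is fully replaced.
  collection[doc._id] = doc;
  return doc._id;
}

save({ item: "book", qty: 40 });            // insert with generated _id
save({ _id: 100, item: "water", qty: 30 }); // upsert: _id not found, inserts
save({ _id: 100, item: "juice" });          // upsert: _id found, replaces whole doc
```

Note that the third call replaces the entire document, so the qty field from the second call is gone afterward, matching the "Replace an Existing Document" example below.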
Examples
Save a New Document without Specifying an _id Field In the following example, save() (page 104) method
performs an insert since the document passed to the method does not contain the _id field:
db.products.save( { item: "book", qty: 40 } )
During the insert, the shell will create the _id field with a unique
https://docs.mongodb.org/manual/reference/object-id value, as verified by the inserted
document:
{ "_id" : ObjectId("50691737d386d8fadbd6b01d"), "item" : "book", "qty" : 40 }
The ObjectId values are specific to the machine and time when the operation is run. As such, your values may
differ from those in the example.
Save a New Document Specifying an _id Field In the following example, save() (page 104) performs an update
with upsert:true since the document contains an _id field:
db.products.save( { _id: 100, item: "water", qty: 30 } )
Because the _id field holds a value that does not exist in the collection, the update operation results in an insertion of
the document. The results of these operations are identical to an update() method with the upsert option (page 118) set
to true.
The operation results in the following new document in the products collection:
{ "_id" : 100, "item" : "water", "qty" : 30 }
Replace an Existing Document The products collection contains the following document:
{ "_id" : 100, "item" : "water", "qty" : 30 }
The save() (page 104) method performs an update with upsert:true since the document contains an _id field:
db.products.save( { _id : 100, item : "juice" } )
Because the _id field holds a value that exists in the collection, the operation performs an update to replace the
document and results in the following document:
{ "_id" : 100, "item" : "juice" }
Override Default Write Concern The following operation to a replica set specifies a write concern of "w:
majority" with a wtimeout of 5000 milliseconds such that the method returns after the write propagates to a
majority of the voting replica set members or the method times out after 5 seconds.
Changed in version 3.0: In previous versions, majority referred to the majority of all members of the replica set.
db.products.save(
{ item: "envelopes", qty : 100, type: "Clasp" },
{ writeConcern: { w: "majority", wtimeout: 5000 } }
)
On this page
Definition (page 106)
Behavior (page 107)
Examples (page 108)
Definition
db.collection.stats(scale | options)
Returns statistics about the collection. The method includes the following parameters:
param number scale Optional. The scale used in the output to display the sizes of items. By
default, output displays sizes in bytes. To display kilobytes rather than bytes, specify a scale
value of 1024.
Changed in version 3.0: Legacy parameter format. Mutually exclusive with options as a
document.
param document options Optional. Alternative to scale parameter. Use the options document
to specify options, including scale.
New in version 3.0.
The options document can contain the following fields and values:
field number scale Optional. The scale used in the output to display the sizes of items. By default,
output displays sizes in bytes. To display kilobytes rather than bytes, specify a scale value of
1024.
New in version 3.0.
field boolean indexDetails Optional. If true, db.collection.stats() (page 106) returns
index details (page 476) in addition to the collection stats.
Only works for WiredTiger storage engine.
Defaults to false.
New in version 3.0.
field document indexDetailsKey Optional.
If indexDetails is true, you can use
indexDetailsKey to filter index details by specifying the index key specification. Only
the index that exactly matches indexDetailsKey will be returned.
If no match is found, indexDetails (page 476) will display statistics for all indexes.
Use getIndexes() (page 72) to discover index keys. You cannot use indexDetailsKey
with indexDetailsName.
New in version 3.0.
field string indexDetailsName Optional.
If indexDetails is true, you can use
indexDetailsName to filter index details by specifying the index name. Only the index
name that exactly matches indexDetailsName will be returned.
If no match is found, indexDetails (page 476) will display statistics for all indexes.
Use getIndexes() (page 72) to discover index names. You cannot use
indexDetailsName with indexDetailsKey.
See collStats (page 472).
The db.collection.stats() (page 106) method provides a wrapper around the database command
collStats (page 472).
Behavior This method returns a JSON document with statistics related to the current mongod (page 762) instance.
Unless otherwise specified by the key name, values related to size are displayed in bytes and can be overridden by
scale.
Note: The scale factor rounds values to whole numbers.
Depending on the storage engine, the data returned may differ. For details on the fields, see output details (page 474).
Where <string> is the field that is indexed and <value> is either the direction of the index, or the special index
type such as text or 2dsphere. See https://docs.mongodb.org/manual/core/index-types/ for
the full list of index types.
Unexpected Shutdown and Count For MongoDB instances using the WiredTiger storage engine, after an unclean shutdown, statistics on size and count may be off by up to 1000 documents as reported by collStats (page 472),
dbStats (page 480), and count (page 306). To restore the correct statistics for the collection, run validate
(page 484) on the collection.
Examples
Note: You can find the collection data used for these examples in our Getting Started Guide4
Basic Stats Lookup The following operation returns stats on the restaurants collection:
db.restaurants.stats()

{
"ns" : "guidebook.restaurants",
"count" : 25359,
"size" : 10630398,
"avgObjSize" : 419,
"storageSize" : 4104192
"capped" : false,
"wiredTiger" : {
"metadata" : {
"formatVersion" : 1
},
"creationString" : "allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best
"type" : "file",
"uri" : "statistics:table:collection-2-7253336746667145592",
"LSM" : {
"bloom filters in the LSM tree" : 0,
"bloom filter false positives" : 0,
"bloom filter hits" : 0,
"bloom filter misses" : 0,
"bloom filter pages evicted from cache" : 0,
"bloom filter pages read into cache" : 0,
"total size of bloom filters" : 0,
"sleep for LSM checkpoint throttle" : 0,
"chunks in the LSM tree" : 0,
"highest merge generation in the LSM tree" : 0,
"queries that could have benefited from a Bloom filter that did not exist" : 0,
"sleep for LSM merge throttle" : 0
},
"block-manager" : {
4 https://docs.mongodb.org/getting-started/shell/import-data/
108
109
Because stats() was not given a scale parameter, all size values are in bytes.
Stats Lookup With Scale The following operation changes the scale of data from bytes to kilobytes by
specifying a scale of 1024:
db.restaurants.stats( { scale : 1024 } )
{
"ns" : "guidebook.restaurants",
"count" : 25359,
"size" : 10381,
"avgObjSize" : 419,
"storageSize" : 4008,
"capped" : false,
"wiredTiger" : {
...
},
"nindexes" : 4,
"totalIndexSize" : 612,
"indexSizes" : {
"_id_" : 212,
"borough_1_cuisine_1" : 136,
"cuisine_1" : 128,
"borough_1_address.zipcode_1" : 136
},
"ok" : 1
}
Statistics Lookup With Index Details The following operation creates an indexDetails document that contains
information related to each of the indexes within the collection:
db.restaurants.stats( { indexDetails : true } )

{
"ns" : "guidebook.restaurants",
"count" : 25359,
"size" : 10630398,
"avgObjSize" : 419,
"storageSize" : 4104192,
"capped" : false,
"wiredTiger" : {
...
},
"nindexes" : 4,
"indexDetails" : {
"_id_" : {
"metadata" : {
"formatVersion" : 6,
"infoObj" : "{ \"v\" : 1, \"key\" : { \"_id\" : 1 }, \"name\" : \"_id_\", \"ns\" : \"blog
},
"creationString" : "allocation_size=4KB,app_metadata=(formatVersion=6,infoObj={ \"v\" : 1, \
"type" : "file",
"uri" : "statistics:table:index-3-7253336746667145592",
"LSM" : {
...
},
"block-manager" : {
...
},
"btree" : {
...
},
"cache" : {
...
},
"compression" : {
...
},
"cursor" : {
...
},
"reconciliation" : {
...
},
"session" : {
...
},
"transaction" : {
...
}
},
"borough_1_cuisine_1" : {
"metadata" : {
"formatVersion" : 6,
"infoObj" : "{ \"v\" : 1, \"key\" : { \"borough\" : 1, \"cuisine\" : 1 }, \"name\" : \"bo
},
"creationString" : "allocation_size=4KB,app_metadata=(formatVersion=6,infoObj={ \"v\" : 1, \
"type" : "file",
"uri" : "statistics:table:index-4-7253336746667145592",
"LSM" : {
...
},
"block-manager" : {
...
},
"btree" : {
...
},
"cache" : {
...
},
"compression" : {
...
},
"cursor" : {
...
},
"reconciliation" : {
...
},
"session" : {
"object compaction" : 0,
"open cursor count" : 0
},
"transaction" : {
"update conflicts" : 0
}
},
"cuisine_1" : {
"metadata" : {
"formatVersion" : 6,
"infoObj" : "{ \"v\" : 1, \"key\" : { \"cuisine\" : 1 }, \"name\" : \"cuisine_1\", \"ns\"
},
"creationString" : "allocation_size=4KB,app_metadata=(formatVersion=6,infoObj={ \"v\" : 1, \
"type" : "file",
"uri" : "statistics:table:index-5-7253336746667145592",
"LSM" : {
...
},
"block-manager" : {
...
},
"btree" : {
...
},
"cache" : {
...
},
"compression" : {
...
},
"cursor" : {
...
},
"reconciliation" : {
...
},
"session" : {
...
},
"transaction" : {
...
}
},
"borough_1_address.zipcode_1" : {
"metadata" : {
"formatVersion" : 6,
"infoObj" : "{ \"v\" : 1, \"key\" : { \"borough\" : 1, \"address.zipcode\" : 1 }, \"name\
},
"creationString" : "allocation_size=4KB,app_metadata=(formatVersion=6,infoObj={ \"v\" : 1, \
"type" : "file",
"uri" : "statistics:table:index-6-7253336746667145592",
"LSM" : {
...
},
"block-manager" : {
...
},
"btree" : {
...
},
"cache" : {
...
},
"compression" : {
...
},
"cursor" : {
...
},
"reconciliation" : {
...
},
"session" : {
...
},
"transaction" : {
...
}
}
},
"totalIndexSize" : 626688,
"indexSizes" : {
"_id_" : 217088,
"borough_1_cuisine_1" : 139264,
"cuisine_1" : 131072,
"borough_1_address.zipcode_1" : 139264
},
"ok" : 1
}
Statistics Lookup With Filtered Index Details To filter the indexes in the indexDetails (page 476)
field, you can either specify the index keys using the indexDetailsKey option or specify the index name using
the indexDetailsName option. To discover index keys and names for the collection, use
db.collection.getIndexes() (page 72).
Given the following index:
{
"ns" : "guidebook.restaurants",
"v" : 1,
"key" : {
"borough" : 1,
"cuisine" : 1
},
"name" : "borough_1_cuisine_1"
}
The following operation filters the indexDetails document to a single index as defined by the
indexDetailsKey document.
db.restaurants.stats(
{
'indexDetails' : true,
'indexDetailsKey' :
{
'borough' : 1,
'cuisine' : 1
}
}
)
The following operation filters the indexDetails document to a single index as defined by the
indexDetailsName string.
db.restaurants.stats(
{
'indexDetails' : true,
'indexDetailsName' : 'borough_1_cuisine_1'
}
)
"ns" : "blogs.restaurants",
"count" : 25359,
"size" : 10630398,
"avgObjSize" : 419,
"storageSize" : 4104192,
"capped" : false,
"wiredTiger" : {
...
},
"nindexes" : 4,
"indexDetails" : {
"borough_1_cuisine_1" : {
"metadata" : {
"formatVersion" : 6,
"infoObj" : "{ \"v\" : 1, \"key\" : { \"borough\" : 1, \"cuisine\" : 1 }, \"name\" : \"bo
},
"creationString" : "allocation_size=4KB,app_metadata=(formatVersion=6,infoObj={ \"v\" : 1, \
"type" : "file",
"uri" : "statistics:table:index-4-7253336746667145592",
"LSM" : {
...
},
"block-manager" : {
...
},
"btree" : {
...
},
"cache" : {
...
},
"compression" : {
...
},
"cursor" : {
...
},
"reconciliation" : {
...
},
"session" : {
...
},
"transaction" : {
...
}
}
},
"totalIndexSize" : 626688,
"indexSizes" : {
"_id_" : 217088,
"borough_1_cuisine_1" : 139264,
"cuisine_1" : 131072,
"borough_1_address.zipcode_1" : 139264
},
"ok" : 1
}
db.collection.storageSize()
Returns The total amount of storage allocated to this collection for document storage. Provides
a wrapper around the storageSize (page 475) field of the collStats (page 472) (i.e.
db.collection.stats() (page 106)) output.
db.collection.totalSize()
db.collection.totalSize()
Returns The total size in bytes of the data in the collection plus the size of every index on the
collection.
db.collection.totalIndexSize()
db.collection.totalIndexSize()
Returns The total size of all indexes for the collection. This method provides a wrapper
around the totalIndexSize (page 475) output of the collStats (page 472) (i.e.
db.collection.stats() (page 106)) operation.
db.collection.update()
On this page
Definition
db.collection.update(query, update, options)
Modifies an existing document or documents in a collection. The method can modify specific fields of an
existing document or documents or replace an existing document entirely, depending on the update parameter
(page 117).
By default, the update() (page 116) method updates a single document. Set the Multi Parameter (page 119)
to update all documents that match the query criteria.
The update() (page 116) method has the following form:

db.collection.update(
   <query>,
   <update>,
   {
     upsert: <boolean>,
     multi: <boolean>,
     writeConcern: <document>
   }
)
If the <update> document contains update operator (page 587) expressions, the update() (page 116) method updates only the corresponding fields in the document.
To update an embedded document or an array as a whole, specify the replacement value for the field. To update
particular fields in an embedded document or in an array, use dot notation to specify the field.
Replace a Document Entirely If the <update> document contains only field:value expressions, then:
The update() (page 116) method replaces the matching document with the <update> document. The
update() (page 116) method does not replace the _id value. For an example, see Replace All Fields
(page 120).
update() (page 116) cannot update multiple documents (page 119).
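As a client-side model of these replacement semantics (a sketch, not the server code), using the data from the Replace All Fields example (page 120):

```javascript
// Model of replace-style update semantics: the matching document is
// replaced wholesale, _id is preserved, and fields absent from the
// replacement are dropped.
function replaceDocument(original, replacement) {
  return Object.assign({ _id: original._id }, replacement);
}

const original = {
  _id: 2, item: "XYZ123", stock: 15,
  ratings: [ { by: "xyz", rating: 5 } ], reorder: false
};
const replaced = replaceDocument(original, { item: "XYZ123", stock: 10 });
console.log(replaced); // { _id: 2, item: 'XYZ123', stock: 10 } -- no ratings, no reorder
```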
Upsert Option
Upsert Behavior If upsert is true and no document matches the query criteria, update() (page 116) inserts
a single document. The update creates the new document with either:
The fields and values of the <update> parameter if the <update> parameter contains only field and value
pairs, or
The fields and values of both the <query> and <update> parameters if the <update> parameter contains
update operator (page 587) expressions. The update creates a base document from the equality clauses in the
<query> parameter, and then applies the update expressions from the <update> parameter.
If upsert is true and there are documents that match the query criteria, update() (page 116) performs an update.
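The construction of the inserted document can be sketched client-side as follows. This is a simplified model (top-level fields and $set only, with illustrative field names), not the server logic:

```javascript
// Model of the document an upsert inserts when the update uses operator
// expressions: equality clauses from the query form a base document,
// then the $set modifications are applied on top.
function upsertBaseDocument(query, update) {
  const doc = {};
  for (const [field, value] of Object.entries(query)) {
    const isOperatorExpr = value !== null && typeof value === "object" &&
      Object.keys(value).some(k => k.startsWith("$"));
    if (!isOperatorExpr) doc[field] = value; // keep equality clauses only
  }
  return Object.assign(doc, update.$set || {});
}

console.log(upsertBaseDocument(
  { name: "Andy", score: { $gt: 5 } },
  { $set: { rating: 1 } }
));
// { name: 'Andy', rating: 1 } -- the $gt comparison is not copied in
```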
See also:
$setOnInsert (page 591)
Warning: To avoid inserting the same document more than once, only use upsert: true if the query field
is uniquely indexed.
Consider a collection named people where no documents have a name field that holds the value Andy, and
multiple clients issue the following update with upsert: true at roughly the same time:
db.people.update(
{ name: "Andy" },
{
name: "Andy",
rating: 1,
score: 1
},
{ upsert: true }
)
If all update() (page 116) operations complete the query portion before any client successfully inserts data, and
there is no unique index on the name field, then each update operation may result in an insert.
To prevent MongoDB from inserting the same document more than once, create a unique index on the name field.
With a unique index, if multiple applications issue the same update with upsert: true, exactly one update()
(page 116) would successfully insert a new document.
The remaining operations would either:

update the newly inserted document, or

fail when they attempted to insert a duplicate. If an operation fails because of a duplicate index key error, the
application can retry it, and the retry will succeed as an update.
Multi Parameter If multi is set to true, the update() (page 116) method updates all documents that meet
the <query> criteria. The multi update operation may interleave with other read and write
operations. For unsharded collections, you can override this behavior with the $isolated (page 621) operator,
which isolates the update operation and disallows yielding during the operation. This isolates the update so that no
client can see the updated documents until they are all processed, or an error stops the update operation.
If the <update> (page 117) document contains only field:value expressions, then update() (page 116) cannot
update multiple documents.
For an example, see Update Multiple Documents (page 121).
Sharded Collections All update() (page 116) operations for a sharded collection that specify the
multi: false option must include the shard key or the _id field in the query specification. update() (page 116) operations that specify multi: false against a sharded collection without the shard key or the _id field return an error.
See also:
findAndModify() (page 57)
Examples
Update Specific Fields To update specific fields in a document, use update operators (page 587) in the <update>
parameter.
For example, given a books collection with the following document:
{
_id: 1,
item: "TBD",
stock: 0,
info: { publisher: "1111", pages: 430 },
tags: [ "technology", "computer" ],
ratings: [ { by: "ijk", rating: 4 }, { by: "lmn", rating: 5 } ],
reorder: false
}
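As a client-side model of how operator expressions and dot notation modify such a document (applyUpdateOperators is an illustrative helper, not a MongoDB API; it handles only $set with dot notation and a top-level $inc):

```javascript
// Model of applying update operator expressions to a document.
function applyUpdateOperators(doc, update) {
  const out = JSON.parse(JSON.stringify(doc)); // work on a copy
  const setPath = (obj, path, value) => {
    const parts = path.split(".");
    let cur = obj;
    for (const p of parts.slice(0, -1)) cur = cur[p]; // dot notation walks into embedded docs
    cur[parts[parts.length - 1]] = value;
  };
  for (const [path, value] of Object.entries(update.$set || {})) setPath(out, path, value);
  for (const [field, amount] of Object.entries(update.$inc || {})) out[field] = (out[field] || 0) + amount;
  return out;
}

const book = { _id: 1, item: "TBD", stock: 0, info: { publisher: "1111", pages: 430 } };
const updated = applyUpdateOperators(book, {
  $inc: { stock: 5 },
  $set: { "info.publisher": "2222" } // dot notation reaches into the embedded document
});
console.log(updated.stock); // 5
console.log(updated.info);  // { publisher: '2222', pages: 430 } -- untouched fields survive
```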
See also:
$set (page 592), $inc (page 587), Update Operators (page 586), dot notation
Remove Fields The following operation uses the $unset (page 594) operator to remove the tags field:
db.books.update( { _id: 1 }, { $unset: { tags: 1 } } )
See also:
$unset (page 594), $rename (page 590), Update Operators (page 586)
Replace All Fields Given the following document in the books collection:
{
_id: 2,
item: "XYZ123",
stock: 15,
info: { publisher: "5555", pages: 150 },
tags: [ ],
ratings: [ { by: "xyz", rating: 5, comment: "ratings and reorder will go away after update"} ],
reorder: false
}
The following operation passes an <update> document that contains only field and value pairs. The <update>
document completely replaces the original document except for the _id field.
db.books.update(
{ item: "XYZ123" },
{
item: "XYZ123",
stock: 10,
info: { publisher: "2255", pages: 150 },
tags: [ "baking", "cooking" ]
}
)
The updated document contains only the fields from the replacement document and the _id field. That is, the fields
ratings and reorder no longer exist in the updated document since the fields were not in the replacement document.
{
"_id" : 2,
"item" : "XYZ123",
"stock" : 10,
"info" : { "publisher" : "2255", "pages" : 150 },
"tags" : [ "baking", "cooking" ]
}
Insert a New Document if No Match Exists The following update sets the upsert (page 118) option to true so
that update() (page 116) creates a new document in the books collection if no document matches the <query>
parameter:
db.books.update(
{ item: "ZZZ135" },
{
item: "ZZZ135",
stock: 5,
tags: [ "database" ]
},
{ upsert: true }
)
If no document matches the <query> parameter, the update operation inserts a document with only the fields and
values of the <update> document and a new unique ObjectId for the _id field:
{
"_id" : ObjectId("542310906694ce357ad2a1a9"),
"item" : "ZZZ135",
"stock" : 5,
"tags" : [ "database" ]
}
For more information on the upsert option and the inserted document, see Upsert Option (page 118).
Update Multiple Documents To update multiple documents, set the multi option to true. For example, the
following operation updates all documents where stock is less than or equal to 10:
db.books.update(
{ stock: { $lte: 10 } },
{ $set: { reorder: true } },
{ multi: true }
)
If the reorder field does not exist in the matching document(s), the $set (page 592) operator will add the field
with the specified value. See $set (page 592) for more information.
Override Default Write Concern The following operation on a replica set specifies a write concern of "w:
majority" with a wtimeout of 5000 milliseconds such that the method returns after the write propagates to a
majority of the voting replica set members or the method times out after 5 seconds.
Changed in version 3.0: In previous versions, majority referred to the majority of all members of the replica set.
db.books.update(
{ stock: { $lte: 10 } },
{ $set: { reorder: true } },
{
multi: true,
writeConcern: { w: "majority", wtimeout: 5000 }
}
)
Combine the upsert and multi Options Given a books collection that includes the following documents:
{
_id: 5,
item: "EFG222",
stock: 18,
info: { publisher: "0000", pages: 70 },
reorder: true
}
{
_id: 6,
item: "EFG222",
stock: 15,
info: { publisher: "1111", pages: 72 },
reorder: true
}
The following operation specifies both the multi option and the upsert option. If matching documents exist, the
operation updates all matching documents. If no matching documents exist, the operation inserts a new document.
db.books.update(
{ item: "EFG222" },
{ $set: { reorder: false, tags: [ "literature", "translated" ] } },
{ upsert: true, multi: true }
)
The operation updates all matching documents and results in the following:
{
"_id" : 5,
"item" : "EFG222",
"stock" : 18,
"info" : { "publisher" : "0000", "pages" : 70 },
"reorder" : false,
"tags" : [ "literature", "translated" ]
}
{
"_id" : 6,
"item" : "EFG222",
"stock" : 15,
"info" : { "publisher" : "1111", "pages" : 72 },
"reorder" : false,
"tags" : [ "literature", "translated" ]
}
If the collection had no matching document, the operation would result in the insertion of a document using the fields
from both the <query> and the <update> specifications:
{
"_id" : ObjectId("5423200e6694ce357ad2a1ac"),
"item" : "EFG222",
"reorder" : false,
"tags" : [ "literature", "translated" ]
}
For more information on the upsert option and the inserted document, see Upsert Option (page 118).
WriteResult Changed in version 2.6.
Successful Results The update() (page 116) method returns a WriteResult (page 288) object that contains
the status of the operation. Upon success, the WriteResult (page 288) object contains the number of documents
that matched the query condition, the number of documents inserted by the update, and the number of documents
modified:
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
See
WriteResult.nMatched (page 288), WriteResult.nUpserted (page 288), WriteResult.nModified
(page 288)
Write Concern Errors If the update() (page 116) method encounters write concern errors, the results include
the WriteResult.writeConcernError (page 289) field:
WriteResult({
"nMatched" : 1,
"nUpserted" : 0,
"nModified" : 1,
"writeConcernError" : {
"code" : 64,
"errmsg" : "waiting for replication timed out at shard-a"
}
})
See also:
WriteResult.hasWriteConcernError() (page 290)
Errors Unrelated to Write Concern If the update() (page 116) method encounters a non-write concern error,
the results include the WriteResult.writeError (page 289) field:
WriteResult({
"nMatched" : 0,
"nUpserted" : 0,
"nModified" : 0,
"writeError" : {
"code" : 7,
"errmsg" : "could not contact primary for replica set shard-a"
}
})
See also:
WriteResult.hasWriteError() (page 289)
Additional Resources
Quick Reference Cards5
db.collection.updateOne()
On this page
Definition (page 124)
Behavior (page 125)
Examples (page 125)
Definition
db.collection.updateOne(filter, update, options)
New in version 3.2.
Updates a single document within the collection based on the filter.
The updateOne() (page 124) method has the following form:
db.collection.updateOne(
<filter>,
<update>,
{
upsert: <boolean>,
writeConcern: <document>
}
)
5 https://www.mongodb.com/lp/misc/quick-reference-cards?jmp=docs
param boolean upsert Optional. When true, if no documents match the filter, a new document is created using the equality comparisons in filter with the modifications from
update.
Comparison (page 519) operations from the filter will not be included in the new document.
If the filter only has comparison operations, then only the modifications from the update
will be applied to the new document.
See Update with Upsert (page 126)
param document writeConcern Optional. A document expressing the write concern. Omit
to use the default write concern.
Returns
A document containing:
A boolean acknowledged as true if the operation ran with write concern or false if
write concern was disabled
matchedCount containing the number of matched documents
modifiedCount containing the number of modified documents
upsertedId containing the _id for the upserted document
Behavior updateOne() (page 124) updates the first matching document in the collection that matches the
filter, using the update instructions to apply modifications.
If upsert: true and no documents match the filter, updateOne() (page 124) creates a new document
based on the filter criteria and update modifications. See Update with Upsert (page 126).
Capped Collection updateOne() (page 124) throws a WriteError exception if the update criteria increases
the size of the first matching document in a capped collection.
Explainability updateOne() (page 124) is not compatible with db.collection.explain() (page 48).
Use update() (page 116) instead.
Examples
Update The restaurant collection contains the following documents:
{ "_id" : 1, "name" : "Central Perk Cafe", "Borough" : "Manhattan" },
{ "_id" : 2, "name" : "Rock A Feller Bar and Grill", "Borough" : "Queens", "violations" : "2" },
{ "_id" : 3, "name" : "Empire State Pub", "Borough" : "Brooklyn", "violations" : "0" }
try {
db.restaurant.updateOne(
{ "name" : "Central Perk Cafe" },
{ $set: { "violations" : 3 } }
);
}
catch (e) {
print(e);
}
Setting upsert: true would insert the document if no match was found. See Update with Upsert (page 126).
Update with Upsert The restaurant collection contains the following documents:
{ "_id" : 1, "name" : "Central Perk Cafe", "Borough" : "Manhattan", "violations" : 3 },
{ "_id" : 2, "name" : "Rock A Feller Bar and Grill", "Borough" : "Queens", "violations" : "2" },
{ "_id" : 3, "name" : "Empire State Pub", "Borough" : "Brooklyn", "violations" : "0" }
try {
db.restaurant.updateOne(
{ "name" : "Pizza Rat's Pizzaria" },
{ $set: {"_id" : 4, "violations" : "7", "borough" : "Manhattan" } },
{ upsert: true }
);
}
catch (e) {
print(e);
}
Because upsert: true was specified and no document matched the filter, the document was inserted using the
filter and update criteria. The operation returns:
{
"acknowledged" : true,
"matchedCount" : 0,
"modifiedCount" : 0,
"upsertedId" : 4
}
"_id"
"_id"
"_id"
"_id"
:
:
:
:
1,
2,
3,
4,
"name"
"name"
"name"
"name"
:
:
:
:
The name field was filled in using the filter criteria, while the update operators were used to create the rest of
the document.
The following operation updates the first document with violations that are greater than 10:
try {
db.restaurant.updateOne(
{ "violations" : { $gt: 10} },
{ $set: { "Closed" : true } },
{ upsert: true }
);
}
catch (e) {
print(e);
}
"_id"
"_id"
"_id"
"_id"
"_id"
:
:
:
:
:
Since no documents matched the filter, and upsert was true, updateOne (page 124) inserted the document with
a generated _id and the update criteria only.
Update with Write Concern Given a three member replica set, the following operation specifies a w of majority
and a wtimeout of 100:
try {
db.restaurant.updateOne(
{ "name" : "Pizza Rat's Pizzaria" },
{ $inc: { "violations" : 3}, $set: { "Closed" : true } },
{ w: "majority", wtimeout: 100 }
);
}
catch (e) {
print(e);
}
If the primary and at least one secondary acknowledge each write operation within 100 milliseconds, it returns:
{ "acknowledged" : true, "matchedCount" : 1, "modifiedCount" : 1 }
If the acknowledgement takes longer than the wtimeout limit, the following exception is thrown:
WriteConcernError({
"code" : 64,
"errInfo" : {
"wtimeout" : true
},
"errmsg" : "waiting for replication timed out"
})
See also:
To update multiple documents, see db.collection.updateMany() (page 128).
db.collection.updateMany()
On this page
Definition (page 128)
Behavior (page 129)
Examples (page 129)
Definition
db.collection.updateMany(filter, update, options)
New in version 3.2.
Updates multiple documents within the collection based on the filter.
The updateMany() (page 128) method has the following form:
db.collection.updateMany(
<filter>,
<update>,
{
upsert: <boolean>,
writeConcern: <document>
}
)
param boolean upsert Optional. When true, if no documents match the filter, a new document is created using the equality comparisons in filter with the modifications from
update.
Comparison (page 519) operations from the filter will not be included in the new document.
If the filter only has comparison operations, then only the modifications from the update
will be applied to the new document.
See Update Multiple Documents with Upsert (page 130)
param document writeConcern Optional. A document expressing the write concern. Omit
to use the default write concern.
Returns
A document containing:
A boolean acknowledged as true if the operation ran with write concern or false if
write concern was disabled
matchedCount containing the number of matched documents

modifiedCount containing the number of modified documents

upsertedId containing the _id for the upserted document
"_id"
"_id"
"_id"
"_id"
:
:
:
:
1,
2,
3,
4,
"name"
"name"
"name"
"name"
:
:
:
:
The following operation updates all documents where violations are greater than 4 and $set (page 592) a flag
for review:
try {
db.restaurant.updateMany(
{ violations: { $gt: 4 } },
{ $set: { "Review" : true } }
);
}
catch (e) {
print(e);
}
"_id"
"_id"
"_id"
"_id"
:
:
:
:
1,
2,
3,
4,
"name"
"name"
"name"
"name"
:
:
:
:
Setting upsert:
129
Update Multiple Documents with Upsert The inspectors collection contains the following documents:
{ "_id" : 92412, "inspector" : "F. Drebin", "Sector" : 1, "Patrolling" : true },
{ "_id" : 92413, "inspector" : "J. Clouseau", "Sector" : 2, "Patrolling" : false },
{ "_id" : 92414, "inspector" : "J. Clouseau", "Sector" : 3, "Patrolling" : true },
{ "_id" : 92415, "inspector" : "R. Coltrane", "Sector" : 3, "Patrolling" : false }
The following operation updates all documents where inspector is "J. Clouseau" and Sector is 4:
try {
db.inspectors.updateMany(
{ "inspector" : "J. Clouseau", "Sector" : 4 },
{ $set: { "Patrolling" : false } },
{ upsert: true }
);
}
catch (e) {
print(e);
}
"_id"
"_id"
"_id"
"_id"
"_id"
:
:
:
:
:
92412,
92413,
92414,
92415,
92416,
"inspector"
"inspector"
"inspector"
"inspector"
"inspector"
:
:
:
:
:
"F.
"J.
"J.
"R.
"J.
Drebin", "Sector" :
Clouseau", "Sector"
Clouseau", "Sector"
Coltrane", "Sector"
Clouseau", "Sector"
1, "Patrolling" :
: 2, "Patrolling"
: 3, "Patrolling"
: 3, "Patrolling"
: 4, "Patrolling"
true },
: false },
: true },
: false },
: false }
No documents in the collection matched the filter, so a new document was created.
The following operation updates all documents with Sector greater than 4 for inspector "R. Coltrane":
try {
db.inspectors.updateMany(
{ "Sector" : { $gt : 4 }, "inspector" : "R. Coltrane" },
{ $set: { "Patrolling" : false } },
{ upsert: true }
);
}
catch (e) {
print(e);
}
"_id"
"_id"
"_id"
"_id"
"_id"
"_id"
:
:
:
:
:
:
92412,
92413,
92414,
92415,
92416,
92417,
"inspector"
"inspector"
"inspector"
"inspector"
"inspector"
"inspector"
:
:
:
:
:
:
"F.
"J.
"J.
"R.
"J.
"R.
true },
: false },
: true },
: false },
: false },
Since no documents matched the filter, and upsert was true, updateMany (page 128) inserted the document
with a generated _id, the equality operator from filter, and the update modifiers.
Update with Write Concern Given a three member replica set, the following operation specifies a w of majority
and wtimeout of 100:
try {
db.restaurant.updateMany(
{ "name" : "Pizza Rat's Pizzaria" },
{ $inc: { "violations" : 3}, $set: { "Closed" : true } },
{ w: "majority", wtimeout: 100 }
);
}
catch (e) {
print(e);
}
If the acknowledgement takes longer than the wtimeout limit, the following exception is thrown:
WriteConcernError({
"code" : 64,
"errInfo" : {
"wtimeout" : true
},
"errmsg" : "waiting for replication timed out"
})
The wtimeout error only indicates that the operation did not complete on time. The write operation itself can still
succeed outside of the set time limit.
db.collection.validate()
On this page
Description (page 131)
Description
db.collection.validate(full)
Validates a collection. The method scans a collection's data structures for correctness and returns a single
document that describes the relationship between the logical collection and the physical representation of the
data.
The validate() (page 131) method has the following parameter:
param boolean full Optional. Specify true to enable a full validation and to return full statistics. MongoDB disables full validation by default because it is a potentially resource-intensive
operation.
The validate() (page 131) method output provides an in-depth view of how the collection uses storage. Be
aware that this command is potentially resource intensive and may impact the performance of your MongoDB
instance.
The validate() (page 131) method is a wrapper around the validate (page 484) database command.
See also:
validate (page 484)
2.1.2 Cursor
Cursor Methods
These methods modify the way that the underlying query is executed.
cursor.batchSize() (page 134): Controls the number of documents MongoDB will return to the client in a single network message.
cursor.close() (page 134): Close a cursor and free associated server resources.
cursor.comment() (page 135): Attaches a comment to the query to allow for traceability in the logs and the system.profile collection.
cursor.count() (page 136): Modifies the cursor to return the number of documents in the result set rather than the documents themselves.
cursor.explain() (page 139): Reports on the query execution plan for a cursor.
cursor.forEach() (page 140): Applies a JavaScript function for every document in a cursor.
cursor.hasNext() (page 141): Returns true if the cursor has documents and can be iterated.
cursor.hint() (page 141): Forces MongoDB to use a specific index for a query.
cursor.itcount() (page 142): Computes the total number of documents in the cursor client-side by fetching and iterating the result set.
cursor.limit() (page 143): Constrains the size of a cursor's result set.
cursor.map() (page 143): Applies a function to each document in a cursor and collects the return values in an array.
cursor.maxScan() (page 144): Specifies the maximum number of items to scan; documents for collection scans, keys for index scans.
cursor.maxTimeMS() (page 145): Specifies a cumulative time limit in milliseconds for processing operations on a cursor.
cursor.max() (page 146): Specifies an exclusive upper index bound for a cursor. For use with cursor.hint() (page 141).
cursor.min() (page 147): Specifies an inclusive lower index bound for a cursor. For use with cursor.hint() (page 141).
cursor.next() (page 149): Returns the next document in a cursor.
cursor.noCursorTimeout() (page 149): Instructs the server to avoid closing a cursor automatically after a period of inactivity.
cursor.objsLeftInBatch() (page 150): Returns the number of documents left in the current cursor batch.
cursor.pretty() (page 150): Configures the cursor to display results in an easy-to-read format.
cursor.readConcern() (page 151): Specifies a read concern for a find() (page 51) operation.
cursor.readPref() (page 151): Specifies a read preference to a cursor to control how the client directs queries to a replica set.
cursor.returnKey() (page 152): Modifies the cursor to return index keys rather than the documents.
cursor.showRecordId() (page 153): Adds an internal storage engine ID field to each document returned by the cursor.
cursor.size() (page 154): Returns a count of the documents in the cursor after applying skip() (page 154) and limit() (page 143) methods.
cursor.skip() (page 154): Returns a cursor that begins returning results only after passing or skipping a number of documents.
cursor.snapshot() (page 155): Forces the cursor to use the index on the _id field. Ensures that the cursor returns each document, with regards to the value of the _id field, only once.
cursor.sort() (page 155): Returns results ordered according to a sort specification.
cursor.tailable() (page 159): Marks the cursor as tailable. Only valid for cursors over capped collections.
cursor.toArray() (page 160): Returns an array that contains all documents returned by the cursor.
cursor.batchSize()
On this page
Definition (page 134)
Example (page 134)
Definition
cursor.batchSize(size)
Specifies the number of documents to return in each batch of the response from the MongoDB instance. In most
cases, modifying the batch size will not affect the user or the application, as the mongo (page 794) shell and
most drivers return results as if MongoDB returned a single batch.
The batchSize() (page 134) method takes the following parameter:
param integer size The number of documents to return per batch. Do not use a batch size of 1.
Note: Specifying 1 or a negative number is analogous to using the limit() (page 143) method.
Example The following example sets the batch size for the results of a query (i.e. find() (page 51)) to 10. The
batchSize() (page 134) method does not change the output in the mongo (page 794) shell, which, by default,
iterates over the first 20 documents.
db.inventory.find().batchSize(10)
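The chunking can be pictured client-side as follows. This is a model of how a result set splits into batches, not of the wire protocol or driver internals:

```javascript
// Model of a result set split into fixed-size batches: the server
// returns up to `batchSize` documents per network message.
function* batches(docs, batchSize) {
  for (let i = 0; i < docs.length; i += batchSize) {
    yield docs.slice(i, i + batchSize);
  }
}

const results = Array.from({ length: 25 }, (_, i) => ({ _id: i }));
const batchLengths = [...batches(results, 10)].map(b => b.length);
console.log(batchLengths); // [ 10, 10, 5 ]
```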
cursor.close()
Definition
cursor.close()
Instructs the server to close a cursor and free associated server resources. The server will automatically close
cursors that have no remaining results, as well as cursors that have been idle for a period of time and lack the
cursor.noCursorTimeout() (page 149) option.
The close() (page 134) method has the following prototype form:
db.collection.find(<query>).close()
cursor.comment()
On this page
Definition
cursor.comment()
New in version 3.2.
Adds a comment field to the query.
cursor.comment() (page 135) has the following syntax:
cursor.comment( <string> )
Output Examples
system.profile The following is an excerpt from the system.profile (page 885):
{
"op" : "query",
"ns" : "guidebook.restaurant",
"query" : {
"find" : "restaurant",
"filter" : {
"borough" : "Manhattan"
},
"comment" : "Find all Manhattan restaurants"
},
...
}
mongod log The following is an excerpt from the mongod (page 762) log. It has been formatted for readability.
Important: The verbosity level for QUERY (page 956) must be greater than 0. See Configure Log Verbosity Levels
(page 957).
db.currentOp() Suppose the following operation is currently running on a mongod (page 762) instance:
db.restaurant.find(
{ "borough" : "Manhattan" }
).comment("Find all Manhattan restaurants")
cursor.count()
Definition
cursor.count()
Counts the number of documents referenced by a cursor. Append the count() (page 136) method to a find()
(page 51) query to return the number of matching documents. The operation does not perform the query but
instead counts the results that would be returned by the query.
Changed in version 2.6: MongoDB supports the use of hint() (page 141) with count() (page 136). See
Specify the Index to Use (page 138) for an example.
The count() (page 136) method has the following prototype form:
db.collection.find(<query>).count()
To get a count of documents that match a query condition, include the $match (page 627) stage as well:
db.collection.aggregate(
[
{ $match: <query condition> },
{ $group: { _id: null, count: { $sum: 1 } } }
]
)
When performing a count, MongoDB can return the count using only the index if:
the query can use an index,
the query only contains conditions on the keys of the index, and
the query predicates access a single contiguous range of index keys.
If, however, the query can use an index but the query predicates do not access a single contiguous range of index keys,
or the query also contains conditions on fields outside the index, then in addition to using the index, MongoDB must
also read the documents to return the count, as in the following operations:
db.collection.find( { a: 5, b: { $in: [ 1, 2, 3 ] } } ).count()
db.collection.find( { a: { $gt: 5 }, b: 5 } ).count()
db.collection.find( { a: 5, b: 5, c: 5 } ).count()
In such cases, during the initial read of the documents, MongoDB pages the documents into memory such that subsequent calls of the same count operation will have better performance.
Examples The following are examples of the count() (page 136) method.
Count All Documents The following operation counts the number of all documents in the orders collection:
db.orders.find().count()
Count Documents That Match a Query The following operation counts the number of documents in the
orders collection with the field ord_dt greater than new Date('01/01/2012'):
db.orders.find( { ord_dt: { $gt: new Date('01/01/2012') } } ).count()
Limit Documents in Count The following operation counts the number of documents in the orders collection
with the field ord_dt greater than new Date('01/01/2012'), taking into account the effect of the limit(5):
db.orders.find( { ord_dt: { $gt: new Date('01/01/2012') } } ).limit(5).count(true)
Specify the Index to Use The following operation uses the index named "status_1", which has the index key
specification of { status: 1 }, to return a count of the documents in the orders collection with the field
ord_dt greater than new Date('01/01/2012') and the status field equal to "D":
db.orders.find(
{ ord_dt: { $gt: new Date('01/01/2012') }, status: "D" }
).hint( "status_1" ).count()
cursor.explain()
Definition
cursor.explain(verbosity)
Changed in version 3.0: The parameter to the method and the output format have changed in 3.0.
Provides information on the query plan for the db.collection.find() (page 51) method.
The explain() (page 139) method has the following form:
db.collection.find().explain()
db.collection.explain().find()
db.collection.explain().find() is similar to db.collection.find().explain() (page 139) with the following key differences:
The db.collection.explain().find() construct allows for the additional chaining of query modifiers. For a list of
query modifiers, see db.collection.explain().find().help() (page 49).
The db.collection.explain().find() returns a cursor, which requires a call to .next(), or its
alias .finish(), to return the explain() results.
See db.collection.explain() (page 48) for more information.
Example The following example runs cursor.explain() (page 139) in executionStats (page 48) verbosity
mode to return the query planning and execution information for the specified db.collection.find() (page 51)
operation:
db.products.find(
{ quantity: { $gt: 50 }, category: "apparel" }
).explain("executionStats")
cursor.forEach()
Description
cursor.forEach(function)
Iterates the cursor to apply a JavaScript function to each document from the cursor.
The forEach() (page 140) method has the following prototype form:
db.collection.find().forEach(<function>)
param JavaScript function A JavaScript function to apply to each document from the cursor. The
<function> signature includes a single argument that is passed the current document to process.
Example The following example invokes the forEach() (page 140) method on the cursor returned by find()
(page 51) to print the name of each user in the collection:
db.users.find().forEach( function(myDoc) { print( "user: " + myDoc.name ); } );
See also:
cursor.map() (page 143) for similar functionality.
cursor.hasNext()
cursor.hasNext()
Returns Boolean.
cursor.hasNext() (page 141) returns true if the cursor returned by the db.collection.find()
(page 51) query can iterate further to return more documents.
cursor.hint()
Definition
cursor.hint(index)
Call this method on a query to override MongoDB's default index selection and query optimization
process. Use db.collection.getIndexes() (page 72) to return the list of current indexes on a collection.
The cursor.hint() (page 141) method has the following parameter:
param string, document index The index to hint or force MongoDB to use when performing the
query. Specify the index either by the index name or by the index specification document.
You can also specify { $natural : 1 } to force the query to perform a forwards collection scan, or { $natural : -1 } for a reverse collection scan.
Behavior When an index filter exists for the query shape, MongoDB ignores the hint() (page 141).
You cannot use hint() (page 141) if the query includes a $text (page 541) query expression.
Example The following example returns all documents in the collection named users using the index on the age
field:
db.users.find().hint( { age: 1 } )
You can also specify the index using the index name:
db.users.find().hint( "age_1" )
The following operations force a forward and a reverse collection scan, respectively:
db.users.find().hint( { $natural : 1 } )
db.users.find().hint( { $natural : -1 } )
See also:
https://docs.mongodb.org/manual/core/indexes-introduction
https://docs.mongodb.org/manual/administration/indexes
https://docs.mongodb.org/manual/core/query-plans
index-filters
$hint
cursor.itcount()
Definition
cursor.itcount()
Counts the number of documents remaining in a cursor.
itcount() (page 142) is similar to cursor.count() (page 136), but actually executes the query on an
existing iterator, exhausting its contents in the process.
The itcount() (page 142) method has the following prototype form:
db.collection.find(<query>).itcount()
See also:
cursor.count() (page 136)
cursor.limit()
Definition
cursor.limit()
Use the limit() (page 143) method on a cursor to specify the maximum number of documents the cursor will
return. limit() (page 143) is analogous to the LIMIT statement in a SQL database.
Note: You must apply limit() (page 143) to the cursor before retrieving any documents from the database.
Use limit() (page 143) to maximize performance and prevent MongoDB from returning more results than
required for processing.
Behavior
Supported Values
The behavior of limit() (page 143) is undefined for values less than -2^31 and greater than 2^31.
Zero Value A limit() (page 143) value of 0 (i.e. .limit(0) (page 143)) is equivalent to setting no limit.
Negative Values A negative limit is similar to a positive limit but closes the cursor after returning a single batch of
results. As such, with a negative limit, if the limited result set does not fit into a single batch, the number of documents
received will be less than the specified limit. By passing a negative limit, the client indicates to the server that it will
not ask for a subsequent batch via getMore.
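The zero and negative cases can be sketched in a few lines of Node.js (an illustration of the semantics described above, not server code):

```javascript
// Sketch of limit() semantics: 0 means no limit; a positive limit may
// span several batches; a negative limit returns at most one batch.
function applyLimit(docs, limit, batchSize) {
  if (limit === 0) return docs.slice();              // .limit(0): no limit
  if (limit > 0) return docs.slice(0, limit);        // may span batches
  return docs.slice(0, Math.min(-limit, batchSize)); // one batch, then close
}

const docs = Array.from({ length: 10 }, (_, i) => i + 1);
console.log(applyLimit(docs, 0, 4).length);  // 10: no limit
console.log(applyLimit(docs, 7, 4).length);  // 7: limit honored across batches
console.log(applyLimit(docs, -7, 4).length); // 4: capped at a single batch
```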
cursor.map()
cursor.map(function)
Applies function to each document visited by the cursor and collects the return values from successive
application into an array.
The cursor.map() (page 143) method has the following parameter:
param function function A function to apply to each document visited by the cursor.
Example
db.users.find().map( function(u) { return u.name; } );
See also:
cursor.forEach() (page 140) for similar functionality.
cursor.maxScan()
Definition
cursor.maxScan()
New in version 3.2.
Specifies a maximum number of documents or index keys the query plan will scan. Once the limit is reached,
the query terminates execution and returns the current batch of results.
maxScan() (page 144) has the following syntax:
cursor.maxScan( <maxScan> )
Example Consider a collection with the following documents:
{ _id : 1, ts : 100, status : "OK" },
{ _id : 2, ts : 200, status : "OK" },
{ _id : 3, ts : 300, status : "WARN" },
{ _id : 4, ts : 400, status : "DANGER" },
{ _id : 5, ts : 500, status : "WARN" },
{ _id : 6, ts : 600, status : "OK" },
{ _id : 7, ts : 700, status : "OK" },
{ _id : 8, ts : 800, status : "WARN" },
{ _id : 9, ts : 900, status : "WARN" },
{ _id : 10, ts : 1000, status : "OK" }
Assuming the query were answered with a collection scan, the following limits the number of documents to scan to
5:
db.collection.find( { "status" : "OK" } ).maxScan(5)
{ "_id" : 1, "ts" : 100, "status" : "OK" }
{ "_id" : 2, "ts" : 200, "status" : "OK" }
{ "_id" : 6, "ts" : 600, "status" : "OK" }
{ "_id" : 7, "ts" : 700, "status" : "OK" }
cursor.maxTimeMS()
Definition
cursor.maxTimeMS(<time limit>)
Specifies a cumulative time limit in milliseconds for processing operations on a cursor.
Behaviors MongoDB targets operations for termination if the associated cursor exceeds its allotted time limit.
MongoDB terminates operations that exceed their allotted time limit, using the same mechanism as db.killOp()
(page 191). MongoDB only terminates an operation at one of its designated interrupt points.
MongoDB does not count network latency towards a cursor's time limit.
Queries that generate multiple batches of results continue to return batches until the cursor exceeds its allotted time
limit.
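The interrupt-point behavior can be illustrated with a small Node.js sketch (an analogy only, not the server's implementation): a deadline is checked only between units of work, so an operation is interrupted at the next checkpoint after its time budget is exhausted.

```javascript
// Sketch: cooperative time limits checked at designated interrupt points.
// `steps` are units of work; the deadline is only tested between them.
function runWithDeadline(steps, maxTimeMS, now = () => Date.now()) {
  const deadline = now() + maxTimeMS;
  const results = [];
  for (const step of steps) {
    if (now() > deadline) {
      throw new Error("operation exceeded time limit");
    }
    results.push(step()); // work between checkpoints is never cut short
  }
  return results;
}

// Simulated clock so the example is deterministic:
let t = 0;
const work = cost => () => { t += cost; return cost; };
console.log(runWithDeadline([work(10), work(10)], 50, () => t).length); // 2
```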
Examples
Example
The following query specifies a time limit of 50 milliseconds:
db.collection.find({description: /August [0-9]+, 1969/}).maxTimeMS(50)
cursor.max()
Definition
cursor.max()
Specifies the exclusive upper bound for a specific index in order to constrain the results of find() (page 51).
max() (page 146) provides a way to specify an upper bound on compound key indexes.
The max() (page 146) method has the following parameter:
param document indexBounds The exclusive upper bound for the index keys.
The indexBounds parameter has the following prototype form:
{ field1: <max value>, field2: <max value2> ... fieldN: <max valueN> }
The fields correspond to all the keys of a particular index in order. You can explicitly specify the particular
index with the hint() (page 141) method. Otherwise, mongod (page 762) selects the index using the fields
in the indexBounds; however, if multiple indexes exist on same fields with different sort orders, the selection
of the index may be ambiguous.
See also:
min() (page 147).
max() (page 146) exists primarily to support the mongos (page 784) (sharding) process, and is a shell wrapper
around the query modifier $max.
Behavior
Interaction with Index Selection Because max() (page 146) requires an index on a field, and forces the query to
use this index, you may prefer the $lt (page 522) operator for the query if possible. Consider the following example:
db.products.find( { _id: 7 } ).max( { price: 1.39 } )
The query will use the index on the price field, even if the index on _id may be better.
Index Bounds If you use max() (page 146) with min() (page 147) to specify a range, the index bounds specified
in min() (page 147) and max() (page 146) must both refer to the keys of the same index.
max() without min() The min and max operators indicate that the system should avoid normal query planning.
Instead they construct an index scan where the index bounds are explicitly specified by the values given in min and
max.
Warning: If one of the two boundaries is not specified, the query plan will be an index scan that is unbounded
on one side. This may degrade performance compared to a query containing neither operator, or one that uses both
operators to more tightly constrain the index scan.
Example This example assumes a collection named products whose documents contain _id, item, type, and
price fields, and that has the following indexes:
{ "_id" : 1 }
{ "item" : 1, "type" : 1 }
{ "item" : 1, "type" : -1 }
{ "price" : 1 }
Using the ordering of the { item: 1, type: 1 } index, max() (page 146) limits the query to the
documents that are below the bound of item equal to apple and type equal to jonagold:
db.products.find().max( { item: 'apple', type: 'jonagold' } ).hint( { item: 1, type: 1 } )
If the query did not explicitly specify the index with the hint() (page 141) method, it is ambiguous as to
whether mongod (page 762) would select the { item: 1, type: 1 } index ordering or the { item:
1, type: -1 } index ordering.
Using the ordering of the index { price: 1 }, max() (page 146) limits the query to the documents that are
below the index key bound of price equal to 1.99 and min() (page 147) limits the query to the documents
that are at or above the index key bound of price equal to 1.39:
db.products.find().min( { price: 1.39 } ).max( { price: 1.99 } ).hint( { price: 1 } )
cursor.min()
Definition
cursor.min()
Specifies the inclusive lower bound for a specific index in order to constrain the results of find() (page 51).
min() (page 147) provides a way to specify lower bounds on compound key indexes.
The min() (page 147) method has the following parameter:
param document indexBounds The inclusive lower bound for the index keys.
The indexBounds parameter has the following prototype form:
{ field1: <min value>, field2: <min value2>, ... fieldN: <min valueN> }
The fields correspond to all the keys of a particular index in order. You can explicitly specify the particular
index with the hint() (page 141) method. Otherwise, MongoDB selects the index using the fields in the
indexBounds; however, if multiple indexes exist on same fields with different sort orders, the selection of the
index may be ambiguous.
See also:
max() (page 146).
min() (page 147) exists primarily to support the mongos (page 784) process, and is a shell wrapper around
the query modifier $min.
Behaviors
Interaction with Index Selection Because min() (page 147) requires an index on a field, and forces the query
to use this index, you may prefer the $gte (page 522) operator for the query if possible. Consider the following
example:
db.products.find( { _id: 7 } ).min( { price: 1.39 } )
The query will use the index on the price field, even if the index on _id may be better.
Index Bounds If you use min() (page 147) with max() (page 146) to specify a range, the index bounds specified
in min() (page 147) and max() (page 146) must both refer to the keys of the same index.
min() without max() The min and max operators indicate that the system should avoid normal query planning.
Instead they construct an index scan where the index bounds are explicitly specified by the values given in min and
max.
Warning: If one of the two boundaries is not specified, the query plan will be an index scan that is unbounded
on one side. This may degrade performance compared to a query containing neither operator, or one that uses both
operators to more tightly constrain the index scan.
Example This example assumes a collection named products whose documents contain _id, item, type, and
price fields, and that has the following indexes:
{ "_id" : 1 }
{ "item" : 1, "type" : 1 }
{ "item" : 1, "type" : -1 }
{ "price" : 1 }
Using the ordering of the { item: 1, type: 1 } index, min() (page 147) limits the query to the
documents that are at or above the index key bound of item equal to apple and type equal to jonagold,
as in the following:
db.products.find().min( { item: 'apple', type: 'jonagold' } ).hint( { item: 1, type: 1 } )
"_id"
"_id"
"_id"
"_id"
"_id"
"_id"
"_id"
:
:
:
:
:
:
:
If the query did not explicitly specify the index with the hint() (page 141) method, it is ambiguous as to
whether mongod (page 762) would select the { item: 1, type: 1 } index ordering or the { item:
1, type: -1 } index ordering.
Using the ordering of the index { price: 1 }, min() (page 147) limits the query to the documents that
are at or above the index key bound of price equal to 1.39 and max() (page 146) limits the query to the
documents that are below the index key bound of price equal to 1.99:
db.products.find().min( { price: 1.39 } ).max( { price: 1.99 } ).hint( { price: 1 } )
cursor.next()
cursor.next()
Returns The next document in the cursor returned by the db.collection.find() (page 51)
method. See cursor.hasNext() (page 141) for related functionality.
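A minimal cursor-like object (a sketch; the real shell cursor carries more state) shows the canonical hasNext()/next() loop:

```javascript
// Sketch: wrap an array in a hasNext()/next() interface and drain it the
// way one iterates a find() cursor in the shell.
function makeCursor(docs) {
  let i = 0;
  return {
    hasNext: () => i < docs.length,
    next: () => {
      if (i >= docs.length) throw new Error("no more documents");
      return docs[i++];
    },
  };
}

const cursor = makeCursor([{ _id: 1 }, { _id: 2 }, { _id: 3 }]);
const ids = [];
while (cursor.hasNext()) {
  ids.push(cursor.next()._id); // safe: hasNext() was checked first
}
console.log(ids.join(",")); // 1,2,3
```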
cursor.noCursorTimeout()
Definition
cursor.noCursorTimeout()
Instructs the server to avoid closing a cursor automatically after a period of inactivity.
The noCursorTimeout() (page 149) method has the following prototype form:
db.collection.find(<query>).noCursorTimeout()
cursor.objsLeftInBatch()
cursor.objsLeftInBatch()
cursor.objsLeftInBatch() (page 150) returns the number of documents remaining in the current batch.
The MongoDB instance returns responses in batches. Retrieving all the documents from a cursor may require
multiple batch responses from the MongoDB instance. When there are no more documents remaining in the
current batch, the cursor retrieves another batch until the cursor is exhausted.
cursor.pretty()
Definition
cursor.pretty()
Configures the cursor to display results in an easy-to-read format.
The pretty() (page 150) method has the following prototype form:
db.collection.find(<query>).pretty()
Examples Consider a books collection with the following document:
db.books.save({
"_id" : ObjectId("54f612b6029b47909a90ce8d"),
"title" : "A Tale of Two Cities",
"text" : "It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness ...",
"authorship" : "Charles Dickens"})
By default, db.books.find() returns the document on a single line:
db.books.find()
{ "_id" : ObjectId("54f612b6029b47909a90ce8d"), "title" : "A Tale of Two Cities", "text" : "It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness ...", "authorship" : "Charles Dickens" }
By using cursor.pretty() (page 150) you can set the cursor to return data in a format that is easier for humans
to parse:
db.books.find().pretty()
{
"_id" : ObjectId("54f612b6029b47909a90ce8d"),
"title" : "A Tale of Two Cities",
"text" : "It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness ...",
"authorship" : "Charles Dickens"
}
cursor.readConcern()
Definition
cursor.readConcern(level)
New in version 3.2.
Specify a read concern for the db.collection.find() (page 51) method.
The readConcern() (page 151) method has the following form:
db.collection.find().readConcern(<level>)
"local" or
To use a read concern level of "majority", you must use the WiredTiger storage engine and start the mongod
(page 762) instances with the --enableMajorityReadConcern (page 773) command line option (or the
replication.enableMajorityReadConcern (page 914) setting if using a configuration file).
Only replica sets using protocol version 1 support "majority" read concern. Replica sets running
protocol version 0 do not support "majority" read concern.
See also:
https://docs.mongodb.org/manual/reference/read-concern
cursor.readPref()
Definition
cursor.readPref(mode, tagSet)
Append readPref() (page 151) to a cursor to control how the client routes the query to members of the
replica set.
param string mode One of the following read preference modes: primary,
primaryPreferred, secondary, secondaryPreferred, or nearest
param array tagSet Optional. A tag set used to specify custom read preference modes. For details,
see replica-set-read-preference-tag-sets.
Note: You must apply readPref() (page 151) to the cursor before retrieving any documents from the
database.
cursor.returnKey()
Definition
cursor.returnKey()
New in version 3.2.
Modifies the cursor to return index keys rather than the documents.
The cursor.returnKey() (page 152) has the following form:
cursor.returnKey()
Returns The cursor that returnKey() (page 152) is attached to with a modified result set. This
allows for additional cursor modifiers to be chained.
Behavior If the query does not use an index to perform the read operation, the cursor returns empty documents.
Example The restaurants collection contains documents with the following schema:
{
"_id" : ObjectId("564f3a35b385149fc7e3fab9"),
"address" : {
"building" : "2780",
"coord" : [
-73.98241999999999,
40.579505
],
"street" : "Stillwell Avenue",
"zipcode" : "11224"
},
"borough" : "Brooklyn",
"cuisine" : "American ",
"grades" : [
{
"date" : ISODate("2014-06-10T00:00:00Z"),
"grade" : "A",
"score" : 5
},
{
"date" : ISODate("2013-06-05T00:00:00Z"),
"grade" : "A",
"score" : 7
}
],
"name" : "Riviera Caterer",
"restaurant_id" : "40356018"
}
The collection has two indexes in addition to the default _id index:
{
"v" : 1,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "guidebook.restaurant"
},
{
"v" : 1,
"key" : {
"cuisine" : 1
},
"name" : "cuisine_1",
"ns" : "guidebook.restaurant"
},
{
"v" : 1,
"key" : {
"_fts" : "text",
"_ftsx" : 1
},
"name" : "name_text",
"ns" : "guidebook.restaurant",
"weights" : {
"name" : 1
},
"default_language" : "english",
"language_override" : "language",
"textIndexVersion" : 3
}
The following code uses the cursor.returnKey() (page 152) method to return only the indexed fields used for
executing the query:
var csr = db.restaurant.find( { "cuisine" : "Japanese" } )
csr.returnKey()
{ "cuisine" : "Japanese" }
{ "cuisine" : "Japanese" }
{ "cuisine" : "Japanese" }
{ "cuisine" : "Japanese" }
cursor.showRecordId()
cursor.showRecordId()
Changed in version 3.2: This method replaces the previous cursor.showDiskLoc().
Modifies the output of a query by adding a field $recordId to matching documents. $recordId is the
internal key which uniquely identifies a document in a collection. It has the form:
"$recordId": NumberLong(<int>)
Returns A modified cursor object that contains documents with appended information describing
the internal record key.
Example The following operation appends the showRecordId() (page 153) method to the
db.collection.find() (page 51) method in order to include storage engine record information in the
matching documents:
db.collection.find( { a: 1 } ).showRecordId()
The operation returns the following documents, which include the $recordId field:
{
"_id" : ObjectId("53908ccb18facd50a75bfbac"),
"a" : 1,
"b" : 1,
"$recordId" : NumberLong(168112)
}
{
"_id" : ObjectId("53908cd518facd50a75bfbad"),
"a" : 1,
"b" : 2,
"$recordId" : NumberLong(168176)
}
You can project the added field $recordId, as in the following example:
db.collection.find( { a: 1 }, { $recordId: 1 } ).showRecordId()
This query returns only the _id field and the $recordId field in the matching documents:
{
"_id" : ObjectId("53908ccb18facd50a75bfbac"),
"$recordId" : NumberLong(168112)
}
{
"_id" : ObjectId("53908cd518facd50a75bfbad"),
"$recordId" : NumberLong(168176)
}
cursor.size()
cursor.size()
Returns A count of the number of documents that match the db.collection.find()
(page 51) query after applying any cursor.skip() (page 154) and cursor.limit()
(page 143) methods.
cursor.skip()
cursor.skip()
Call the cursor.skip() (page 154) method on a cursor to control where MongoDB begins returning results.
The cursor.skip() (page 154) method is often expensive because it requires the server to walk from the
beginning of the collection or index to get the offset or skip position before beginning to return results. As the
offset increases, cursor.skip() (page 154) will become slower and more CPU
intensive. With larger collections, cursor.skip() (page 154) may become IO bound.
Consider using range-based pagination for these kinds of tasks. That is, query for a range of objects, using logic
within the application to determine the pagination rather than the database itself. This approach features better
index utilization, if you do not need to easily jump to a specific page.
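The trade-off can be sketched in Node.js over a sorted array standing in for an indexed collection (illustrative only; the function names are made up):

```javascript
// Sketch: skip-based vs range-based pagination over sorted data.
const docs = Array.from({ length: 100 }, (_, i) => ({ _id: i + 1 }));

// Skip-based: conceptually walks past `offset` documents on every page.
function pageBySkip(pageNumber, pageSize) {
  return docs.slice(pageNumber * pageSize, (pageNumber + 1) * pageSize);
}

// Range-based: resume from the last _id seen, analogous to
// find( { _id: { $gt: lastId } } ).limit(pageSize) using the _id index.
function pageByRange(lastId, pageSize) {
  const start = docs.findIndex(d => d._id > lastId);
  return start === -1 ? [] : docs.slice(start, start + pageSize);
}

const a = pageBySkip(3, 10).map(d => d._id);   // fourth page
const b = pageByRange(30, 10).map(d => d._id); // resume after _id 30
console.log(a[0], b[0]); // 31 31 — same page, reached differently
```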
cursor.snapshot()
cursor.snapshot()
Append the snapshot() (page 155) method to a cursor to toggle the snapshot mode. This ensures that the
query will not return a document multiple times, even if intervening write operations result in a move of the
document due to the growth in document size.
Warning:
You must apply snapshot() (page 155) to the cursor before retrieving any documents from the
database.
You can only use snapshot() (page 155) with unsharded collections.
The snapshot() (page 155) does not guarantee isolation from insertion or deletions.
The snapshot() (page 155) traverses the index on the _id field. As such, snapshot() (page 155) cannot
be used with sort() (page 155) or hint() (page 141).
cursor.sort()
Definition
cursor.sort(sort)
Specifies the order in which the query returns matching documents. You must apply sort() (page 155) to the
cursor before retrieving any documents from the database.
The sort document can specify ascending or descending sort on existing fields (page 156) or sort on computed
metadata (page 157).
Behaviors
Result Ordering Unless you specify the sort() (page 155) method or use the $near (page 557) operator, MongoDB does not guarantee the order of query results.
Ascending/Descending Sort Specify in the sort parameter the field or fields to sort by and a value of 1 or -1 to
specify an ascending or descending sort respectively.
The following sample document specifies a descending sort by the age field and then an ascending sort by the posts
field:
{ age : -1, posts: 1 }
When comparing values of different BSON types, MongoDB uses the following comparison order, from lowest to
highest:
1. MinKey (internal type)
2. Null
3. Numbers (ints, longs, doubles)
4. Symbol, String
5. Object
6. Array
7. BinData
8. ObjectId
9. Boolean
10. Date
11. Timestamp
12. Regular Expression
13. MaxKey (internal type)
MongoDB treats some types as equivalent for comparison purposes. For instance, numeric types undergo conversion
before comparison.
Changed in version 3.0.0: Date objects sort before Timestamp objects. Previously Date and Timestamp objects sorted
together.
The comparison treats a non-existent field as it would an empty BSON Object. As such, a sort on the a field in
documents { } and { a: null } would treat the documents as equivalent in sort order.
With arrays, a less-than comparison or an ascending sort compares the smallest element of arrays, and a greater-than
comparison or a descending sort compares the largest element of the arrays. As such, when comparing a field whose
value is a single-element array (e.g. [ 1 ]) with non-array fields (e.g. 2), the comparison is between 1 and 2. A
comparison of an empty array (e.g. [ ]) treats the empty array as less than null or a missing field.
MongoDB sorts BinData in the following order:
1. First, the length or size of the data.
2. Then, by the BSON one-byte subtype.
3. Finally, by the data, performing a byte-by-byte comparison.
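As an illustrative sketch (scalar types only; not the server's BSON comparator), the ordering above can be expressed as a JavaScript comparison function:

```javascript
// Sketch: rank scalar values by the BSON comparison order listed above,
// then compare values only within the same rank.
function typeRank(v) {
  if (v === null || v === undefined) return 2; // Null
  if (typeof v === "number") return 3;         // Numbers
  if (typeof v === "string") return 4;         // Symbol, String
  if (Array.isArray(v)) return 6;              // Array
  if (typeof v === "boolean") return 9;        // Boolean
  if (v instanceof Date) return 10;            // Date
  return 5;                                    // Object
}

function bsonCompare(a, b) {
  const ra = typeRank(a), rb = typeRank(b);
  if (ra !== rb) return ra - rb; // different types: the rank decides
  if (a < b) return -1;          // same type: compare the values
  if (a > b) return 1;
  return 0;
}

const mixed = [true, "apple", 7, null, 2];
mixed.sort(bsonCompare);
console.log(JSON.stringify(mixed)); // [null,2,7,"apple",true]
```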
Metadata Sort Specify in the sort parameter a new field name for the computed metadata and specify the $meta
(page 584) expression as its value.
The following sample document specifies a descending sort by the "textScore" metadata:
{ score: { $meta: "textScore" } }
The specified metadata determines the sort order. For example, the "textScore" metadata sorts in descending
order. See $meta (page 584) for details.
Restrictions When unable to obtain the sort order from an index, MongoDB will sort the results in memory, which
requires that the result set being sorted is less than 32 megabytes.
When the sort operation consumes more than 32 megabytes, MongoDB returns an error. To avoid this error, either
create an index supporting the sort operation (see Sort and Index Use (page 157)) or use sort() (page 155) in
conjunction with limit() (page 143) (see Limit Results (page 157)).
Sort and Index Use The sort can sometimes be satisfied by scanning an index in order. If the query plan uses an
index to provide the requested sort order, MongoDB does not perform an in-memory sorting of the result set. For more
information, see https://docs.mongodb.org/manual/tutorial/sort-results-with-indexes.
Limit Results You can use sort() (page 155) in conjunction with limit() (page 143) to return the first (in
terms of the sort order) k documents, where k is the specified limit.
If MongoDB cannot obtain the sort order via an index scan, then MongoDB uses a top-k sort algorithm. This algorithm
buffers the first k results (or last, depending on the sort order) seen so far by the underlying index or collection access.
If at any point the memory footprint of these k results exceeds 32 megabytes, the query will fail.
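The buffering strategy can be sketched in Node.js (illustrative; it omits the 32 megabyte cap and the server's actual data structures):

```javascript
// Sketch of a top-k sort: stream over the input while buffering only the
// k best documents seen so far.
function topK(docs, k, compare) {
  const buffer = [];
  for (const doc of docs) {
    buffer.push(doc);
    buffer.sort(compare);                // keep the buffer ordered
    if (buffer.length > k) buffer.pop(); // evict the worst candidate
  }
  return buffer; // equivalent to sort(compare) followed by limit(k)
}

const docs = [5, 1, 9, 3, 7, 2].map(n => ({ amount: n }));
const top3 = topK(docs, 3, (a, b) => b.amount - a.amount); // descending
console.log(top3.map(d => d.amount).join(",")); // 9,7,5
```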
Interaction with Projection When a set of results is both sorted and projected, the MongoDB query engine will
always apply the sorting first.
Examples A collection orders contains the following documents:
{ _id: 1, item: { category: "cake", type: "chiffon" }, amount: 10 }
{ _id: 2, item: { category: "cookies", type: "chocolate chip" }, amount: 50 }
{ _id: 3, item: { category: "cookies", type: "chocolate chip" }, amount: 15 }
{ _id: 4, item: { category: "cake", type: "lemon" }, amount: 30 }
{ _id: 5, item: { category: "cake", type: "carrot" }, amount: 20 }
{ _id: 6, item: { category: "brownies", type: "blondie" }, amount: 10 }
The following query, which returns all documents from the orders collection, does not specify a sort order:
db.orders.find()
{ "_id" : 1, "item" : { "category" : "cake", "type" : "chiffon" }, "amount" : 10 }
{ "_id" : 2, "item" : { "category" : "cookies", "type" : "chocolate chip" }, "amount" : 50 }
{ "_id" : 3, "item" : { "category" : "cookies", "type" : "chocolate chip" }, "amount" : 15 }
{ "_id" : 4, "item" : { "category" : "cake", "type" : "lemon" }, "amount" : 30 }
{ "_id" : 5, "item" : { "category" : "cake", "type" : "carrot" }, "amount" : 20 }
{ "_id" : 6, "item" : { "category" : "brownies", "type" : "blondie" }, "amount" : 10 }
The following query specifies a sort on the amount field in descending order.
db.orders.find().sort( { amount: -1 } )
"_id"
"_id"
"_id"
"_id"
"_id"
"_id"
:
:
:
:
:
:
2,
4,
5,
3,
1,
6,
"item"
"item"
"item"
"item"
"item"
"item"
:
:
:
:
:
:
{
{
{
{
{
{
"category"
"category"
"category"
"category"
"category"
"category"
:
:
:
:
:
:
The following query specifies the sort order using the fields from an embedded document item. The query sorts first
by the category field in ascending order, and then within each category, by the type field in ascending order.
db.orders.find().sort( { "item.category": 1, "item.type": 1 } )
The query returns the following documents, ordered first by the category field, and within each category, by the
type field:
{ "_id" : 6, "item" : { "category" : "brownies", "type" : "blondie" }, "amount" : 10 }
{ "_id" : 5, "item" : { "category" : "cake", "type" : "carrot" }, "amount" : 20 }
{ "_id" : 1, "item" : { "category" : "cake", "type" : "chiffon" }, "amount" : 10 }
{ "_id" : 4, "item" : { "category" : "cake", "type" : "lemon" }, "amount" : 30 }
{ "_id" : 2, "item" : { "category" : "cookies", "type" : "chocolate chip" }, "amount" : 50 }
{ "_id" : 3, "item" : { "category" : "cookies", "type" : "chocolate chip" }, "amount" : 15 }
Return in Natural Order The $natural parameter returns items according to their natural order within the
database. This ordering is an internal implementation feature, and you should not rely on any particular structure
within it.
Index Use Queries that include a sort by $natural order do not use indexes to fulfill the query predicate with the
following exception: If the query predicate is an equality condition on the _id field { _id: <value> }, then
the query with the sort by $natural order can use the _id index.
MMAPv1 Typically, the natural order reflects insertion order, with the following exception for the MMAPv1 storage
engine: for MMAPv1, the natural order does not reflect insertion order if documents relocate because of document
growth, or if remove operations free up space that newly inserted documents then occupy.
Consider the following example, which uses the MMAPv1 storage engine.
The following sequence of operations inserts documents into the trees collection:
db.trees.insert( { _id: 1, common_name: "oak", genus: "quercus" } )
db.trees.insert( { _id: 2, common_name: "banana", genus: "musa" } )
db.trees.insert( { _id: 3, common_name: "walnut", genus: "juglans" } )
db.trees.insert( { _id: 4, common_name: "pecan", genus: "carya" } )

At this point, the documents return in their natural order, which matches the insertion order:

{ "_id" : 1, "common_name" : "oak", "genus" : "quercus" }
{ "_id" : 2, "common_name" : "banana", "genus" : "musa" }
{ "_id" : 3, "common_name" : "walnut", "genus" : "juglans" }
{ "_id" : 4, "common_name" : "pecan", "genus" : "carya" }
Update a document such that the document outgrows its current allotted space:
db.trees.update(
{ _id: 1 },
{ $set: { famous_oaks: [ "Emancipation Oak", "Goethe Oak" ] } }
)
For MongoDB instances using MMAPv1, the documents return in the following natural order, which no longer reflects
the insertion order:
{ "_id" : 2, "common_name" : "banana", "genus" : "musa" }
{ "_id" : 3, "common_name" : "walnut", "genus" : "juglans" }
{ "_id" : 4, "common_name" : "pecan", "genus" : "carya" }
{ "_id" : 1, "common_name" : "oak", "genus" : "quercus", "famous_oaks" : [ "Emancipation Oak", "Goethe Oak" ] }
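The relocation behavior can be sketched in plain JavaScript as a toy model of record placement (illustrative only, not the actual MMAPv1 allocator):

```javascript
// Toy model: each document occupies a slot in file order. An update that
// outgrows its slot frees the old slot and relocates the document to the
// end, changing the natural (on-disk) order.
let slots = [ { _id: 1 }, { _id: 2 }, { _id: 3 }, { _id: 4 } ];

function growDocument(id, extraFields) {
  const i = slots.findIndex(d => d._id === id);
  const doc = Object.assign({}, slots[i], extraFields);
  slots.splice(i, 1); // old slot freed
  slots.push(doc);    // document relocated to the end
}

growDocument(1, { famous_oaks: [ "Emancipation Oak", "Goethe Oak" ] });
console.log(slots.map(d => d._id)); // [ 2, 3, 4, 1 ]
```

This mirrors the example: after the update, document 1 no longer occupies its original position, so natural order diverges from insertion order.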
See also:
$natural
cursor.tailable()
On this page
Definition (page 159)
Behavior (page 160)
Definition
cursor.tailable()
New in version 3.2.
Marks the cursor as tailable.
For use against a capped collection only. Using tailable (page 159) against a non-capped collection will
return an error.
cursor.tailable() (page 159) uses the following syntax:
cursor.toArray()
The toArray() (page 160) method returns an array that contains all the documents from a cursor. The method
completely iterates the cursor, loading all of the documents into RAM and exhausting the cursor.
Returns An array of documents.
Consider the following example that applies toArray() (page 160) to the cursor returned from the find()
(page 51) method:
var allProductsArray = db.products.find().toArray();
if (allProductsArray.length > 0) { printjson (allProductsArray[0]); }
The variable allProductsArray holds the array of documents returned by toArray() (page 160).
2.1.3 Database
Database Methods
Name - Description
db.cloneCollection() (page 161) - Copies data directly between MongoDB instances. Wraps cloneCollection (page 442).
db.cloneDatabase() (page 162) - Copies a database from a remote host to the current host. Wraps clone (page 441).
db.commandHelp() (page 163) - Returns help information for a database command.
db.copyDatabase() (page 163) - Copies a database to another database on the current host. Wraps copydb (page 432).
db.createCollection() (page 166) - Creates a new collection. Commonly used to create a capped collection.
db.currentOp() (page 170) - Reports the current in-progress operations.
db.dropDatabase() (page 176) - Removes the current database.
db.eval() (page 177) - Deprecated. Passes a JavaScript function to the mongod (page 762) instance for evaluation.
db.fsyncLock() (page 179) - Flushes writes to disk and locks the database to prevent write operations and assist backup operations.
db.fsyncUnlock() (page 180) - Allows writes to continue on a database locked with db.fsyncLock() (page 179).
db.getCollection() (page 180) - Returns a collection object. Used to access collections with names that are not valid in the mongo (page 794) shell.
Name - Description
db.getCollectionInfos() (page 181) - Returns collection information for all collections in the current database.
db.getCollectionNames() (page 184) - Lists all collections in the current database.
db.getLastError() (page 184) - Checks and returns the status of the last operation.
db.getLastErrorObj() (page 185) - Returns the status document for the last operation.
db.getLogComponents() (page 186) - Returns the log message verbosity levels.
db.getMongo() (page 187) - Returns the Mongo() connection object for the current connection.
db.getName() (page 187) - Returns the name of the current database.
db.getPrevError() (page 187) - Returns a status document containing all errors since the last error reset.
db.getProfilingLevel() (page 187) - Returns the current profiling level for database operations.
db.getProfilingStatus() (page 188) - Returns a document that reflects the current profiling level and the profiling threshold.
db.getReplicationInfo() (page 188) - Returns a document with replication statistics.
db.getSiblingDB() (page 189) - Provides access to the specified database.
db.help() (page 189) - Displays descriptions of common db object methods.
db.hostInfo() (page 190) - Returns a document with information about the system MongoDB runs on.
db.isMaster() (page 190) - Returns a document that reports the state of the replica set.
db.killOp() (page 191) - Terminates a specified operation.
db.listCommands() (page 191) - Displays a list of common database commands.
db.loadServerScripts() (page 191) - Loads all scripts in the system.js collection for the current database into the shell session.
db.logout() (page 191) - Ends an authenticated session.
db.printCollectionStats() (page 192) - Prints statistics from every collection.
db.printReplicationInfo() (page 192) - Prints a report of the status of the replica set from the perspective of the primary.
db.printShardingStatus() (page 193) - Prints a report of the sharding configuration and the chunk ranges.
db.printSlaveReplicationInfo() (page 194) - Prints a report of the status of the replica set from the perspective of the secondaries.
db.repairDatabase() (page 194) - Runs a repair routine on the current database.
db.resetError() (page 195) - Resets the last error status.
db.runCommand() (page 195) - Runs a database command.
db.serverBuildInfo() (page 195) - Returns a document that displays the compilation parameters for the mongod instance.
db.serverCmdLineOpts() (page 195) - Returns a document with the runtime options used to start the MongoDB instance.
db.serverStatus() (page 196) - Returns a document that provides an overview of the state of the database process.
db.setLogLevel() (page 196) - Sets a single log message verbosity level.
db.setProfilingLevel() (page 198) - Modifies the current level of database profiling.
db.shutdownServer() (page 198) - Shuts down the current mongod or mongos process cleanly and safely.
db.stats() (page 198) - Returns a document that reports on the state of the current database.
db.version() (page 199) - Returns the version of the mongod instance.
db.upgradeCheck() (page 199) - Performs a preliminary check for upgrade preparedness for a specific database or collection.
db.upgradeCheckAllDBs() (page 201) - Performs a preliminary check for upgrade preparedness for all databases and collections.
db.cloneCollection()
On this page
Definition (page 161)
Behavior (page 162)
Definition
db.cloneCollection(from, collection, query)
Copies data directly between MongoDB instances. The db.cloneCollection() (page 161) method wraps
the cloneCollection (page 442) database command and accepts the following arguments:
db.cloneDatabase()
On this page
Definition (page 162)
Example (page 162)
Definition
db.cloneDatabase(hostname)
Copies a remote database to the current database. The command assumes that the remote database has the same
name as the current database.
param string hostname The hostname of the database to copy.
This method provides a wrapper around the MongoDB database command clone (page 441). The copydb
(page 432) database command provides related functionality.
Example To clone a database named importdb on a host named hostname, issue the following:
use importdb
db.cloneDatabase("hostname")
New databases are implicitly created, so the current host does not need to have a database named importdb for this
command to succeed.
db.commandHelp()
On this page
Description (page 163)
Description
db.commandHelp(command)
Displays help text for the specified database command. See the Database Commands (page 302).
The db.commandHelp() (page 163) method has the following parameter:
param string command The name of a database command.
db.copyDatabase()
On this page
Definition
db.copyDatabase(fromdb, todb, fromhost, username, password, mechanism)
Changed in version 3.0: When authenticating to the fromhost instance, db.copyDatabase() (page 163)
supports MONGODB-CR and SCRAM-SHA-1 mechanisms to authenticate the fromhost user.
Copies a database either from one mongod (page 762) instance to the current mongod (page 762) instance or
within the current mongod (page 762). db.copyDatabase() (page 163) wraps the copydb (page 432)
command and takes the following arguments:
param string fromdb Name of the source database.
param string todb Name of the target database.
param string fromhost Optional. The hostname of the source mongod (page 762) instance. Omit
to copy databases within the same mongod (page 762) instance.
param string username Optional. The name of the user on the fromhost MongoDB instance.
The user authenticates to the fromdb.
For more information, see Authentication to Source mongod Instance (page 164).
param string password Optional. The password on the fromhost for authentication. The method
does not transmit the password in plaintext.
For more information, see Authentication to Source mongod Instance (page 164).
param string mechanism Optional. The mechanism to authenticate the username and
password on the fromhost. Specify either MONGODB-CR or SCRAM-SHA-1.
db.copyDatabase() (page 163) defaults to SCRAM-SHA-1 if the wire protocol version
(maxWireVersion (page 409)) is greater than or equal to 3 (i.e. MongoDB version 3.0
or greater); otherwise, it defaults to MONGODB-CR.
Specify MONGODB-CR to authenticate to a version 2.6.x fromhost from a version 3.0 or
greater instance. For an example, see Copy Database from a mongod Instance that Enforces
Authentication (page 166).
New in version 3.0.
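The default-mechanism rule reduces to a single comparison on the wire protocol version; the following plain JavaScript sketch is illustrative only (defaultMechanism is a hypothetical helper, not a shell built-in):

```javascript
// Default authentication mechanism for db.copyDatabase(), per the rule above:
// SCRAM-SHA-1 when the source reports maxWireVersion >= 3 (MongoDB 3.0+),
// otherwise MONGODB-CR (e.g. a MongoDB 2.6.x source).
function defaultMechanism(maxWireVersion) {
  return maxWireVersion >= 3 ? "SCRAM-SHA-1" : "MONGODB-CR";
}

console.log(defaultMechanism(3)); // SCRAM-SHA-1
console.log(defaultMechanism(2)); // MONGODB-CR
```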
Behavior
Destination
Run db.copyDatabase() (page 163) in the admin database of the destination mongod (page 762) instance, i.e. the instance receiving the copied data.
db.copyDatabase() (page 163) creates the target database if it does not exist.
db.copyDatabase() (page 163) requires enough free disk space on the host instance for the copied
database. Use the db.stats() (page 198) operation to check the size of the database on the source mongod
(page 762) instance.
Authentication to Source mongod Instance
If copying from another mongod (page 762) instance (fromhost) that enforces access control
(page 902), then you must authenticate to the fromhost instance by specifying the username, password,
and optionally mechanism. The method does not transmit the password in plaintext.
When authenticating to the fromhost instance, db.copyDatabase() (page 163) uses the fromdb as the
authentication database for the specified user.
When authenticating to the fromhost instance, db.copyDatabase() (page 163) supports MONGODB-CR and SCRAM-SHA-1 mechanisms to authenticate the fromhost user.
To authenticate to a version 2.6 fromhost, you must specify MONGODB-CR as the authentication mechanism. See Copy Database from a mongod Instance that Enforces Authentication (page 166).
To copy from a version 3.0 fromhost to a version 2.6 instance, i.e. if running the method from a version
2.6 instance to copy from a version 3.0 fromhost, you can only authenticate to the fromhost as a
MONGODB-CR user.
For more information on required access and authentication, see Required Access (page 164).
Concurrency
db.copyDatabase() (page 163) and clone (page 441) do not produce point-in-time snapshots of the
source database. Write traffic to the source or destination database during the copy process will result in divergent data sets.
db.copyDatabase() (page 163) does not lock the destination server during its operation, so the copy will
occasionally yield to allow other operations to complete.
Sharded Clusters
Do not use db.copyDatabase() (page 163) from a mongos (page 784) instance.
Do not use db.copyDatabase() (page 163) to copy databases that contain sharded collections.
Required Access Changed in version 2.6.
Source Database (fromdb) If the mongod (page 762) instance of the source database (fromdb) enforces
access control (page 902), you must have proper authorization for the source database.
If copying from another mongod (page 762) instance (fromhost) that enforces access control (page 902),
then you must authenticate to the fromhost instance by specifying the username, password, and optionally
mechanism. The method does not transmit the password in plaintext.
When authenticating to the fromhost instance, db.copyDatabase() (page 163) uses the fromdb as the authentication database for the specified user.
When authenticating to the fromhost instance, db.copyDatabase() (page 163) supports MONGODB-CR and
SCRAM-SHA-1 mechanisms to authenticate the fromhost user.
To authenticate to a version 2.6 fromhost, you must specify MONGODB-CR as the authentication mechanism.
See Copy Database from a mongod Instance that Enforces Authentication (page 166).
To copy from a version 3.0 fromhost to a version 2.6 instance, i.e. if running the method from a version 2.6
instance to copy from a version 3.0 fromhost, you can only authenticate to the fromhost as a MONGODB-CR
user.
Source is non-admin Database Changed in version 3.0.
If the source database is a non-admin database, you must have privileges that specify find, listCollections,
and listIndexes actions on the source database, and find action on the system.js collection in the source
database.
For example:

{ resource: { db: "mySourceDB", collection: "" }, actions: [ "find", "listCollections", "listIndexes" ] },
{ resource: { db: "mySourceDB", collection: "system.js" }, actions: [ "find" ] }

Source is admin Database If the source database is the admin database, you must have privileges that
specify find, listCollections, and listIndexes actions on the admin database, and find action on
the system.js, system.users, system.roles, and system.version collections in the admin database.
For example:

{ resource: { db: "admin", collection: "" }, actions: [ "find", "listCollections", "listIndexes" ] },
{ resource: { db: "admin", collection: "system.js" }, actions: [ "find" ] },
{ resource: { db: "admin", collection: "system.users" }, actions: [ "find" ] },
{ resource: { db: "admin", collection: "system.roles" }, actions: [ "find" ] },
{ resource: { db: "admin", collection: "system.version" }, actions: [ "find" ] }
Target Database (todb) If the mongod (page 762) instance of the target database (todb) enforces access
control (page 902), you must have proper authorization for the target database.
Copy from non-admin Database If the source database is not the admin database, you must have privileges
that specify insert and createIndex actions on the target database, and insert action on the system.js
collection in the target database. For example:
{ resource: { db: "myTargetDB", collection: "" }, actions: [ "insert", "createIndex" ] },
{ resource: { db: "myTargetDB", collection: "system.js" }, actions: [ "insert" ] }
Copy from admin Database If the source database is the admin database, you must have privileges that
specify insert and createIndex actions on the target database, and insert action on the system.js,
system.users, system.roles, and system.version collections in the target database. For example:
{ resource: { db: "myTargetDB", collection: "" }, actions: [ "insert", "createIndex" ] },
{ resource: { db: "myTargetDB", collection: "system.js" }, actions: [ "insert" ] },
{ resource: { db: "myTargetDB", collection: "system.users" }, actions: [ "insert" ] },
{ resource: { db: "myTargetDB", collection: "system.roles" }, actions: [ "insert" ] },
{ resource: { db: "myTargetDB", collection: "system.version" }, actions: [ "insert" ] }
Example
Copy from the Same mongod Instance To copy within the same mongod (page 762) instance, omit the
fromhost.
The following operation copies a database named records into a database named archive_records:
db.copyDatabase('records', 'archive_records')
Copy Database from a mongod Instance that Enforces Authentication If copying from another mongod
(page 762) instance (fromhost) that enforces access control (page 902), then you must authenticate to the
fromhost instance by specifying the username, password, and optionally mechanism. The method does not
transmit the password in plaintext.
When authenticating to the fromhost instance, db.copyDatabase() (page 163) uses the fromdb as the authentication database for the specified user.
Changed in version 3.0: MongoDB 3.0 supports passing the authentication mechanism to use for the fromhost.
The following operation copies a database named reporting from a version 2.6 mongod (page 762) instance that
runs on example.net and enforces access control.
db.copyDatabase(
"reporting",
"reporting_copy",
"example.net",
"reportUser",
"abc123",
"MONGODB-CR"
)
See also:
clone (page 441)
db.createCollection()
On this page
Definition (page 166)
Examples (page 169)
Definition
db.createCollection(name, options)
Creates a new collection explicitly.
Because MongoDB creates a collection implicitly when the collection is first referenced in a command,
this method is used primarily for creating new collections that use specific options. For example, you use
db.createCollection() (page 166) to create a capped collection, or to create a new collection that uses
document validation. db.createCollection() (page 166) is also used to pre-allocate space for
an ordinary collection.
The db.createCollection() (page 166) method has the following prototype form:
Changed in version 3.2.

db.createCollection( <name>,
   {
     capped: <boolean>,
     autoIndexId: <boolean>,
     size: <number>,
     max: <number>,
     storageEngine: <document>,
     validator: <document>,
     validationLevel: <string>,
     validationAction: <string>,
     indexOptionDefaults: <document>
   }
)
Storage engine configuration specified when creating collections is validated and logged to the
oplog during replication to support replica sets with members that use different storage engines.
field document validator Optional. Allows users to specify validation rules or expressions for the
collection. For more information, see
https://docs.mongodb.org/manual/core/document-validation.
New in version 3.2.
The validator option takes a document that specifies the validation rules or expressions. You
can specify the expressions using the same operators as the query operators (page 519) with the
exception of $geoNear, $near (page 557), $nearSphere (page 559), $text (page 541),
and $where (page 550).
Note:
Validation occurs during updates and inserts. Existing documents do not undergo validation
checks until modification.
You cannot specify a validator for collections in the admin, local, and config
databases.
You cannot specify a validator for system.* collections.
field string validationLevel Optional. Determines how strictly MongoDB applies the validation
rules to existing documents during an update.
New in version 3.2.
validationLevel - Description
"off" - No validation for inserts or updates.
"strict" - Default. Apply validation rules to all inserts and all updates.
"moderate" - Apply validation rules to inserts and to updates on existing valid documents.
Do not apply rules to updates on existing invalid documents.
field string validationAction Optional. Determines whether to error on invalid documents or just
warn about the violations but allow invalid documents to be inserted.
New in version 3.2.
Important: Validation of documents only applies to those documents as determined by the
validationLevel.
validationAction - Description
"error" - Default. Documents must pass validation before the write occurs.
Otherwise, the write operation fails.
"warn" - Documents do not have to pass validation. If the document fails
validation, the write operation logs the validation failure.
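How validationLevel and validationAction combine can be sketched in plain JavaScript (an illustrative model, not server code; the function and field names are made up for the sketch):

```javascript
// Validation gate: the level decides whether a document is checked at all,
// the action decides whether a failed check rejects the write or only warns.
function applyValidation(doc, isValid, level, action, wasValidBefore) {
  const mustCheck =
    level === "strict" || (level === "moderate" && wasValidBefore);
  if (!mustCheck || isValid(doc)) return { written: true, warned: false };
  if (action === "warn") return { written: true, warned: true };  // log only
  return { written: false, warned: false };                       // "error": reject
}

// Example rule: documents must have a string phone field.
const hasPhone = d => typeof d.phone === "string";

console.log(applyValidation({ name: "x" }, hasPhone, "strict", "error", true));
// { written: false, warned: false }
console.log(applyValidation({ name: "x" }, hasPhone, "strict", "warn", true));
// { written: true, warned: true }
```

With "moderate", a document that was already invalid before the update is never checked, so its update always succeeds regardless of the action.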
field document indexOptionDefaults Optional. Allows users to specify a default configuration for
indexes when creating a collection.
The indexOptionDefaults option accepts a storageEngine document, which should
take the following form:
{ <storage-engine-name>: <options> }
Storage engine configuration specified when creating indexes is validated and logged to the
oplog during replication to support replica sets with members that use different storage engines.
New in version 3.2.
db.createCollection() (page 166) is a wrapper around the database command create (page 438).
Examples
Create a Capped Collection Capped collections have maximum size or document counts that prevent them from
growing beyond maximum thresholds. All capped collections must specify a maximum size and may also specify
a maximum document count. MongoDB removes older documents if a collection reaches the maximum size limit
before it reaches the maximum document count. Consider the following example:
db.createCollection("log", { capped : true, size : 5242880, max : 5000 } )
This command creates a collection named log with a maximum size of 5 megabytes and a maximum of 5000 documents.
The following command simply pre-allocates a 2-gigabyte, uncapped collection named people:
db.createCollection("people", { size: 2147483648 } )
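The capped-collection eviction described above can be sketched in plain JavaScript (a toy model, not the storage engine; the byte sizes and caps are illustrative):

```javascript
// Toy capped collection: inserts evict the oldest documents once either the
// byte-size cap or the maximum document count would be exceeded.
function cappedInsert(coll, doc, { size, max }) {
  coll.push(doc);
  while (coll.length > max ||
         coll.reduce((n, d) => n + d.bytes, 0) > size) {
    coll.shift(); // remove the oldest document first
  }
}

const log = [];
for (let i = 1; i <= 7; i++) {
  cappedInsert(log, { _id: i, bytes: 100 }, { size: 500, max: 5 });
}
console.log(log.map(d => d._id)); // [ 3, 4, 5, 6, 7 ]
```

With 100-byte documents, the 500-byte size cap is reached before the count cap matters, so the two oldest documents are gone after seven inserts.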
Create a Collection with Document Validation The following example creates a contacts collection with a
validator that requires documents to match at least one of the specified conditions:

db.createCollection( "contacts",
   {
      validator: { $or:
         [
            { phone: { $type: "string" } },
            { email: { $regex: /@mongodb\.com$/ } },
            { status: { $in: [ "Unknown", "Incomplete" ] } }
         ]
      }
   }
)

With the validator in place, the following insert operation fails validation:

db.contacts.insert( { name: "Amanda", status: "Updated" } )
Specify Storage Engine Options

db.createCollection(
   "users",
   { storageEngine: { wiredTiger: { configString: "<option>=<setting>" } } }
)

This operation creates a new collection named users with a specific configuration string that MongoDB will pass
to the wiredTiger storage engine. See the WiredTiger documentation of collection level options6 for specific
wiredTiger options.
db.currentOp()
On this page
Definition
db.currentOp()
Returns a document that contains information on in-progress operations for the database instance.
The db.currentOp() (page 170) method has the following form:
db.currentOp(<operations>)
The db.currentOp() (page 170) method can take the following optional argument:
param boolean or document operations Optional. Specifies the operations to report on. Can pass
either a boolean or a document.
6 http://source.wiredtiger.com/2.4.1/struct_w_t___s_e_s_s_i_o_n.html#a358ca4141d59c345f401c58501276bbb
Specify true to include operations on idle connections and system operations. Specify a document with query conditions to report only on operations that match the conditions. See Behavior
(page 171) for details.
Behavior If you pass in true to db.currentOp() (page 170), the method returns information on all operations,
including operations on idle connections and system operations.
db.currentOp(true)

db.currentOp(true) is equivalent to db.currentOp( { "$all": true } ).
If you pass a query document to db.currentOp() (page 170), the output returns information only for the current
operations that match the query. You can query on the Output Fields (page 173). See Examples (page 171).
You can also specify { $all: true } query document to return information on all in-progress operations,
including operations on idle connections and system operations. If the query document includes $all: true
as well as other query conditions, only the $all: true applies.
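The precedence rule can be sketched in plain JavaScript (an illustrative model of the filtering, not the server code; the operation documents are made up):

```javascript
// Filtering model: { $all: true } returns every operation, including idle
// ones, and any other conditions in the same query document are ignored.
function filterOps(ops, query) {
  if (query && query.$all === true) return ops; // $all wins outright
  return ops.filter(op =>
    op.active &&                                // default: active operations only
    Object.entries(query || {}).every(([k, v]) => op[k] === v));
}

const ops = [
  { opid: 1, active: true,  ns: "db1.users" },
  { opid: 2, active: false, ns: "db1.users" }  // idle connection
];

console.log(filterOps(ops, { ns: "db1.users" }).length);         // 1
console.log(filterOps(ops, { $all: true, ns: "nomatch" }).length); // 2
```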
Access Control On systems running with authorization (page 902), a user must have access that includes the
inprog action. For example, see create-role-to-manage-ops.
Examples The following examples use the db.currentOp() (page 170) method with various query documents
to filter the output.
Write Operations Waiting for a Lock The following example returns information on all write operations that are
waiting for a lock:
db.currentOp(
{
"waitingForLock" : true,
$or: [
{ "op" : { "$in" : [ "insert", "update", "remove" ] } },
{ "query.findandmodify": { $exists: true } }
]
}
)
Active Operations with no Yields The following example returns information on all active running operations that
have never yielded:
db.currentOp(
{
"active" : true,
"numYields" : 0,
"waitingForLock" : false
}
)
Active Operations on a Specific Database The following example returns information on all active operations for
database db1 that have been running longer than 3 seconds:
db.currentOp(
{
"active" : true,
"secs_running" : { "$gt" : 3 },
"ns" : /^db1\./
}
)
Active Indexing Operations The following example returns information on index creation operations:
db.currentOp(
{
$or: [
{ op: "query", "query.createIndexes": { $exists: true } },
{ op: "insert", ns: /\.system\.indexes\b/ }
]
}
)
"acquireCount": {
"r": <NumberLong>,
"w": <NumberLong>,
"R": <NumberLong>,
"W": <NumberLong>
},
"acquireWaitCount": {
"r": <NumberLong>,
"w": <NumberLong>,
"R": <NumberLong>,
"W": <NumberLong>
},
"timeAcquiringMicros" : {
"r" : NumberLong(0),
"w" : NumberLong(0),
"R" : NumberLong(0),
"W" : NumberLong(0)
},
"deadlockCount" : {
"r" : NumberLong(0),
"w" : NumberLong(0),
"R" : NumberLong(0),
"W" : NumberLong(0)
}
},
"MMAPV1Journal": {
...
},
"Database" : {
...
},
...
}
},
...
],
"fsyncLock": <boolean>,
"info": <string>
}
Output Fields
currentOp.desc
A description of the client. This string includes the connectionId (page 173).
currentOp.threadId
An identifier for the thread that handles the operation and its connection.
currentOp.connectionId
An identifier for the connection where the operation originated.
currentOp.opid
The identifier for the operation. You can pass this value to db.killOp() (page 191) in the mongo (page 794)
shell to terminate the operation.
Warning: Terminate running operations with extreme caution. Only use db.killOp() (page 191) to
terminate operations initiated by clients and do not terminate internal database operations.
currentOp.active
A boolean value specifying whether the operation has started. Value is true if the operation has started or
false if the operation is idle, such as an idle connection or an internal thread that is currently idle. An
operation can be active even if the operation has yielded to another operation.
Changed in version 3.0: For some inactive background threads, such as an inactive
signalProcessingThread, MongoDB suppresses various empty fields.
currentOp.secs_running
The duration of the operation in seconds. MongoDB calculates this value by subtracting the start time of
the operation from the current time.
Only appears if the operation is running; i.e. if active (page 173) is true.
currentOp.microsecs_running
New in version 2.6.
The duration of the operation in microseconds. MongoDB calculates this value by subtracting the start time
of the operation from the current time.
Only appears if the operation is running; i.e. if active (page 173) is true.
currentOp.op
A string that identifies the type of operation. The possible values are:
"none"
"update"
"insert"
"query"
"getmore"
"remove"
"killcursors"
"query" operations include read operations as well as most commands such as the createIndexes
(page 445) command and the findandmodify command.
Changed in version 3.0: Write operations that use the insert (page 336), update (page 339), and delete
(page 343) commands respectively display "insert", "update", and "delete" for op (page 174). Previous versions include these write commands under "query" operations.
currentOp.ns
The namespace the operation targets. A namespace consists of the database name and the collection name
concatenated with a dot (.); that is, "<database>.<collection>".
currentOp.insert
Contains the document to be inserted for operations with op (page 174) value of "insert". Only appears for
operations with op (page 174) value "insert".
Insert operations such as db.collection.insert() (page 78) that use the insert (page 336) command
will have op (page 174) value of "query".
currentOp.query
A document containing information on operations whose op (page 174) value is not "insert". For instance,
for a db.collection.find() (page 51) operation, the query (page 174) contains the query predicate.
query (page 174) does not appear for op (page 174) of "insert". query (page 174) can also be an empty
document.
For "update" (page 174) or "remove" (page 174) operations or for read operations categorized under
"query" (page 174), the query (page 174) document contains the query predicate for the operations.
Changed in version 3.0.4: For "getmore" (page 174) operations on cursors returned from a
db.collection.find() (page 51) or a db.collection.aggregate() (page 20), the query
(page 174) field contains respectively the query predicate or the issued aggregate (page 302) command
document. For details on the aggregate (page 302) command document, see the aggregate (page 302)
reference page.
For other commands categorized under "query" (page 174), query (page 174) contains the issued command
document. Refer to the specific command reference page for the details on the command document.
Changed in version 3.0: Previous versions categorized operations that used write commands under op
(page 174) of "query" and returned the write command information (e.g. query predicate, update statement,
and update options) in query (page 174) document.
currentOp.planSummary
A string that contains the query plan to help debug slow queries.
currentOp.client
The IP address (or hostname) and the ephemeral port of the client connection where the operation originates. If
your inprog array has operations from many different clients, use this string to relate operations to clients.
currentOp.locks
Changed in version 3.0.
The locks (page 175) document reports the type and mode of locks the operation currently holds. The possible
lock types are as follows:
Global represents global lock.
MMAPV1Journal represents MMAPv1 storage engine specific lock to synchronize journal writes; for
non-MMAPv1 storage engines, the mode for MMAPV1Journal is empty.
Database represents database lock.
Collection represents collection lock.
Metadata represents metadata lock.
oplog represents lock on the oplog.
The possible modes are as follows:
R represents Shared (S) lock.
W represents Exclusive (X) lock.
r represents Intent Shared (IS) lock.
w represents Intent Exclusive (IX) lock.
currentOp.waitingForLock
Returns a boolean value. waitingForLock (page 175) is true if the operation is waiting for a lock and
false if the operation has the required lock.
currentOp.msg
The msg (page 175) provides a message that describes the status and progress of the operation. In the case of
indexing or mapReduce operations, the field reports the completion percentage.
currentOp.progress
Reports on the progress of mapReduce or indexing operations. The progress (page 175) fields corresponds
to the completion percentage in the msg (page 175) field. The progress (page 175) specifies the following
information:
currentOp.progress.done
Reports the number completed.
currentOp.progress.total
Reports the total number.
currentOp.killPending
Returns true if the operation is currently flagged for termination. When the operation encounters its next safe
termination point, the operation will terminate.
currentOp.numYields
numYields (page 176) is a counter that reports the number of times the operation has yielded to allow other
operations to complete.
Typically, operations yield when they need access to data that MongoDB has not yet fully read into memory.
This allows other operations that have data in memory to complete quickly while MongoDB reads in data for
the yielding operation.
currentOp.fsyncLock
Specifies whether the database is currently locked for fsync write/snapshot (page 179) operations.
Only appears if locked; i.e. if fsyncLock (page 176) is true.
currentOp.info
Information regarding how to unlock the database from db.fsyncLock() (page 179). Only appears if
fsyncLock (page 176) is true.
currentOp.lockStats
For each lock type and mode (see currentOp.locks (page 175) for descriptions of lock types and modes),
returns the following information:
currentOp.lockStats.acquireCount
Number of times the operation acquired the lock in the specified mode.
currentOp.lockStats.acquireWaitCount
Number of times the operation had to wait for the acquireCount (page 176) lock acquisitions because
the locks were held in a conflicting mode. acquireWaitCount (page 176) is less than or equal to
acquireCount (page 176).
currentOp.lockStats.timeAcquiringMicros
Cumulative time in microseconds that the operation had to wait to acquire the locks.
timeAcquiringMicros (page 176) divided by acquireWaitCount (page 176) gives an approximate average wait time for the particular lock mode.
currentOp.lockStats.deadlockCount
Number of times the operation encountered deadlocks while waiting for lock acquisitions.
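As a worked example of the average-wait approximation described above (the figures are illustrative, not real server output):

```javascript
// Approximate mean wait per waited lock acquisition, for one lock mode:
// timeAcquiringMicros / acquireWaitCount.
const timeAcquiringMicros = 12000; // illustrative cumulative wait, in microseconds
const acquireWaitCount = 4;        // illustrative number of waited acquisitions

const avgWaitMicros = timeAcquiringMicros / acquireWaitCount;
console.log(avgWaitMicros); // 3000 microseconds per waited acquisition
```

Note that acquisitions that did not wait are excluded from the denominator, so this is the mean over contended acquisitions only.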
db.dropDatabase()
On this page
Definition (page 176)
Behavior (page 177)
Example (page 177)
Definition
db.dropDatabase()
Removes the current database, deleting the associated data files.
Behavior The db.dropDatabase() (page 176) wraps the dropDatabase (page 436) command.
Warning: This command obtains a global write lock and will block other operations until it has completed.
Changed in version 2.6: This command does not delete the users associated with the current database. To drop the
associated users, run the dropAllUsersFromDatabase (page 376) command in the database you are deleting.
Example The following example in the mongo (page 794) shell uses the use <database> operation to switch
the current database to the temp database and then uses the db.dropDatabase() (page 176) method to drop the
temp database:
use temp
db.dropDatabase()
See also:
dropDatabase (page 436)
db.eval()
On this page
Definition (page 177)
Behavior (page 178)
Examples (page 178)
Definition
db.eval(function, arguments)
Deprecated since version 3.0.
Provides the ability to run JavaScript code on the MongoDB server.
The helper db.eval() (page 177) in the mongo (page 794) shell wraps the eval (page 357) command.
Therefore, the helper method shares the characteristics and behavior of the underlying command, with one exception: the db.eval() (page 177) method does not support the nolock option.
The method accepts the following parameters:
param function function A JavaScript function to execute.
param list arguments Optional. A list of arguments to pass to the JavaScript function. Omit if the
function does not take arguments.
The JavaScript function need not take any arguments, as in the first example, or may optionally take arguments
as in the second:
function () {
// ...
}
function (arg1, arg2) {
// ...
}
Behavior
Write Lock By default, db.eval() (page 177) takes a global write lock while evaluating the JavaScript function.
As a result, db.eval() (page 177) blocks all other read and write operations to the database while the db.eval()
(page 177) operation runs.
To prevent the taking of the global write lock while evaluating the JavaScript code, use the eval (page 357) command
with nolock set to true. nolock does not impact whether the operations within the JavaScript code take write
locks.
For long-running db.eval() (page 177) operations, consider using either the eval (page 357) command with
nolock: true or other server-side code execution options.
Sharded Data You cannot use db.eval() (page 177) with sharded collections. In general, you should avoid
using db.eval() (page 177) in sharded clusters; nevertheless, it is possible to use db.eval() (page 177) with
non-sharded collections and databases stored in a sharded cluster.
Access Control Changed in version 2.6.
If authorization is enabled, you must have access to all actions on all resources in order to run eval (page 357).
Providing such access is not recommended, but if your organization requires a user to run eval (page 357), create a
role that grants anyAction on resource anyResource. Do not assign this role to any other user.
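One way to grant such access, sketched here with a hypothetical role name evalAdmin, is to create a role on the admin database that pairs anyAction with anyResource:

```javascript
// Hypothetical role granting the broad access required to run eval.
// Create it on the admin database; grant it to at most one user.
db.getSiblingDB("admin").createRole({
    role: "evalAdmin",                       // illustrative role name
    privileges: [
        { resource: { anyResource: true }, actions: [ "anyAction" ] }
    ],
    roles: []
})
```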
JavaScript Engine Changed in version 2.4.
The V8 JavaScript engine, which became the default in 2.4, allows multiple JavaScript operations to execute at the
same time. Prior to 2.4, db.eval() (page 177) executed in a single thread.
Examples The following is an example of the db.eval() (page 177) method:
db.eval( function(name, incAmount) {
var doc = db.myCollection.findOne( { name : name } );
doc = doc || { name : name , num : 0 , total : 0 , avg : 0 };
doc.num++;
doc.total += incAmount;
doc.avg = doc.total / doc.num;
db.myCollection.save( doc );
return doc;
},
"eliot", 5 );
If the JavaScript function references a variable that is not defined, db.eval() (page 177) returns an error. For example:
{
   "errmsg" : "exception: JavaScript execution failed: ReferenceError: x is not defined near '{ retur",
   "code" : 16722,
   "ok" : 0
}
See also:
https://docs.mongodb.org/manual/core/server-side-javascript
db.fsyncLock()
On this page
Definition (page 179)
Behavior (page 179)
Definition
db.fsyncLock()
Forces the mongod (page 762) to flush all pending write operations to the disk and locks the entire mongod
(page 762) instance to prevent additional writes until the user releases the lock with the db.fsyncUnlock()
(page 180) command. db.fsyncLock() (page 179) is an administrative command.
This command provides a simple wrapper around a fsync (page 450) database command with the following
syntax:
{ fsync: 1, lock: true }
This function locks the database and creates a window for backup operations.
Behavior
Compatibility with WiredTiger Changed in version 3.2: Starting in MongoDB 3.2, db.fsyncLock()
(page 179) can ensure that the data files do not change for MongoDB instances using either the MMAPv1 or the
WiredTiger storage engine, thus providing consistency for the purposes of creating backups.
In previous MongoDB versions, db.fsyncLock() (page 179) cannot guarantee a consistent set of files for low-level
backups (e.g. via file copy cp, scp, tar) for WiredTiger.
Impact on Read Operations db.fsyncLock() (page 179) may block reads, including those necessary to verify authentication. Such reads are necessary to establish new connections to a mongod (page 762) that enforces
authorization checks.
Connection When calling db.fsyncLock() (page 179), ensure that the connection is kept open to allow a subsequent call to db.fsyncUnlock() (page 180).
Closing the connection may make it difficult to release the lock.
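A minimal backup sequence, assuming the file-level copy step happens outside the shell, keeps one connection open for both calls:

```javascript
// Keep this one shell connection open for the whole sequence.
db.fsyncLock()      // equivalent to db.adminCommand({ fsync: 1, lock: true })
// ... perform the file-level backup (e.g. cp, scp, tar) from another process ...
db.fsyncUnlock()    // release the lock on the same connection
```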
db.fsyncUnlock()
On this page
Definition (page 180)
Wired Tiger Compatibility (page 180)
Definition
db.fsyncUnlock()
Unlocks a mongod (page 762) instance to allow writes and reverses the operation of a db.fsyncLock()
(page 179) operation. Typically you will use db.fsyncUnlock() (page 180) following a database backup
operation.
db.fsyncUnlock() (page 180) is an administrative operation.
Wired Tiger Compatibility Changed in version 3.2: Starting in MongoDB 3.2, db.fsyncLock() (page 179)
can ensure that the data files do not change for MongoDB instances using either the MMAPv1 or the WiredTiger
storage engine, thus providing consistency for the purposes of creating backups.
In previous MongoDB versions, db.fsyncLock() (page 179) cannot guarantee a consistent set of files for low-level
backups (e.g. via file copy cp, scp, tar) for WiredTiger.
db.getCollection()
On this page
Definition (page 180)
Behavior (page 180)
Example (page 180)
Definition
db.getCollection(name)
Returns a collection object that is functionally equivalent to using the db.<collectionName> syntax. The
method is useful for a collection whose name might interact with the shell itself, such as names that begin with
_ or that match a database shell method (page 160).
The db.getCollection() (page 180) method has the following parameter:
param string name The name of the collection.
Behavior The db.getCollection() (page 180) object can access any collection methods (page 19).
The collection specified may or may not exist on the server. If the collection does not exist, MongoDB creates it
implicitly as part of write operations like insertOne() (page 82).
Example The following example uses db.getCollection() (page 180) to access the auth collection and
insert a document into it.
db.getCollection("auth").insertOne(
{
usrName : "John Doe",
usrDept : "Sales",
usrTitle : "Executive Account Manager",
authLevel : 4,
authDept : [ "Sales", "Customers"]
}
)
This returns:
{
"acknowledged" : true,
"insertedId" : ObjectId("569525e144fe66d60b772763")
}
The previous example requires the use of db.getCollection("auth") (page 180) because of a name conflict
with the database method db.auth() (page 228). Calling db.auth directly to perform an insert operation would
reference the db.auth() (page 228) method and would error.
The following example attempts the same operation, but without using the db.getCollection() (page 180)
method:
db.auth.insertOne(
{
usrName : "John Doe",
usrDept : "Sales",
usrTitle : "Executive Account Manager",
authLevel : 4,
authDept : [ "Sales", "Customers"]
}
)
db.getCollectionInfos()
On this page
Definition (page 181)
Example (page 182)
Definition
db.getCollectionInfos()
New in version 3.0.0.
Returns an array of documents with collection information, i.e. collection name and options, for the current
database.
The db.getCollectionInfos() (page 181) helper wraps the listCollections (page 437) command.
Changed in version 3.2:
MongoDB 3.2 added support for document validation.
db.getCollectionInfos() (page 181) includes document validation information in the options
document.
db.getCollectionInfos() (page 181) does not return validationLevel and validationAction unless they are explicitly set.
Example The following returns information for all collections in the example database:
use example
db.getCollectionInfos()
[
{
"name" : "restaurants",
"options" : {
"validator" : {
"$and" : [
{
"name" : {
"$exists" : true
}
},
{
"restaurant_id" : {
"$exists" : true
}
}
]
},
"validationLevel" : "strict",
"validationAction" : "error"
}
},
{
"name" : "system.indexes",
"options" : {
}
}
]
To request collection information for a specific collection, specify the collection name when calling the method, as in
the following:
use example
db.getCollectionInfos( { name: "restaurants" } )
The method returns an array with a single document that details the collection information for the restaurants
collection in the example database.
[
{
"name" : "restaurants",
"options" : {
"validator" : {
"$and" : [
{
"name" : {
"$exists" : true
}
},
{
"restaurant_id" : {
"$exists" : true
}
}
]
},
"validationLevel" : "strict",
"validationAction" : "error"
}
}
]
db.getCollectionNames()
On this page
Definition (page 184)
Considerations (page 184)
Example (page 184)
Definition
db.getCollectionNames()
Returns an array containing the names of all collections in the current database.
Considerations Changed in version 3.0.0.
For MongoDB 3.0 deployments using the WiredTiger storage engine, if you run db.getCollectionNames()
(page 184) from a version of the mongo (page 794) shell before 3.0 or a version of the driver prior to 3.0 compatible version (page 1037), db.getCollectionNames() (page 184) will return no data, even if there are existing
collections. For more information, see WiredTiger and Driver Version Compatibility (page 1033).
Example The following returns the names of all collections in the records database:
use records
db.getCollectionNames()
db.getLastError()
On this page
Definition (page 184)
Behavior (page 185)
Example (page 185)
Definition
db.getLastError(<w>, <wtimeout>)
Specifies the level of write concern for confirming the success of the previous write operation issued over the same
connection and returns the error string (page 354) for that operation.
When using db.getLastError() (page 184), clients must issue the db.getLastError() (page 184)
on the same connection as the write operation they wish to confirm.
Changed in version 2.6: A new protocol for write operations (page 1081) integrates write concerns with the
write operations, eliminating the need for a separate db.getLastError() (page 184). Most write methods
(page 1087) now return the status of the write operation, including error information. In previous versions,
clients typically used the db.getLastError() (page 184) in combination with a write operation to verify
that the write succeeded.
The db.getLastError() (page 184) can accept the following parameters:
param int, string w Optional. The write concern's w value.
param int wtimeout Optional. The time limit in milliseconds.
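A sketch of the legacy pattern (the collection name products is illustrative): issue the write, then confirm it on the same connection:

```javascript
// Legacy (pre-2.6 style) pattern; "products" is a hypothetical collection.
db.products.insert({ item: "envelopes", qty: 100 })
db.getLastError()            // returns null if the write succeeded
db.getLastError("majority")  // confirm with write concern w: "majority"
```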
See also:
getLastError (page 354)
db.getLastErrorObj()
On this page
Definition (page 185)
Behavior (page 185)
Example (page 186)
Definition
db.getLastErrorObj()
Specifies the level of write concern for confirming the success of the previous write operation issued over the same
connection and returns the document (page 354) for that operation.
When using db.getLastErrorObj() (page 185), clients must issue the db.getLastErrorObj()
(page 185) on the same connection as the write operation they wish to confirm.
The db.getLastErrorObj() (page 185) is a mongo (page 794) shell wrapper around the
getLastError (page 354) command.
Changed in version 2.6: A new protocol for write operations (page 1081) integrates write concerns with the write
operations, eliminating the need for a separate db.getLastErrorObj() (page 185). Most write methods
(page 1087) now return the status of the write operation, including error information. In previous versions,
clients typically used the db.getLastErrorObj() (page 185) in combination with a write operation to
verify that the write succeeded.
The db.getLastErrorObj() (page 185) can accept the following parameters:
param int, string w Optional. The write concern's w value.
param int wtimeout Optional. The time limit in milliseconds.
Behavior The returned document (page 354) provides error information on the previous write operation.
If the db.getLastErrorObj() (page 185) method itself encounters an error, such as an incorrect write concern
value, the db.getLastErrorObj() (page 185) throws an exception.
For information on the returned document, see getLastError command (page 354).
2.1. mongo Shell Methods
Example The following example issues a db.getLastErrorObj() (page 185) operation that verifies that the
preceding write operation, issued over the same connection, has propagated to at least two members of the replica set.
db.getLastErrorObj(2)
See also:
https://docs.mongodb.org/manual/reference/write-concern.
db.getLogComponents()
On this page
Definition (page 186)
Output (page 186)
Definition
db.getLogComponents()
New in version 3.0.
Returns the current verbosity settings. The verbosity settings determine the amount of Log Messages (page 955)
that MongoDB produces for each log message component (page 956).
If a component inherits the verbosity level of its parent, db.getLogComponents() (page 186) displays -1
for the component's verbosity.
Output The db.getLogComponents() (page 186) returns a document with the verbosity settings. For example:
{
"verbosity" : 0,
"accessControl" : {
"verbosity" : -1
},
"command" : {
"verbosity" : -1
},
"control" : {
"verbosity" : -1
},
"geo" : {
"verbosity" : -1
},
"index" : {
"verbosity" : -1
},
"network" : {
"verbosity" : -1
},
"query" : {
"verbosity" : 2
},
"replication" : {
"verbosity" : -1
},
"sharding" : {
"verbosity" : -1
},
"storage" : {
"verbosity" : 2,
"journal" : {
"verbosity" : -1
}
},
"write" : {
"verbosity" : -1
}
}
To modify these settings, you can configure the systemLog.verbosity (page 889) and
systemLog.component.<name>.verbosity settings in the configuration file (page 887), set the
logComponentVerbosity (page 925) parameter using the setParameter (page 459) command, or use the
db.setLogLevel() (page 196) method. For examples, see Configure Log Verbosity Levels (page 957).
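For instance, either of the following (shown as a sketch) would raise the query component's verbosity to 2:

```javascript
// Via the setParameter command:
db.adminCommand({
    setParameter: 1,
    logComponentVerbosity: { query: { verbosity: 2 } }
})

// Or via the shell helper:
db.setLogLevel(2, "query")
```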
db.getMongo()
db.getMongo()
Returns The current database connection.
db.getMongo() (page 187) runs when the shell initiates. Use this command to test that the mongo
(page 794) shell has a connection to the proper database instance.
db.getName()
db.getName()
Returns the current database name.
db.getPrevError()
db.getPrevError()
Returns A status document, containing the errors.
Deprecated since version 1.6.
This output reports all errors since the last time the database received a resetError (page 356) (also
db.resetError() (page 195)) command.
This method provides a wrapper around the getPrevError (page 356) command.
db.getProfilingLevel()
db.getProfilingLevel()
This method provides a wrapper around the database command profile (page 483) and returns the current
profiling level.
Deprecated since version 1.8.4: Use db.getProfilingStatus() (page 188) for related functionality.
db.getProfilingStatus()
db.getProfilingStatus()
Returns The current profile (page 483) level and slowOpThresholdMs (page 913) setting.
db.getReplicationInfo()
On this page
Definition (page 188)
Output (page 188)
Definition
db.getReplicationInfo()
Returns A document with the status of the replica set, using data polled from the oplog. Use this
output when diagnosing issues with replication.
Output
db.getReplicationInfo.logSizeMB
Returns the total size of the oplog in megabytes. This refers to the total amount of space allocated to the oplog
rather than the current size of operations stored in the oplog.
db.getReplicationInfo.usedMB
Returns the total amount of space used by the oplog in megabytes. This refers to the total amount of space
currently used by operations stored in the oplog rather than the total amount of space allocated.
db.getReplicationInfo.errmsg
Returns an error message if there are no entries in the oplog.
db.getReplicationInfo.oplogMainRowCount
Only present when there are no entries in the oplog. Reports the number of items or rows in the oplog (e.g. 0).
db.getReplicationInfo.timeDiff
Returns the difference between the first and last operation in the oplog, represented in seconds.
Only present if there are entries in the oplog.
db.getReplicationInfo.timeDiffHours
Returns the difference between the first and last operation in the oplog, rounded and represented in hours.
Only present if there are entries in the oplog.
db.getReplicationInfo.tFirst
Returns a time stamp for the first (i.e. earliest) operation in the oplog. Compare this value to the last write
operation issued against the server.
Only present if there are entries in the oplog.
db.getReplicationInfo.tLast
Returns a time stamp for the last (i.e. latest) operation in the oplog. Compare this value to the last write operation
issued against the server.
Only present if there are entries in the oplog.
db.getReplicationInfo.now
Returns a time stamp reflecting the current time. The shell process generates this value, and the
datum may differ slightly from the server time if you're connecting from a remote host. Equivalent
to Date() (page 286).
Only present if there are entries in the oplog.
db.getSiblingDB()
On this page
Definition (page 189)
Example (page 189)
Definition
db.getSiblingDB(<database>)
param string database The name of a MongoDB database.
Returns A database object.
Used to return another database without modifying the db variable in the shell environment.
Example You can use db.getSiblingDB() (page 189) as an alternative to the use <database> helper. This
is particularly useful when writing scripts using the mongo (page 794) shell where the use helper is not available.
Consider the following sequence of operations:
db = db.getSiblingDB('users')
db.active.count()
This operation sets the db object to point to the database named users, and then returns a count (page 32) of the
collection named active. You can create multiple db objects that refer to different databases, as in the following
sequence of operations:
users = db.getSiblingDB('users')
records = db.getSiblingDB('records')
users.active.count()
users.active.findOne()
records.requests.count()
records.requests.findOne()
This operation creates two db objects referring to different databases (i.e. users and records) and then returns a
count (page 32) and an example document (page 61) from one collection in that database (i.e. active and requests
respectively.)
db.help()
db.help()
Returns Text output listing common methods on the db object.
db.hostInfo()
db.hostInfo()
New in version 2.2.
Returns A document with information about the underlying system that the mongod (page 762) or
mongos (page 784) runs on. Some of the returned fields are only included on some platforms.
db.hostInfo() (page 190) provides a helper in the mongo (page 794) shell around the hostInfo
(page 489) command. The output of db.hostInfo() (page 190) on a Linux system will resemble the following:
{
"system" : {
"currentTime" : ISODate("<timestamp>"),
"hostname" : "<hostname>",
"cpuAddrSize" : <number>,
"memSizeMB" : <number>,
"numCores" : <number>,
"cpuArch" : "<identifier>",
"numaEnabled" : <boolean>
},
"os" : {
"type" : "<string>",
"name" : "<string>",
"version" : "<string>"
},
"extra" : {
"versionString" : "<string>",
"libcVersion" : "<string>",
"kernelVersion" : "<string>",
"cpuFrequencyMHz" : "<string>",
"cpuFeatures" : "<string>",
"pageSize" : <number>,
"numPages" : <number>,
"maxOpenFiles" : <number>
},
"ok" : <return>
}
See hostInfo (page 489) for full documentation of the output of db.hostInfo() (page 190).
db.isMaster()
db.isMaster()
Returns A document that describes the role of the mongod (page 762) instance.
If the mongod (page 762) is a member of a replica set, then the ismaster (page 408) and secondary
(page 409) fields report if the instance is the primary or if it is a secondary member of the replica set.
See isMaster (page 408) for the complete documentation of the output of db.isMaster() (page 190).
db.killOp()
On this page
Description (page 191)
Description
db.killOp(opid)
Terminates an operation as specified by the operation ID. To find operations and their corresponding IDs, see
db.currentOp() (page 170).
The db.killOp() (page 191) method has the following parameter:
param number opid An operation ID.
Warning: Terminate running operations with extreme caution. Only use db.killOp() (page 191) to
terminate operations initiated by clients and do not terminate internal database operations.
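A typical sequence, sketched here, first locates the operation with db.currentOp() and then terminates it by ID; the filter on secs_running is an illustrative threshold:

```javascript
// Find long-running client operations and kill them by opid.
db.currentOp().inprog.forEach(function(op) {
    if (op.secs_running > 600 && op.op === "query") {   // illustrative threshold
        print("killing opid " + op.opid)
        db.killOp(op.opid)
    }
})
```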
db.listCommands()
db.listCommands()
Provides a list of all database commands. See the Database Commands (page 302) document for a more extensive index of these options.
db.loadServerScripts()
db.loadServerScripts()
db.loadServerScripts() (page 191) loads all scripts in the system.js collection for the current
database into the mongo (page 794) shell session.
Documents in the system.js collection have the following prototype form:
{ _id : "<name>" , value : <function> }
The documents in the system.js collection provide functions that your applications can use in any JavaScript
context with MongoDB in this database. These contexts include $where (page 550) clauses and mapReduce
(page 316) operations.
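For example, assuming a hypothetical stored function named echoFunction, the sequence below saves it to system.js and then makes it callable in the shell:

```javascript
// Store a server-side function; "echoFunction" is a hypothetical name.
db.system.js.save({
    _id: "echoFunction",
    value: function(x) { return x; }
})

db.loadServerScripts()   // load all system.js functions into this shell session
echoFunction(3)          // now callable directly; returns 3
```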
db.logout()
db.logout()
Ends the current authentication session. This function has no effect if the current session is not authenticated.
Note: If you're not logged in and using authentication, db.logout() (page 191) has no effect.
Changed in version 2.4: Because MongoDB now allows users defined in one database to have privileges on
another database, you must call db.logout() (page 191) while using the same database context that you
authenticated to.
If you authenticated to a database such as users or $external, you must issue db.logout() (page 191)
against this database in order to successfully log out.
Example
Use the use <database-name> helper in the interactive mongo (page 794) shell, or the following
db.getSiblingDB() (page 189) in the interactive shell or in mongo (page 794) shell scripts to change
the db object:
db = db.getSiblingDB('<database-name>')
When you have set the database context and db object, you can use the db.logout() (page 191) to log out
of database as in the following operation:
db.logout()
db.logout() (page 191) function provides a wrapper around the database command logout (page 370).
db.printCollectionStats()
db.printCollectionStats()
Provides a wrapper around the db.collection.stats() (page 106) method. Returns statistics from every
collection separated by three hyphen characters.
Note: The db.printCollectionStats() (page 192) in the mongo (page 794) shell does
not return JSON. Use db.printCollectionStats() (page 192) for manual inspection, and
db.collection.stats() (page 106) in scripts.
See also:
collStats (page 471)
db.printReplicationInfo()
On this page
Definition (page 192)
Output Example (page 193)
Output Fields (page 193)
Definition
db.printReplicationInfo()
Prints a formatted report of the replica set member's oplog. The displayed report formats the data returned by
db.getReplicationInfo() (page 188). 7
The output of db.printReplicationInfo() (page 192) is identical to that of rs.printReplicationInfo() (page 258).
Note: The db.printReplicationInfo() (page 192) in the mongo (page 794) shell does
not return JSON. Use db.printReplicationInfo() (page 192) for manual inspection, and
db.getReplicationInfo() (page 188) in scripts.
7 If run on a slave of a master-slave replication, the method calls db.printSlaveReplicationInfo() (page 194). See
db.printSlaveReplicationInfo() (page 194) for details.
Output Example The following example is a sample output from the db.printReplicationInfo()
(page 192) method run on the primary:
configured oplog size:   192MB
log length start to end: 65422secs (18.17hrs)
oplog first event time:  Mon Jun 23 2014 17:47:18 GMT-0400 (EDT)
oplog last event time:   Tue Jun 24 2014 11:57:40 GMT-0400 (EDT)
now:                     Thu Jun 26 2014 14:24:39 GMT-0400 (EDT)
Output Fields db.printReplicationInfo() (page 192) formats and prints the data returned by
db.getReplicationInfo() (page 188):
configured oplog size Displays the db.getReplicationInfo.logSizeMB (page 188) value.
log length start to end Displays the db.getReplicationInfo.timeDiff (page 188) and
db.getReplicationInfo.timeDiffHours (page 188) values.
db.printShardingStatus()
On this page
Definition (page 193)
Definition
db.printShardingStatus()
Prints a formatted report of the sharding configuration and the information regarding existing chunks in a
sharded cluster.
Only use db.printShardingStatus() (page 193) when connected to a mongos (page 784) instance.
The db.printShardingStatus() (page 193) method has the following parameter:
param boolean verbose Optional. If true, the method displays details of the document distribution across chunks when you have 20 or more chunks.
See sh.status() (page 278) for details of the output.
Note: The db.printShardingStatus() (page 193) in the mongo (page 794) shell does not
return JSON. Use db.printShardingStatus() (page 193) for manual inspection, and Config Database
(page 877) in scripts.
See also:
sh.status() (page 278)
db.printSlaveReplicationInfo()
On this page
Definition (page 194)
Output (page 194)
Definition
db.printSlaveReplicationInfo()
Returns a formatted report of the status of a replica set from the perspective of the secondary member of the set.
The output is identical to that of rs.printSlaveReplicationInfo() (page 259).
Output The following is example output from the db.printSlaveReplicationInfo() (page 194) method
issued on a replica set with two secondary members:
source: m1.example.net:27017
    syncedTo: Thu Apr 10 2014
    0 secs (0 hrs) behind the primary
source: m2.example.net:27017
    syncedTo: Thu Apr 10 2014
    0 secs (0 hrs) behind the primary
Note: The db.printSlaveReplicationInfo() (page 194) in the mongo (page 794) shell does not
return JSON. Use db.printSlaveReplicationInfo() (page 194) for manual inspection, and rs.status()
(page 261) in scripts.
A delayed member may show as 0 seconds behind the primary when the inactivity period on the primary is greater
than the members[n].slaveDelay value.
db.repairDatabase()
On this page
Behavior (page 194)
db.repairDatabase()
db.repairDatabase() (page 194) provides a wrapper around the database command repairDatabase
(page 461), and has the same effect as the run-time option mongod --repair option, limited to only the
current database. See repairDatabase (page 461) for full documentation.
Behavior
Warning: During normal operations, only use the repairDatabase (page 461) command and wrappers
including db.repairDatabase() (page 194) in the mongo (page 794) shell and mongod --repair, to
compact database files and/or reclaim disk space. Be aware that these operations remove and do not save any
corrupt data during the repair process.
If you are trying to repair a replica set member, and you have access to an intact copy of your data (e.g. a
recent backup or an intact member of the replica set), you should restore from that intact copy, and not use
repairDatabase (page 461).
When using journaling, there is almost never any need to run repairDatabase (page 461). In the event of an
unclean shutdown, the server will be able to restore the data files to a pristine state automatically.
Changed in version 2.6: The db.repairDatabase() (page 194) is now available for secondary as well as primary
members of replica sets.
db.resetError()
db.resetError()
Deprecated since version 1.6.
Resets the error message returned by db.getPrevError() (page 187) or getPrevError (page 356). Provides a wrapper around the resetError (page 356) command.
db.runCommand()
On this page
Definition (page 195)
Behavior (page 195)
Definition
db.runCommand(command)
Provides a helper to run specified database commands (page 302). This is the preferred method to issue database
commands, as it provides a consistent interface between the shell and drivers.
param document, string command A database command, specified either in document form or
as a string. If specified as a string, db.runCommand() (page 195) transforms the string into
a document.
New in version 2.6: To specify a time limit in milliseconds, see
https://docs.mongodb.org/manual/tutorial/terminate-running-operations.
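Both invocation forms below are equivalent; the shell transforms the string form into a document:

```javascript
// Document form:
db.runCommand({ isMaster: 1 })

// String form; the shell converts "isMaster" to { isMaster: 1 }:
db.runCommand("isMaster")
```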
Behavior db.runCommand() (page 195) runs the command in the context of the current database. Some commands are only applicable in the context of the admin database, and you must change your db object to the admin database before running these commands.
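To run an admin-only command without switching the shell's db variable, you can combine db.getSiblingDB() with db.runCommand(), as in this sketch:

```javascript
// listDatabases must run against the admin database.
db.getSiblingDB("admin").runCommand({ listDatabases: 1 })
```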
db.serverBuildInfo()
db.serverBuildInfo()
Provides a wrapper around the buildInfo (page 470) database command. buildInfo (page 470) returns a
document that contains an overview of parameters used to compile this mongod (page 762) instance.
db.serverCmdLineOpts()
db.serverCmdLineOpts()
Wraps the getCmdLineOpts (page 482) database command.
Returns a document that reports on the arguments and configuration options used to start the mongod (page 762)
or mongos (page 784) instance.
See Configuration File Options (page 887), mongod (page 761), and mongos (page 783) for additional information on available MongoDB runtime options.
db.serverStatus()
db.serverStatus()
Returns a document that provides an overview of the database process's state.
Changed in version 3.0: The server status output no longer includes the workingSet, indexCounters,
and recordStats sections.
This command provides a wrapper around the database command serverStatus (page 491).
Changed in version 2.4: In 2.4 you can dynamically suppress portions of the db.serverStatus()
(page 196) output, or include suppressed sections in a document passed to the db.serverStatus()
(page 196) method, as in the following example:
db.serverStatus( { repl: 0, locks: 0 } )
db.serverStatus( { metrics: 0, locks: 0 } )
serverStatus (page 491) includes all fields by default, except rangeDeleter (page 499) and some content in
the repl (page 496) document.
Note: You may only dynamically include top-level fields from the serverStatus (page 491) document that are
not included by default. You can exclude any field that serverStatus (page 491) includes by default.
See also:
serverStatus (page 491) for complete documentation of the output of this function. For an example of the output,
see https://docs.mongodb.org/manual/reference/server-status.
db.setLogLevel()
On this page
Definition (page 196)
Behavior (page 197)
Examples (page 197)
Definition
db.setLogLevel()
New in version 3.0.
Sets a single verbosity level for log messages (page 955).
db.setLogLevel() (page 196) has the following form:
db.setLogLevel(<level>, <component>)
param int level The log message verbosity level.
0 is MongoDB's default log verbosity level, to include Informational (page 956) messages.
1 to 5 increases the verbosity level to include Debug (page 956) messages.
To inherit the verbosity level of the component's parent, you can also specify -1.
param string component Optional. The name of the component for which to specify the log
verbosity level. The component name corresponds to the <name> from the corresponding
systemLog.component.<name>.verbosity setting:
accessControl (page 891)
command (page 891)
control (page 891)
geo (page 891)
index (page 892)
network (page 892)
query (page 892)
replication (page 892)
sharding (page 892)
storage (page 893)
storage.journal (page 893)
write (page 893)
Omit to specify the default verbosity level for all components.
Behavior db.setLogLevel() (page 196) sets a single verbosity level. To set multiple verbosity levels in a single
operation, use either the setParameter (page 459) command to set the logComponentVerbosity (page 925)
parameter. You can also specify the verbosity settings in the configuration file (page 887). See Configure Log Verbosity
Levels (page 957) for examples.
Examples
Set Default Verbosity Level Omit the <component> parameter to set the default verbosity for all components;
i.e. the systemLog.verbosity (page 889) setting. The operation sets the default verbosity to 1:
db.setLogLevel(1)
Set Verbosity Level for a Component Specify the <component> parameter to set the verbosity for the component. The following operation updates the systemLog.component.storage.journal.verbosity
(page 893) to 2:
db.setLogLevel(2, "storage.journal" )
db.setProfilingLevel()
On this page
Definition (page 198)
Definition
db.setProfilingLevel(level, slowms)
Modifies the current database profiler level used by the database profiling system to capture data about performance. The method provides a wrapper around the database command profile (page 483).
param integer level Specifies a profiling level, which is either 0 for no profiling, 1 for only slow
operations, or 2 for all operations.
param integer slowms Optional. Sets the threshold in milliseconds for the profile to consider a
query or operation to be slow.
The level chosen can affect performance. It also can allow the server to write the contents of queries to the log,
which might have information security implications for your deployment.
Configure the slowOpThresholdMs (page 913) option to set the threshold for the profiler to consider a query
slow. Specify this value in milliseconds to override the default, 100 ms.
mongod (page 762) writes the output of the database profiler to the system.profile collection.
mongod (page 762) prints information about queries that take longer than the slowOpThresholdMs
(page 913) to the log even when the database profiler is not active.
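For example, the following sketch profiles only slow operations, using an illustrative 200 ms threshold, and then verifies the setting:

```javascript
db.setProfilingLevel(1, 200)   // level 1: slow operations; 200 ms is illustrative
db.getProfilingStatus()        // e.g. { "was" : 1, "slowms" : 200 }
db.system.profile.find().sort({ ts: -1 }).limit(5)   // inspect recent profile entries
```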
db.shutdownServer()
db.shutdownServer()
Shuts down the current mongod (page 762) or mongos (page 784) process cleanly and safely.
This operation fails when the current database is not the admin database.
This command provides a wrapper around the shutdown (page 464).
db.stats()
On this page
Description (page 198)
Behavior (page 199)
Example (page 199)
Description
db.stats(scale)
Returns statistics that reflect the use state of a single database.
The db.stats() (page 198) method has the following parameter:
param number scale Optional. The scale at which to deliver results. Unless specified, this command returns all data in bytes.
Returns A document with statistics reflecting the database systems state. For an explanation of the
output, see dbStats (page 480).
The db.stats() (page 198) method is a wrapper around the dbStats (page 480) database command.
Behavior For MongoDB instances using the WiredTiger storage engine, after an unclean shutdown, statistics
on size and count may be off by up to 1000 documents as reported by collStats (page 472), dbStats (page 480),
and count (page 306). To restore the correct statistics for the collection, run validate (page 484) on the collection.
Example The following example converts the returned values to kilobytes:
db.stats(1024)
db.version()
db.version()
Returns The version of the mongod (page 762) or mongos (page 784) instance.
db.upgradeCheck()
On this page
Definition
db.upgradeCheck(<document>)
New in version 2.6.
Performs a preliminary check for upgrade preparedness to 2.6. The helper, available in the 2.6 mongo (page 794)
shell, can run connected to either a 2.4 or a 2.6 server.
The method checks for:
documents with index keys longer than the index key limit (page 1085),
documents with illegal field names (page 938),
collections without an _id index, and
indexes with invalid specifications, such as an index key with an empty or illegal field name.
The method can accept a document parameter that determines the scope of the check:
param document scope Optional. Document to limit the scope of the check to the specified collection in the database.
Omit to perform the check on all collections in the database.
The optional scope document has the following form:
{
collection: <string>
}
Additional 2.6 changes that affect compatibility with older versions require manual checks and intervention.
See Compatibility Changes in MongoDB 2.6 (page 1085) for details.
See also:
db.upgradeCheckAllDBs() (page 201)
Behavior db.upgradeCheck() (page 199) performs collection scans and has an impact on performance. To
mitigate the performance impact:
For sharded clusters, configure to read from secondaries and run the command on the mongos (page 784).
For replica sets, run the command on the secondary members.
db.upgradeCheck() (page 199) can miss new data during the check when run on a live system with active write
operations.
For index validation, db.upgradeCheck() (page 199) only supports the check of version 1 indexes and skips the
check of version 0 indexes.
The db.upgradeCheck() (page 199) checks all of the data stored in the mongod (page 762) instance: the time to
run db.upgradeCheck() (page 199) depends on the quantity of data stored by mongod (page 762).
Required Access On systems running with authorization (page 902), a user must have access that includes
the find action on all collections, including the system collections (page 884).
Example The following example connects to a secondary running on localhost and runs
db.upgradeCheck() (page 199) against the employees collection in the records database. Because
the output from the method can be quite large, the example pipes the output to a file.
./mongo --eval "db.getMongo().setSlaveOk(); db.upgradeCheck( { collection: 'employees' } )"
Error Output The upgrade check can return the following errors when it encounters incompatibilities in your data:
Index Key Exceed Limit
Document Error: key for index '<indexName>' (<indexSpec>) too long on document: <doc>
To resolve, remove the document. Ensure that the query to remove the document does not specify a condition on the
invalid field or fields.
Documents with Illegal Field Names
Document Error: document is no longer valid in 2.6 because <errmsg>: <doc>
To resolve, remove the document and re-insert with the appropriate corrections.
Index Specification Invalid
Index Error: invalid index spec for index '<indexName>': <indexSpec>
To resolve, remove the invalid index and recreate with a valid index specification.
Missing _id Index
Collection Error: lack of _id index on collection: <collectionName>
To resolve, create an _id index.
Warning Output
Warning: upgradeCheck only supports V1 indexes. Skipping index: <indexSpec>
To resolve, remove the invalid index and recreate the index omitting the version specification, or reindex the collection.
Reindex operations may be expensive for collections that have a large amount of data and/or a large number of indexes.
db.upgradeCheckAllDBs()
On this page
Definition
db.upgradeCheckAllDBs()
New in version 2.6.
Performs a preliminary check for upgrade preparedness to 2.6. The helper, available in the 2.6 mongo (page 794)
shell, can run connected to either a 2.4 or a 2.6 server in the admin database.
The method cycles through all the databases and checks for:
documents with index keys longer than the index key limit (page 1085),
documents with illegal field names (page 938),
collections without an _id index, and
indexes with invalid specifications, such as an index key with an empty or illegal field name.
Additional 2.6 changes that affect compatibility with older versions require manual checks and intervention.
See Compatibility Changes in MongoDB 2.6 (page 1085) for details.
See also:
db.upgradeCheck() (page 199)
2.1. mongo Shell Methods
Behavior db.upgradeCheckAllDBs() (page 201) performs collection scans and has an impact on performance. To mitigate the performance impact:
For sharded clusters, configure to read from secondaries and run the command on the mongos (page 784).
For replica sets, run the command on the secondary members.
db.upgradeCheckAllDBs() (page 201) can miss new data during the check when run on a live system with
active write operations.
For index validation, db.upgradeCheckAllDBs() (page 201) only supports the check of version 1 indexes and
skips the check of version 0 indexes.
The db.upgradeCheckAllDBs() (page 201) checks all of the data stored in the mongod (page 762) instance:
the time to run db.upgradeCheckAllDBs() (page 201) depends on the quantity of data stored by mongod
(page 762).
Required Access On systems running with authorization (page 902), a user must have access that includes
the listDatabases action on all databases and the find action on all collections, including the system collections
(page 884).
You must run the db.upgradeCheckAllDBs() (page 201) operation in the admin database.
Example The following example connects to a secondary running on localhost and runs
db.upgradeCheckAllDBs() (page 201) against the admin database. Because the output from the method can
be quite large, the example pipes the output to a file:
./mongo --eval "db.getMongo().setSlaveOk(); db.upgradeCheckAllDBs();" localhost/admin | tee /tmp/upgradecheck.log
Error Output The upgrade check can return the following errors when it encounters incompatibilities in your data:
Index Key Exceed Limit
Document Error: key for index '<indexName>' (<indexSpec>) too long on document: <doc>
To resolve, remove the document. Ensure that the query to remove the document does not specify a condition on the
invalid field or fields.
Documents with Illegal Field Names
Document Error: document is no longer valid in 2.6 because <errmsg>: <doc>
To resolve, remove the document and re-insert with the appropriate corrections.
Index Specification Invalid
Index Error: invalid index spec for index '<indexName>': <indexSpec>
To resolve, remove the invalid index and recreate with a valid index specification.
Missing _id Index
Collection Error: lack of _id index on collection: <collectionName>
To resolve, create an _id index.
Warning Output
Warning: upgradeCheck only supports V1 indexes. Skipping index: <indexSpec>
To resolve, remove the invalid index and recreate the index omitting the version specification, or reindex the collection.
Reindex operation may be expensive for collections that have a large amount of data and/or a large number of indexes.
db.collection.getPlanCache()
On this page
Definition (page 203)
Methods (page 203)
Definition
db.collection.getPlanCache()
Returns an interface to access the query plan cache for a collection. The interface provides methods to view and
clear the query plan cache.
Returns Interface to access the query plan cache.
The query optimizer only caches the plans for those query shapes that can have more than one viable plan.
Methods The following methods are available through the interface:
PlanCache.help() (page 204): Displays the methods available for a collection's query plan cache. Accessible through the plan cache object of a specific collection, i.e. db.collection.getPlanCache().help().
PlanCache.listQueryShapes() (page 204): Displays the query shapes for which cached query plans exist. Accessible through the plan cache object of a specific collection, i.e. db.collection.getPlanCache().listQueryShapes().
PlanCache.getPlansByQuery() (page 206): Displays the cached query plans for the specified query shape. Accessible through the plan cache object of a specific collection, i.e. db.collection.getPlanCache().getPlansByQuery().
PlanCache.clearPlansByQuery() (page 207): Clears the cached query plans for the specified query shape. Accessible through the plan cache object of a specific collection, i.e. db.collection.getPlanCache().clearPlansByQuery().
PlanCache.clear() (page 208): Clears all the cached query plans for a collection. Accessible through the plan cache object of a specific collection, i.e. db.collection.getPlanCache().clear().
PlanCache.help()
On this page
Definition (page 204)
Definition
PlanCache.help()
Displays the methods available to view and modify a collection's query plan cache.
The method is only available from the plan cache object (page 203) of a specific collection; i.e.
db.collection.getPlanCache().help()
See also:
db.collection.getPlanCache() (page 203)
PlanCache.listQueryShapes()
On this page
Definition (page 204)
Required Access (page 205)
Example (page 205)
Definition
PlanCache.listQueryShapes()
Displays the query shapes for which cached query plans exist.
The query optimizer only caches the plans for those query shapes that can have more than one viable plan.
The method is only available from the plan cache object (page 203) of a specific collection; i.e.
db.collection.getPlanCache().listQueryShapes()
For example, the following returns the query shapes that have cached plans for the orders collection:
db.orders.getPlanCache().listQueryShapes()
The method returns an array of the query shapes currently in the cache. In this example, the orders collection has
cached query plans associated with the following shapes:
[
{
"query" : { "qty" : { "$gt" : 10 } },
"sort" : { "ord_date" : 1 },
"projection" : { }
},
{
"query" : { "$or" :
[
{ "qty" : { "$gt" : 15 }, "item" : "xyz123" },
{ "status" : "A" }
]
},
"sort" : { },
"projection" : { }
},
{
"query" : { "$or" : [ { "qty" : { "$gt" : 15 } }, { "status" : "A" } ] },
"sort" : { },
"projection" : { }
}
]
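Two queries that differ only in their constants share a single shape. The idea can be sketched in plain JavaScript; the `shapeOf` helper below is hypothetical, and the server's actual shape computation is internal and more involved:

```javascript
// Illustrative sketch: build a cache key from the query's structure, sort,
// and projection. Leaf values are replaced with "?" so that, for example,
// { qty: { $gt: 10 } } and { qty: { $gt: 99 } } map to the same shape.
function shapeOf(query, projection, sort) {
  function normalize(node) {
    if (node === null || typeof node !== "object") {
      return "?";                      // constant: irrelevant to the shape
    }
    if (Array.isArray(node)) {
      return node.map(normalize);      // e.g. the operands of an $or
    }
    var out = {};
    Object.keys(node).sort().forEach(function (k) {
      out[k] = normalize(node[k]);     // keep field names and operators
    });
    return out;
  }
  return JSON.stringify({
    query: normalize(query),
    sort: sort || {},
    projection: projection || {}
  });
}

var a = shapeOf({ qty: { $gt: 10 } }, {}, { ord_date: 1 });
var b = shapeOf({ qty: { $gt: 99 } }, {}, { ord_date: 1 });
var c = shapeOf({ status: "A" },      {}, { ord_date: 1 });
// a and b are equal (same shape, different constants); c is a different shape
```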
See also:
db.collection.getPlanCache() (page 203)
PlanCache.getPlansByQuery() (page 206)
PlanCache.help() (page 204)
planCacheListQueryShapes (page 366)
PlanCache.getPlansByQuery()
On this page
Definition (page 206)
Required Access (page 206)
Example (page 206)
Definition
PlanCache.getPlansByQuery(<query>, <projection>, <sort>)
Displays the cached query plans for the specified query shape.
The query optimizer only caches the plans for those query shapes that can have more than one viable plan.
The method is only available from the plan cache object (page 203) of a specific collection; i.e.
db.collection.getPlanCache().getPlansByQuery( <query>, <projection>, <sort> )
The following operation displays the query plan cached for the shape:
db.orders.getPlanCache().getPlansByQuery(
{ "qty" : { "$gt" : 10 } },
{ },
{ "ord_date" : 1 }
)
See also:
db.collection.getPlanCache() (page 203)
PlanCache.listQueryShapes() (page 204)
PlanCache.clearPlansByQuery()
On this page
Definition (page 207)
Required Access (page 207)
Example (page 207)
Definition
PlanCache.clearPlansByQuery(<query>, <projection>, <sort>)
Clears the cached query plans for the specified query shape.
The method is only available from the plan cache object (page 203) of a specific collection; i.e.
db.collection.getPlanCache().clearPlansByQuery( <query>, <projection>, <sort> )
The following operation removes the query plan cached for the shape:
db.orders.getPlanCache().clearPlansByQuery(
{ "qty" : { "$gt" : 10 } },
{ },
{ "ord_date" : 1 }
)
See also:
db.collection.getPlanCache() (page 203)
PlanCache.listQueryShapes() (page 204)
PlanCache.clear()
On this page
Definition (page 208)
Required Access (page 208)
Definition
PlanCache.clear()
Removes all cached query plans for a collection.
The method is only available from the plan cache object (page 203) of a specific collection; i.e.
db.collection.getPlanCache().clear()
Required Access On systems running with authorization (page 902), a user must have access that includes
the planCacheWrite action.
See also:
db.collection.getPlanCache() (page 203)
PlanCache.clearPlansByQuery() (page 207)
Bulk() (page 209): Bulk operations builder.
db.collection.initializeOrderedBulkOp() (page 211): Initializes a Bulk() (page 209) operations builder for an ordered list of operations.
db.collection.initializeUnorderedBulkOp() (page 212): Initializes a Bulk() (page 209) operations builder for an unordered list of operations.
Bulk.insert() (page 213): Adds an insert operation to a list of operations.
Bulk.find() (page 214): Specifies the query condition for an update or a remove operation.
Bulk.find.removeOne() (page 215): Adds a single document remove operation to a list of operations.
Bulk.find.remove() (page 216): Adds a multiple document remove operation to a list of operations.
Bulk.find.replaceOne() (page 216): Adds a single document replacement operation to a list of operations.
Bulk.find.updateOne() (page 217): Adds a single document update operation to a list of operations.
Bulk.find.update() (page 219): Adds a multi update operation to a list of operations.
Bulk.find.upsert() (page 220): Specifies upsert: true for an update operation.
Bulk.execute() (page 222): Executes a list of operations in bulk.
Bulk.getOperations() (page 225): Returns an array of write operations executed in the Bulk() (page 209) operations object.
Bulk.tojson() (page 226): Returns a JSON document that contains the number of operations and batches in the Bulk() (page 209) operations object.
Bulk.toString() (page 227): Returns the Bulk.tojson() (page 226) results as a string.
Bulk()
On this page
Description (page 209)
Ordered and Unordered Bulk Operations (page 209)
Methods (page 210)
Description
Bulk()
New in version 2.6.
Bulk operations builder used to construct a list of write operations to perform in bulk for a single collection.
To instantiate the builder, use either the db.collection.initializeOrderedBulkOp() (page 211)
or the db.collection.initializeUnorderedBulkOp() (page 212) method.
Ordered and Unordered Bulk Operations The builder can construct the list of operations as ordered or unordered.
Ordered Operations With an ordered operations list, MongoDB executes the write operations in the list serially.
If an error occurs during the processing of one of the write operations, MongoDB will return without processing any
remaining write operations in the list.
Use db.collection.initializeOrderedBulkOp() (page 211) to create a builder for an ordered list of
write commands.
When executing an ordered (page 211) list of operations, MongoDB groups the operations by the operation
type (page 226) and contiguity; i.e. contiguous operations of the same type are grouped together. For example, if an
ordered list has two insert operations followed by an update operation followed by another insert operation, MongoDB
groups the operations into three separate groups: first group contains the two insert operations, second group contains
the update operation, and the third group contains the last insert operation. This behavior is subject to change in future
versions.
Each group of operations can have at most 1000 operations (page 937). If a group exceeds this limit
(page 937), MongoDB will divide the group into smaller groups of 1000 or less. For example, if the bulk operations list consists of 2000 insert operations, MongoDB creates 2 groups, each with 1000 operations.
The sizes and grouping mechanics are internal performance details and are subject to change in future versions.
To see how the operations are grouped for a bulk operation execution, call Bulk.getOperations() (page 225)
after the execution.
Executing an ordered (page 211) list of operations on a sharded collection will generally be slower than executing
an unordered (page 212) list since with an ordered list, each operation must wait for the previous operation to finish.
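The grouping rule above can be sketched in plain JavaScript. This is only an illustration of the documented behavior, not the server's implementation; `groupOps` is a hypothetical helper, and the 1000-operation limit is the documented maximum per group:

```javascript
// Group a list of write ops by contiguous runs of the same operation type,
// splitting any run that exceeds maxPerGroup operations.
function groupOps(ops, maxPerGroup) {
  var groups = [];
  var current = null;
  ops.forEach(function (op) {
    if (current === null || current.type !== op.type ||
        current.ops.length === maxPerGroup) {
      current = { type: op.type, ops: [] };   // start a new group
      groups.push(current);
    }
    current.ops.push(op);
  });
  return groups;
}

// Two inserts, an update, then another insert -> three groups.
var g1 = groupOps([
  { type: "insert" }, { type: "insert" },
  { type: "update" }, { type: "insert" }
], 1000);

// 2000 contiguous inserts -> two groups of 1000 each.
var many = [];
for (var i = 0; i < 2000; i++) many.push({ type: "insert" });
var g2 = groupOps(many, 1000);
```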
Unordered Operations With an unordered operations list, MongoDB can execute in parallel, as well as in a nondeterministic order, the write operations in the list. If an error occurs during the processing of one of the write operations,
MongoDB will continue to process remaining write operations in the list.
Use db.collection.initializeUnorderedBulkOp() (page 212) to create a builder for an unordered list
of write commands.
When executing an unordered (page 212) list of operations, MongoDB groups the operations. With an unordered
bulk operation, the operations in the list may be reordered to increase performance. As such, applications should not
depend on the ordering when performing unordered (page 212) bulk operations.
Each group of operations can have at most 1000 operations (page 937). If a group exceeds this limit
(page 937), MongoDB will divide the group into smaller groups of 1000 or less. For example, if the bulk operations list consists of 2000 insert operations, MongoDB creates 2 groups, each with 1000 operations.
The sizes and grouping mechanics are internal performance details and are subject to change in future versions.
To see how the operations are grouped for a bulk operation execution, call Bulk.getOperations() (page 225)
after the execution.
Methods The Bulk() (page 209) builder has the following methods:
Bulk.insert() (page 213): Adds an insert operation to a list of operations.
Bulk.find() (page 214): Specifies the query condition for an update or a remove operation.
Bulk.find.removeOne() (page 215): Adds a single document remove operation to a list of operations.
Bulk.find.remove() (page 216): Adds a multiple document remove operation to a list of operations.
Bulk.find.replaceOne() (page 216): Adds a single document replacement operation to a list of operations.
Bulk.find.updateOne() (page 217): Adds a single document update operation to a list of operations.
Bulk.find.update() (page 219): Adds a multi update operation to a list of operations.
Bulk.find.upsert() (page 220): Specifies upsert: true for an update operation.
Bulk.execute() (page 222): Executes a list of operations in bulk.
Bulk.getOperations() (page 225): Returns an array of write operations executed in the Bulk() (page 209) operations object.
Bulk.tojson() (page 226): Returns a JSON document that contains the number of operations and batches in the Bulk() (page 209) operations object.
Bulk.toString() (page 227): Returns the Bulk.tojson() (page 226) results as a string.
db.collection.initializeOrderedBulkOp()
On this page
Definition (page 211)
Behavior (page 211)
Examples (page 212)
Definition
db.collection.initializeOrderedBulkOp()
Initializes and returns a new Bulk() (page 209) operations builder for a collection. The builder constructs an
ordered list of write operations that MongoDB executes in bulk.
Returns new Bulk() (page 209) operations builder object.
Behavior
Order of Operation With an ordered operations list, MongoDB executes the write operations in the list serially.
Execution of Operations When executing an ordered (page 211) list of operations, MongoDB groups the operations by the operation type (page 226) and contiguity; i.e. contiguous operations of the same type are grouped
together. For example, if an ordered list has two insert operations followed by an update operation followed by another insert operation, MongoDB groups the operations into three separate groups: first group contains the two insert
operations, second group contains the update operation, and the third group contains the last insert operation. This
behavior is subject to change in future versions.
Each group of operations can have at most 1000 operations (page 937). If a group exceeds this limit
(page 937), MongoDB will divide the group into smaller groups of 1000 or less. For example, if the bulk operations list consists of 2000 insert operations, MongoDB creates 2 groups, each with 1000 operations.
The sizes and grouping mechanics are internal performance details and are subject to change in future versions.
To see how the operations are grouped for a bulk operation execution, call Bulk.getOperations() (page 225)
after the execution.
Executing an ordered (page 211) list of operations on a sharded collection will generally be slower than executing
an unordered (page 212) list since with an ordered list, each operation must wait for the previous operation to finish.
Error Handling If an error occurs during the processing of one of the write operations, MongoDB will return
without processing any remaining write operations in the list.
Examples The following initializes a Bulk() (page 209) operations builder on the users collection, adds a series
of write operations, and executes the operations:
var bulk = db.users.initializeOrderedBulkOp();
bulk.insert( { user: "abc123", status: "A", points: 0 } );
bulk.insert( { user: "ijk123", status: "A", points: 0 } );
bulk.insert( { user: "mop123", status: "P", points: 0 } );
bulk.find( { status: "D" } ).remove();
bulk.find( { status: "P" } ).update( { $set: { comment: "Pending" } } );
bulk.execute();
See also:
db.collection.initializeUnorderedBulkOp() (page 212)
Bulk.find() (page 214)
Bulk.find.removeOne() (page 215)
Bulk.execute() (page 222)
db.collection.initializeUnorderedBulkOp()
On this page
Definition (page 212)
Behavior (page 212)
Example (page 213)
Definition
db.collection.initializeUnorderedBulkOp()
New in version 2.6.
Initializes and returns a new Bulk() (page 209) operations builder for a collection. The builder constructs an
unordered list of write operations that MongoDB executes in bulk.
Behavior
Order of Operation With an unordered operations list, MongoDB can execute in parallel the write operations in the
list and in any order. If the order of operations matters, use db.collection.initializeOrderedBulkOp()
(page 211) instead.
Execution of Operations When executing an unordered (page 212) list of operations, MongoDB groups the
operations. With an unordered bulk operation, the operations in the list may be reordered to increase performance. As
such, applications should not depend on the ordering when performing unordered (page 212) bulk operations.
Each group of operations can have at most 1000 operations (page 937). If a group exceeds this limit
(page 937), MongoDB will divide the group into smaller groups of 1000 or less. For example, if the bulk operations list consists of 2000 insert operations, MongoDB creates 2 groups, each with 1000 operations.
The sizes and grouping mechanics are internal performance details and are subject to change in future versions.
To see how the operations are grouped for a bulk operation execution, call Bulk.getOperations() (page 225)
after the execution.
Error Handling If an error occurs during the processing of one of the write operations, MongoDB will continue to
process remaining write operations in the list.
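The contrast with ordered error handling can be sketched as follows. The `runBulk` helper is hypothetical and illustrates only the documented error-handling rule; it does not model server-side execution or the parallelism of unordered batches:

```javascript
// Apply a list of operations; with ordered === true, stop at the first
// failure, otherwise record the error and keep going.
function runBulk(ops, ordered) {
  var applied = [];
  var errors = [];
  for (var i = 0; i < ops.length; i++) {
    try {
      applied.push(ops[i]());
    } catch (e) {
      errors.push(e.message);
      if (ordered) break;        // ordered list: abandon the remainder
    }
  }
  return { applied: applied, errors: errors };
}

var ops = [
  function () { return "op1"; },
  function () { throw new Error("duplicate key"); },
  function () { return "op3"; }
];

var orderedRun = runBulk(ops, true);     // op3 never runs
var unorderedRun = runBulk(ops, false);  // op3 still runs
```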
Example The following initializes a Bulk() (page 209) operations builder and adds a series of insert operations to
add multiple documents:
var bulk = db.users.initializeUnorderedBulkOp();
bulk.insert( { user: "abc123", status: "A", points: 0 } );
bulk.insert( { user: "ijk123", status: "A", points: 0 } );
bulk.insert( { user: "mop123", status: "P", points: 0 } );
bulk.execute();
See also:
db.collection.initializeOrderedBulkOp() (page 211)
Bulk() (page 209)
Bulk.insert() (page 213)
Bulk.execute() (page 222)
Bulk.insert()
On this page
Description (page 213)
Example (page 214)
Description
Bulk.insert(<document>)
New in version 2.6.
Adds an insert operation to a bulk operations list.
Bulk.insert() (page 213) accepts the following parameter:
param document doc Document to insert. The size of the document must be less than or equal to
the maximum BSON document size (page 932).
Example The following initializes a Bulk() (page 209) operations builder for the items collection and adds a
series of insert operations to add multiple documents:
var bulk = db.items.initializeUnorderedBulkOp();
bulk.insert( { item: "abc123", defaultQty: 100, status: "A", points: 100 } );
bulk.insert( { item: "ijk123", defaultQty: 200, status: "A", points: 200 } );
bulk.insert( { item: "mop123", defaultQty: 0, status: "P", points: 0 } );
bulk.execute();
See also:
db.collection.initializeUnorderedBulkOp() (page 212)
db.collection.initializeOrderedBulkOp() (page 211)
Bulk.execute() (page 222)
Bulk.find()
On this page
Description (page 214)
Example (page 215)
Description
Bulk.find(<query>)
New in version 2.6.
Specifies a query condition for an update or a remove operation.
Bulk.find() (page 214) accepts the following parameter:
param document query Specifies a query condition using Query Selectors (page 519) to select documents for an update or a remove operation. To specify all documents, use an empty document
{}.
With update operations, the sum of the query document and the update document must be less
than or equal to the maximum BSON document size (page 932).
With remove operations, the query document must be less than or equal to the maximum BSON
document size (page 932).
Use Bulk.find() (page 214) with the following write operations:
Bulk.find.removeOne() (page 215)
Bulk.find.remove() (page 216)
Bulk.find.replaceOne() (page 216)
Bulk.find.updateOne() (page 217)
Bulk.find.update() (page 219)
Example The following example initializes a Bulk() (page 209) operations builder for the items collection and
adds a remove operation and an update operation to the list of operations. The remove operation and the update
operation use the Bulk.find() (page 214) method to specify a condition for their respective actions:
var bulk = db.items.initializeUnorderedBulkOp();
bulk.find( { status: "D" } ).remove();
bulk.find( { status: "P" } ).update( { $set: { points: 0 } } );
bulk.execute();
See also:
db.collection.initializeUnorderedBulkOp() (page 212)
db.collection.initializeOrderedBulkOp() (page 211)
Bulk.execute() (page 222)
Bulk.find.removeOne()
On this page
Description (page 215)
Example (page 215)
Description
Bulk.find.removeOne()
New in version 2.6.
Adds a single document remove operation to a bulk operations list. Use the Bulk.find() (page 214)
method to specify the condition that determines which document to remove. The Bulk.find.removeOne()
(page 215) limits the removal to one document. To remove multiple documents, see Bulk.find.remove()
(page 216).
Example The following example initializes a Bulk() (page 209) operations builder for the items collection and
adds two Bulk.find.removeOne() (page 215) operations to the list of operations.
Each remove operation removes just one document: one document with the status equal to "D" and another
document with the status equal to "P".
var bulk = db.items.initializeUnorderedBulkOp();
bulk.find( { status: "D" } ).removeOne();
bulk.find( { status: "P" } ).removeOne();
bulk.execute();
See also:
db.collection.initializeUnorderedBulkOp() (page 212)
db.collection.initializeOrderedBulkOp() (page 211)
Bulk.find() (page 214)
Bulk.find.remove() (page 216)
Bulk.execute() (page 222)
All Bulk Methods (page 210)
Bulk.find.remove()
On this page
Description (page 216)
Example (page 216)
Description
Bulk.find.remove()
New in version 2.6.
Adds a remove operation to a bulk operations list. Use the Bulk.find() (page 214) method to specify the condition that determines which documents to remove. The Bulk.find.remove() (page 216)
method removes all matching documents in the collection. To limit the remove to a single document, see
Bulk.find.removeOne() (page 215).
Example The following example initializes a Bulk() (page 209) operations builder for the items collection and
adds a remove operation to the list of operations. The remove operation removes all documents in the collection where
the status equals "D":
var bulk = db.items.initializeUnorderedBulkOp();
bulk.find( { status: "D" } ).remove();
bulk.execute();
See also:
db.collection.initializeUnorderedBulkOp() (page 212)
db.collection.initializeOrderedBulkOp() (page 211)
Bulk.find() (page 214)
Bulk.find.removeOne() (page 215)
Bulk.execute() (page 222)
Bulk.find.replaceOne()
On this page
Description (page 216)
Example (page 217)
Description
Bulk.find.replaceOne(<document>)
New in version 2.6.
Adds a single document replacement operation to a bulk operations list. Use the Bulk.find()
(page 214) method to specify the condition that determines which document to replace. The
Bulk.find.replaceOne() (page 216) method limits the replacement to a single document.
Bulk.find.replaceOne() (page 216) accepts the following parameter:
param document replacement A replacement document that completely replaces the existing document. Contains only field and value pairs.
The sum of the associated <query> document from the Bulk.find() (page 214) and the
replacement document must be less than or equal to the maximum BSON document size
(page 932).
To specify an upsert for this operation, see Bulk.find.upsert() (page 220).
Example The following example initializes a Bulk() (page 209) operations builder for the items collection, and
adds various replaceOne (page 216) operations to the list of operations.
var bulk = db.items.initializeUnorderedBulkOp();
bulk.find( { item: "abc123" } ).replaceOne( { item: "abc123", status: "P", points: 100 } );
bulk.execute();
See also:
db.collection.initializeUnorderedBulkOp() (page 212)
db.collection.initializeOrderedBulkOp() (page 211)
Bulk.find() (page 214)
Bulk.execute() (page 222)
All Bulk Methods (page 210)
Bulk.find.updateOne()
On this page
Description (page 217)
Behavior (page 218)
Example (page 218)
Description
Bulk.find.updateOne(<update>)
New in version 2.6.
Adds a single document update operation to a bulk operations list. The operation can either replace an existing
document or update specific fields in an existing document.
Use the Bulk.find() (page 214) method to specify the condition that determines which document to update.
The Bulk.find.updateOne() (page 217) method limits the update or replacement to a single document.
To update multiple documents, see Bulk.find.update() (page 219).
Bulk.find.updateOne() (page 217) accepts the following parameter:
param document update An update document that updates specific fields or a replacement document that completely replaces the existing document.
An update document only contains update operator (page 587) expressions. A replacement
document contains only field and value pairs.
The sum of the associated <query> document from the Bulk.find() (page 214) and the
update/replacement document must be less than or equal to the maximum BSON document
size.
To specify an upsert: true for this operation, see Bulk.find.upsert() (page 220).
Behavior
Update Specific Fields If the <update> document contains only update operator (page 587) expressions, as in:
{
$set: { status: "D" },
$inc: { points: 2 }
}
Then, Bulk.find.updateOne() (page 217) updates only the corresponding fields, status and points, in the
document.
Replace a Document If the <update> document contains only field:value expressions, as in:
{
item: "TBD",
points: 0,
inStock: true,
status: "I"
}
Then, Bulk.find.updateOne() (page 217) replaces the matching document with the <update> document
with the exception of the _id field. The Bulk.find.updateOne() (page 217) method does not replace the _id
value.
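The replacement behavior can be sketched with a hypothetical helper (illustrative only; the actual replacement is performed by the server):

```javascript
// Replace every field of a matched document with the fields of the
// replacement document, except _id, which is always preserved.
function replaceDocument(existing, replacement) {
  var result = { _id: existing._id };
  Object.keys(replacement).forEach(function (k) {
    if (k !== "_id") result[k] = replacement[k];
  });
  return result;
}

var doc = { _id: 1, item: null, points: 5 };
var replaced = replaceDocument(doc, { item: "TBD", points: 0, inStock: true, status: "I" });
// replaced keeps _id: 1; every other field comes from the replacement document
```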
Example The following example initializes a Bulk() (page 209) operations builder for the items collection, and
adds various updateOne (page 217) operations to the list of operations.
var bulk = db.items.initializeUnorderedBulkOp();
bulk.find( { status: "D" } ).updateOne( { $set: { status: "I", points: "0" } } );
bulk.find( { item: null } ).updateOne(
{
item: "TBD",
points: 0,
inStock: true,
status: "I"
}
);
bulk.execute();
See also:
db.collection.initializeUnorderedBulkOp() (page 212)
db.collection.initializeOrderedBulkOp() (page 211)
Bulk.find() (page 214)
Bulk.find.update() (page 219)
Bulk.execute() (page 222)
All Bulk Methods (page 210)
Bulk.find.update()
On this page
Description (page 219)
Example (page 219)
Description
Bulk.find.update(<update>)
New in version 2.6.
Adds a multi update operation to a bulk operations list. The method updates specific fields in existing documents.
Use the Bulk.find() (page 214) method to specify the condition that determines which documents to update. The Bulk.find.update() (page 219) method updates all matching documents. To specify a single
document update, see Bulk.find.updateOne() (page 217).
Bulk.find.update() (page 219) accepts the following parameter:
param document update Specifies the fields to update. Only contains update operator (page 587)
expressions.
The sum of the associated <query> document from the Bulk.find() (page 214) and
the update document must be less than or equal to the maximum BSON document size
(page 932).
To specify upsert: true for this operation, see Bulk.find.upsert() (page 220).
With
Bulk.find.upsert() (page 220), if no documents match the Bulk.find() (page 214) query condition, the update operation inserts only a single document.
Example The following example initializes a Bulk() (page 209) operations builder for the items collection, and
adds various multi update operations to the list of operations.
var bulk = db.items.initializeUnorderedBulkOp();
bulk.find( { status: "D" } ).update( { $set: { status: "I", points: "0" } } );
bulk.find( { item: null } ).update( { $set: { item: "TBD" } } );
bulk.execute();
See also:
db.collection.initializeUnorderedBulkOp() (page 212)
db.collection.initializeOrderedBulkOp() (page 211)
Bulk.find() (page 214)
Bulk.find.updateOne() (page 217)
Bulk.execute() (page 222)
All Bulk Methods (page 210)
Bulk.find.upsert()
On this page
Description (page 220)
Behavior (page 220)
Description
Bulk.find.upsert()
New in version 2.6.
Sets the upsert option to true for an update or a replacement operation and has the following syntax:
Bulk.find(<query>).upsert().update(<update>);
Bulk.find(<query>).upsert().updateOne(<update>);
Bulk.find(<query>).upsert().replaceOne(<replacement>);
With the upsert option set to true, if no matching documents exist for the Bulk.find() (page 214)
condition, then the update or the replacement operation performs an insert. If a matching document does exist,
then the update or replacement operation performs the specified update or replacement.
Use Bulk.find.upsert() (page 220) with the following write operations:
Bulk.find.replaceOne() (page 216)
Bulk.find.updateOne() (page 217)
Bulk.find.update() (page 219)
Behavior The following describe the insert behavior of various write operations when used in conjunction with
Bulk.find.upsert() (page 220).
Insert for Bulk.find.replaceOne() The Bulk.find.replaceOne() (page 216) method accepts, as its
parameter, a replacement document that only contains field and value pairs:
var bulk = db.items.initializeUnorderedBulkOp();
bulk.find( { item: "abc123" } ).upsert().replaceOne(
{
item: "abc123",
status: "P",
points: 100,
}
);
bulk.execute();
If the replacement operation with the Bulk.find.upsert() (page 220) option performs an insert, the inserted
document is the replacement document. If the replacement document does not specify an _id field, MongoDB adds
the _id field:
{
"_id" : ObjectId("52ded3b398ca567f5c97ac9e"),
"item" : "abc123",
"status" : "P",
"points" : 100
}
Insert for Bulk.find.updateOne() The Bulk.find.updateOne() (page 217) method accepts, as its
parameter, an <update> document that contains only field and value pairs or only update operator (page 587)
expressions.
Field and Value Pairs If the <update> document contains only field and value pairs:
var bulk = db.items.initializeUnorderedBulkOp();
bulk.find( { status: "P" } ).upsert().updateOne(
{
item: "TBD",
points: 0,
inStock: true,
status: "I"
}
);
bulk.execute();
Then, if the update operation with the Bulk.find.upsert() (page 220) option performs an insert, the inserted
document is the <update> document. If the update document does not specify an _id field, MongoDB adds the
_id field:
{
"_id" : ObjectId("52ded5a898ca567f5c97ac9f"),
"item" : "TBD",
"points" : 0,
"inStock" : true,
"status" : "I"
}
Update Operator Expressions If the <update> document contains only update operator (page 587) expressions,
then, if the update operation with the Bulk.find.upsert() (page 220) option performs an insert, the update
operation inserts a document with the fields and values from the <query> document of the Bulk.find() (page 214)
method and then applies the specified update from the <update> document:
{
"_id" : ObjectId("52ded68c98ca567f5c97aca0"),
"item" : null,
"status" : "P",
"defaultQty" : 0,
"inStock" : true,
"lastModified" : ISODate("2014-01-21T20:20:28.786Z"),
"points" : "0"
}
If neither the <query> document nor the <update> document specifies an _id field, MongoDB adds the _id
field.
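The insert behavior just described can be simulated in a few lines. The semantics below are an assumption matching the example output above (only $set, $setOnInsert, and $currentDate are handled), not server code:

```javascript
// Simulation of the upsert-insert behavior shown above. The semantics are an
// assumption matching the example output (only $set, $setOnInsert, and
// $currentDate are handled) -- this is not server code.
function upsertInsert(query, update) {
  const doc = Object.assign({}, query);   // fields and values from <query>
  for (const [op, args] of Object.entries(update)) {
    if (op === "$set" || op === "$setOnInsert") {
      Object.assign(doc, args);           // on insert, both operators set fields
    } else if (op === "$currentDate") {
      for (const field of Object.keys(args)) doc[field] = new Date();
    }
  }
  return doc;
}

const doc = upsertInsert(
  { status: "P", item: null },
  { $setOnInsert: { defaultQty: 0, inStock: true },
    $currentDate: { lastModified: true },
    $set: { points: "0" } }
);
console.log(doc.status, doc.defaultQty, doc.points); // P 0 0
```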
Insert for Bulk.find.update() When using upsert() (page 220) with the multiple document update
method Bulk.find.update() (page 219), if no documents match the query condition, the update operation inserts a single document.
The Bulk.find.update() (page 219) method accepts, as its parameter, an <update> document that contains
only update operator (page 587) expressions:
var bulk = db.items.initializeUnorderedBulkOp();
bulk.find( { status: "P" } ).upsert().update(
{
$setOnInsert: { defaultQty: 0, inStock: true },
$currentDate: { lastModified: true },
$set: { status: "I", points: "0" }
}
);
bulk.execute();
Then, if the update operation with the Bulk.find.upsert() (page 220) option performs an insert, the update
operation inserts a single document with the fields and values from the <query> document of the Bulk.find()
(page 214) method and then applies the specified update from the <update> document:
{
"_id": ObjectId("52ded81a98ca567f5c97aca1"),
"status": "I",
"defaultQty": 0,
"inStock": true,
"lastModified": ISODate("2014-01-21T20:27:06.691Z"),
"points": "0"
}
If neither the <query> document nor the <update> document specifies an _id field, MongoDB adds the _id
field.
See also:
db.collection.initializeUnorderedBulkOp() (page 212)
db.collection.initializeOrderedBulkOp() (page 211)
Bulk.find() (page 214)
Bulk.execute() (page 222)
All Bulk Methods (page 210)
Bulk.execute()
On this page
Description (page 222)
Behavior (page 223)
Examples (page 223)
Description
Bulk.execute()
New in version 2.6.
Executes the list of operations built by the Bulk() (page 209) operations builder.
Bulk.execute() (page 222) accepts the following parameter:
param document writeConcern Optional. Write concern document for the bulk operation as
a whole. Omit to use default. For a standalone mongod (page 762) server, the write concern
defaults to { w: 1 }. With a replica set, the default write concern is { w: 1 } unless
modified as part of the replica set configuration.
See Override Default Write Concern (page 224) for an example.
Returns A BulkWriteResult (page 290) object that contains the status of the operation.
After execution, you cannot re-execute the Bulk() (page 209) object without reinitializing.
See db.collection.initializeUnorderedBulkOp() (page 212) and
db.collection.initializeOrderedBulkOp() (page 211).
Behavior
Ordered Operations When executing an ordered (page 211) list of operations, MongoDB groups the operations
by the operation type (page 226) and contiguity; i.e. contiguous operations of the same type are grouped
together. For example, if an ordered list has two insert operations followed by an update operation followed by
another insert operation, MongoDB groups the operations into three separate groups: the first group contains the two
insert operations, the second group contains the update operation, and the third group contains the last insert operation.
This behavior is subject to change in future versions.
Each group of operations can have at most 1000 operations (page 937). If a group exceeds this limit
(page 937), MongoDB will divide the group into smaller groups of 1000 or less. For example, if the bulk operations list consists of 2000 insert operations, MongoDB creates 2 groups, each with 1000 operations.
The sizes and grouping mechanics are internal performance details and are subject to change in future versions.
To see how the operations are grouped for a bulk operation execution, call Bulk.getOperations() (page 225)
after the execution.
Executing an ordered (page 211) list of operations on a sharded collection will generally be slower than executing
an unordered (page 212) list since with an ordered list, each operation must wait for the previous operation to finish.
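The grouping rule above can be sketched as follows. This is an illustration of the documented behavior, not MongoDB's internal code:

```javascript
// Illustration of the documented grouping rule -- not MongoDB's internal code.
// Contiguous operations of the same type form one group, and a group is split
// once it reaches 1000 operations.
const MAX_GROUP_SIZE = 1000;

function groupOrderedOps(ops) {
  const groups = [];
  for (const op of ops) {
    const last = groups[groups.length - 1];
    if (!last || last.type !== op.type || last.ops.length === MAX_GROUP_SIZE) {
      groups.push({ type: op.type, ops: [op] });
    } else {
      last.ops.push(op);
    }
  }
  return groups;
}

// insert, insert, update, insert -> three groups, as in the example above:
const groups = groupOrderedOps([
  { type: "insert" }, { type: "insert" }, { type: "update" }, { type: "insert" }
]);
console.log(groups.map(g => g.type + ":" + g.ops.length)); // [ 'insert:2', 'update:1', 'insert:1' ]
```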
Unordered Operations When executing an unordered (page 212) list of operations, MongoDB groups the operations. With an unordered bulk operation, the operations in the list may be reordered to increase performance. As
such, applications should not depend on the ordering when performing unordered (page 212) bulk operations.
Each group of operations can have at most 1000 operations (page 937). If a group exceeds this limit
(page 937), MongoDB will divide the group into smaller groups of 1000 or less. For example, if the bulk operations list consists of 2000 insert operations, MongoDB creates 2 groups, each with 1000 operations.
The sizes and grouping mechanics are internal performance details and are subject to change in future versions.
To see how the operations are grouped for a bulk operation execution, call Bulk.getOperations() (page 225)
after the execution.
Examples
Execute Bulk Operations The following initializes a Bulk() (page 209) operations builder on the items collection, adds a series of insert operations, and executes the operations:
var bulk = db.items.initializeUnorderedBulkOp();
bulk.insert( { item: "abc123", status: "A", defaultQty: 500, points: 5 } );
bulk.insert( { item: "ijk123", status: "A", defaultQty: 100, points: 10 } );
bulk.execute( );
For details on the return object, see BulkWriteResult() (page 290). For details on the batches executed, see
Bulk.getOperations() (page 225).
Override Default Write Concern The following operation to a replica set specifies a write concern of "w:
majority" with a wtimeout of 5000 milliseconds such that the method returns after the writes propagate to a
majority of the voting replica set members or the method times out after 5 seconds.
Changed in version 3.0: In previous versions, majority referred to the majority of all members of the replica set.
var bulk = db.items.initializeUnorderedBulkOp();
bulk.insert( { item: "efg123", status: "A", defaultQty: 100, points: 0 } );
bulk.insert( { item: "xyz123", status: "A", defaultQty: 100, points: 0 } );
bulk.execute( { w: "majority", wtimeout: 5000 } );
See
Bulk() (page 209) for a listing of methods available for bulk operations.
Bulk.getOperations()
On this page
Example (page 225)
Returned Fields (page 226)
Bulk.getOperations()
New in version 2.6.
Returns an array of write operations executed through Bulk.execute() (page 222). The returned write
operations are in groups as determined by MongoDB for execution. For information on how MongoDB groups
the list of bulk write operations, see Bulk.execute() Behavior (page 223).
Only use Bulk.getOperations() (page 225) after a Bulk.execute() (page 222). Calling
Bulk.getOperations() (page 225) before you call Bulk.execute() (page 222) will result in an incomplete list.
Example The following initializes a Bulk() (page 209) operations builder on the items collection, adds a series
of write operations, executes the operations, and then calls getOperations() (page 225) on the bulk builder
object:
var bulk = db.items.initializeUnorderedBulkOp();
for (var i = 1; i <= 1500; i++) {
bulk.insert( { x: i } );
}
bulk.execute();
bulk.getOperations();
The getOperations() (page 225) method returns an array with the operations executed. The output shows that
MongoDB divided the operations into 2 groups, one with 1000 operations and one with 500. For information on how
MongoDB groups the list of bulk write operations, see Bulk.execute() Behavior (page 223)
Although the method returns all 1500 operations in the returned array, this page omits some of the results for brevity.
[
{
"originalZeroIndex" : 0,
"batchType" : 1,
"operations" : [
{ "_id" : ObjectId("53a8959f1990ca24d01c6165"), "x" : 1 },
... // Content omitted for brevity
{ "_id" : ObjectId("53a8959f1990ca24d01c654c"), "x" : 1000 }
]
},
{
"originalZeroIndex" : 1000,
"batchType" : 1,
"operations" : [
{ "_id" : ObjectId("53a8959f1990ca24d01c654d"), "x" : 1001 },
... // Content omitted for brevity
{ "_id" : ObjectId("53a8959f1990ca24d01c6740"), "x" : 1500 }
]
}
]
Returned Fields The array contains documents with the following fields:
originalZeroIndex
Specifies the order in which the operation was added to the bulk operations builder, based on a zero index; e.g.
the first operation added to the bulk operations builder will have an originalZeroIndex (page 226) value of 0.
batchType
Specifies the write operation's type.

batchType    Operation
1            Insert
2            Update
3            Remove
operations
Array of documents that contain the details of the operation.
See also:
Bulk() (page 209) and Bulk.execute() (page 222).
Bulk.tojson()
On this page
Example (page 226)
Bulk.tojson()
New in version 2.6.
Returns a JSON document that contains the number of operations and batches in the Bulk() (page 209) object.
Example The following initializes a Bulk() (page 209) operations builder on the items collection, adds a series
of write operations, and calls Bulk.tojson() (page 226) on the bulk builder object.
var bulk = db.items.initializeOrderedBulkOp();
bulk.insert( { item: "abc123", status: "A", defaultQty: 500, points: 5 } );
bulk.insert( { item: "ijk123", status: "A", defaultQty: 100, points: 10 } );
bulk.find( { status: "D" } ).removeOne();
bulk.tojson();
Bulk.toString()
On this page
Example (page 227)
Bulk.toString()
New in version 2.6.
Returns, as a string, a JSON document that contains the number of operations and batches in the Bulk()
(page 209) object.
Example The following initializes a Bulk() (page 209) operations builder on the items collection, adds a series
of write operations, and calls Bulk.toString() (page 227) on the bulk builder object.
var bulk = db.items.initializeOrderedBulkOp();
bulk.insert( { item: "abc123", status: "A", defaultQty: 500, points: 5 } );
bulk.insert( { item: "ijk123", status: "A", defaultQty: 100, points: 10 } );
bulk.find( { status: "D" } ).removeOne();
bulk.toString();
User Management Methods

Name                        Description
db.auth()                   Authenticates a user to a database.
db.createUser()             Creates a new user.
db.updateUser()             Updates user data.
db.changeUserPassword()     Changes an existing user's password.
db.removeUser()             Deprecated. Removes a user from a database.
db.dropAllUsers()           Deletes all users associated with a database.
db.dropUser()               Removes a single user.
db.grantRolesToUser()       Grants a role and its privileges to a user.
db.revokeRolesFromUser()    Removes a role from a user.
db.getUser()                Returns information about the specified user.
db.getUsers()               Returns information about all users associated with a database.
db.auth()
On this page
Definition (page 228)
Definition
db.auth()
Allows a user to authenticate to the database from within the shell.
The db.auth() (page 228) method can accept either:
the username and password.
db.auth( <username>, <password> )
a user document that contains the username and password, and optionally, the authentication mechanism
and a digest password flag.
db.auth( {
user: <username>,
pwd: <password>,
mechanism: <authentication mechanism>,
digestPassword: <boolean>
} )
param string username Specifies an existing username with access privileges for this database.
param string password Specifies the corresponding password.
param string mechanism Optional. Specifies the authentication mechanism (page 797) used. Defaults to either:
SCRAM-SHA-1 on new 3.0 installations and on 3.0 databases that have been upgraded from
2.6 with authSchemaUpgrade (page 1044); or
MONGODB-CR otherwise.
Changed in version 3.0: In previous versions, the default was MONGODB-CR.
For available mechanisms, see authentication mechanisms (page 797).
param boolean digestPassword Optional. Determines whether the server receives a digested or
undigested password. Set to false to specify an undigested password. For use with SASL/LDAP
authentication, since the server must forward an undigested password to saslauthd.
Alternatively, you can use mongo --username, --password, and --authenticationMechanism
to specify authentication credentials.
Note: The mongo (page 794) shell excludes all db.auth() (page 228) operations from the saved history.
Returns db.auth() (page 228) returns 0 when authentication is not successful, and 1 when the
operation is successful.
db.createUser()
On this page
Definition
db.createUser(user, writeConcern)
Creates a new user for the database where the method runs. db.createUser() (page 229) returns a duplicate
user error if the user already exists on the database.
The db.createUser() (page 229) method has the following syntax:
field document user The document with authentication and access information about the user to
create.
field document writeConcern Optional. The level of write concern for the creation operation. The writeConcern document takes the same fields as the getLastError (page 354)
command.
The user document defines the user and has the following form:
{ user: "<name>",
pwd: "<cleartext password>",
customData: { <any information> },
roles: [
{ role: "<role>", db: "<database>" } | "<role>",
...
]
}
To specify a role that exists in a different database, specify the role with a document.
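A role entry in the roles array can therefore take two shapes. The helper below is a hypothetical illustration (normalizeRole is not a MongoDB function) of how a bare role string resolves against the current database:

```javascript
// Hypothetical helper (normalizeRole is not a MongoDB function) showing how a
// roles array entry resolves: a bare string names a role on the current
// database, while a document can name a role on another database.
function normalizeRole(entry, currentDb) {
  if (typeof entry === "string") {
    return { role: entry, db: currentDb };
  }
  return { role: entry.role, db: entry.db };
}

console.log(normalizeRole("readWrite", "products"));
// { role: 'readWrite', db: 'products' }
console.log(normalizeRole({ role: "read", db: "stock" }, "products"));
// { role: 'read', db: 'stock' }
```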
The db.createUser() (page 229) method wraps the createUser (page 372) command.
Behavior
Encryption db.createUser() (page 229) sends the password to the MongoDB instance without encryption. To
encrypt the password during transmission, use TLS/SSL.
External Credentials Users created on the $external database should have credentials stored externally to MongoDB, as, for example, with MongoDB Enterprise installations that use Kerberos.
local Database You cannot create users on the local database.
Required Access
To create a new user in a database, you must have createUser action on that database resource.
To grant roles to a user, you must have the grantRole action on the roles database.
Built-in roles userAdmin and userAdminAnyDatabase provide createUser and grantRole actions on
their respective resources.
Examples The following db.createUser() (page 229) operation creates the accountAdmin01 user on the
products database.
use products
db.createUser( { "user" : "accountAdmin01",
"pwd": "cleartext password",
"customData" : { employeeId: 12345 },
"roles" : [ { role: "clusterAdmin", db: "admin" },
{ role: "readAnyDatabase", db: "admin" },
"readWrite"
] },
{ w: "majority" , wtimeout: 5000 } )
Create User Without Roles The following operation creates a user named reportsUser in the admin database
but does not yet assign roles:
use admin
db.createUser(
{
user: "reportsUser",
pwd: "password",
roles: [ ]
}
)
Create Administrative User with Roles The following operation creates a user named appAdmin in the admin
database and gives the user readWrite access to the config database, which lets the user change certain settings
for sharded clusters, such as to the balancer setting.
use admin
db.createUser(
{
user: "appAdmin",
pwd: "password",
roles:
[
{ role: "readWrite", db: "config" },
"clusterAdmin"
]
}
)
db.updateUser()
On this page
Definition
db.updateUser(username, update, writeConcern)
Updates the user's profile on the database on which you run the method. An update to a field completely
replaces the previous field's values. This includes updates to the user's roles array.
Warning: When you update the roles array, you completely replace the previous array's values. To
add or remove roles without replacing all of the user's existing roles, use the db.grantRolesToUser()
(page 236) or db.revokeRolesFromUser() (page 237) methods.
The db.updateUser() (page 231) method uses the following syntax:
db.updateUser(
"<username>",
{
customData : { <any information> },
roles : [
{ role: "<role>", db: "<database>" } | "<role>",
...
],
pwd: "<cleartext password>"
},
writeConcern: { <write concern> }
)
param document update A document containing the replacement data for the user. This data completely replaces the corresponding data for the user.
param document writeConcern Optional. The level of write concern for the update operation. The writeConcern document takes the same fields as the getLastError (page 354)
command.
The update document specifies the fields to update and their new values. All fields in the update document
are optional, but must include at least one field.
The update document has the following fields:
field document customData Optional. Any arbitrary information.
field array roles Optional. The roles granted to the user. An update to the roles array overrides
the previous array's values.
field string pwd Optional. The user's password.
In the roles field, you can specify both built-in roles and user-defined roles.
To specify a role that exists in the same database where db.updateUser() (page 231) runs, you can either
specify the role with the name of the role:
"readWrite"
To specify a role that exists in a different database, specify the role with a document.
The db.updateUser() (page 231) method wraps the updateUser (page 373) command.
Behavior db.updateUser() (page 231) sends the password to the MongoDB instance without encryption. To encrypt the password during transmission, use TLS/SSL.
Required Access You must have access that includes the revokeRole action on all databases in order to update a
user's roles array.
You must have the grantRole action on a role's database to add a role to a user.
To change another user's pwd or customData field, you must have the changeAnyPassword and
changeAnyCustomData actions, respectively, on that user's database.
To modify your own password and custom data, you must have privileges that grant changeOwnPassword and
changeOwnCustomData actions, respectively, on the user's database.
Example Given a user appClient01 in the products database with the following user info:
{
"_id" : "products.appClient01",
"user" : "appClient01",
"db" : "products",
"customData" : { "empID" : "12345", "badge" : "9156" },
"roles" : [
{ "role" : "readWrite",
"db" : "products"
},
{ "role" : "read",
"db" : "inventory"
}
]
}
The following db.updateUser() (page 231) method completely replaces the user's customData and roles
data:
use products
db.updateUser( "appClient01",
{
customData : { employeeId : "0x3039" },
roles : [
{ role : "read", db : "assets" }
]
}
)
The user appClient01 in the products database now has the following user information:
{
"_id" : "products.appClient01",
"user" : "appClient01",
"db" : "products",
"customData" : { "employeeId" : "0x3039" },
"roles" : [
{ "role" : "read",
"db" : "assets"
}
]
}
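The replacement semantics shown in this example can be sketched as follows; applyUserUpdate is an invented helper that mimics the documented behavior, where each supplied field wholly replaces the stored value:

```javascript
// Invented helper mirroring the documented replacement semantics: every field
// supplied in the update wholly replaces the stored value; omitted fields keep
// their previous values.
function applyUserUpdate(user, update) {
  return Object.assign({}, user, update);
}

const before = {
  user: "appClient01",
  customData: { empID: "12345", badge: "9156" },
  roles: [ { role: "readWrite", db: "products" },
           { role: "read", db: "inventory" } ]
};
const after = applyUserUpdate(before, {
  customData: { employeeId: "0x3039" },
  roles: [ { role: "read", db: "assets" } ]
});
console.log(after.roles.length);      // 1 -- the old two-role array is gone
console.log(after.customData.badge);  // undefined -- customData was replaced whole
```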
db.changeUserPassword()
On this page
Definition (page 233)
Required Access (page 234)
Example (page 234)
Definition
db.changeUserPassword(username, password)
Updates a user's password. Run the method in the database where the user is defined, i.e. the database where
you created (page 229) the user.
param string username Specifies an existing username with access privileges for this database.
param string password Specifies the corresponding password.
param string mechanism Optional. Specifies the authentication mechanism (page 797) used. Defaults to either:
SCRAM-SHA-1 on new 3.0 installations and on 3.0 databases that have been upgraded from
2.6 with authSchemaUpgrade (page 1044); or
MONGODB-CR otherwise.
db.removeUser()
On this page
Definition (page 234)
Deprecated since version 2.6: Use db.dropUser() (page 235) instead of db.removeUser() (page 234)
Definition
db.removeUser(username)
Removes the specified username from the database.
The db.removeUser() (page 234) method has the following parameter:
param string username The database username.
db.dropAllUsers()
On this page
Definition (page 234)
Required Access (page 235)
Example (page 235)
Definition
db.dropAllUsers(writeConcern)
Removes all users from the current database.
Warning: The dropAllUsers method removes all users from the database.
The dropAllUsers method takes the following arguments:
field document writeConcern Optional. The level of write concern for the removal operation. The writeConcern document takes the same fields as the getLastError (page 354)
command.
The db.dropAllUsers() (page 234) method wraps the dropAllUsersFromDatabase (page 376)
command.
Required Access You must have the dropUser action on a database to drop a user from that database.
Example The following db.dropAllUsers() (page 234) operation drops every user from the products
database.
use products
db.dropAllUsers( {w: "majority", wtimeout: 5000} )
The n field in the results document shows the number of users removed:
{ "n" : 12, "ok" : 1 }
db.dropUser()
On this page
Definition (page 235)
Required Access (page 235)
Example (page 235)
Definition
db.dropUser(username, writeConcern)
Removes the user from the current database.
The db.dropUser() (page 235) method takes the following arguments:
param string username The name of the user to remove from the database.
param document writeConcern Optional. The level of write concern for the removal operation. The writeConcern document takes the same fields as the getLastError (page 354)
command.
The db.dropUser() (page 235) method wraps the dropUser (page 375) command.
Before dropping a user who has the userAdminAnyDatabase role, ensure you have at least one other user
with user administration privileges.
Required Access You must have the dropUser action on a database to drop a user from that database.
Example The following db.dropUser() (page 235) operation drops the reportUser1 user on the products
database.
use products
db.dropUser("reportUser1", {w: "majority", wtimeout: 5000})
db.grantRolesToUser()
On this page
Definition (page 236)
Required Access (page 236)
Example (page 236)
Definition
db.grantRolesToUser(username, roles, writeConcern)
Grants additional roles to a user.
The grantRolesToUser method uses the following syntax:
db.grantRolesToUser( "<username>", [ <roles> ], { <writeConcern> } )
To specify a role that exists in a different database, specify the role with a document.
The db.grantRolesToUser() (page 236) method wraps the grantRolesToUser (page 377) command.
Required Access You must have the grantRole action on a database to grant a role on that database.
Example Given a user accountUser01 in the products database with the following roles:
"roles" : [
{ "role" : "assetsReader",
"db" : "assets"
}
]
The following grantRolesToUser() operation gives accountUser01 the readWrite role on the
products database and the read role on the stock database.
use products
db.grantRolesToUser(
"accountUser01",
[ "readWrite" , { role: "read", db: "stock" } ],
{ w: "majority" , wtimeout: 4000 }
)
The user accountUser01 in the products database now has the following roles:
"roles" : [
    { "role" : "assetsReader",
      "db" : "assets"
    },
    { "role" : "read",
      "db" : "stock"
    },
    { "role" : "readWrite",
      "db" : "products"
    }
]
db.revokeRolesFromUser()
On this page
Definition (page 237)
Required Access (page 238)
Example (page 238)
Definition
db.revokeRolesFromUser()
Removes one or more roles from a user on the current database. The db.revokeRolesFromUser()
(page 237) method uses the following syntax:
db.revokeRolesFromUser( "<username>", [ <roles> ], { <writeConcern> } )
To specify a role that exists in a different database, specify the role with a document.
The db.revokeRolesFromUser() (page 237) method wraps the revokeRolesFromUser (page 378)
command.
Required Access You must have the revokeRole action on a database to revoke a role on that database.
Example The accountUser01 user in the products database has the following roles:
"roles" : [
    { "role" : "assetsReader",
      "db" : "assets"
    },
    { "role" : "read",
      "db" : "stock"
    },
    { "role" : "readWrite",
      "db" : "products"
    }
]
The following db.revokeRolesFromUser() (page 237) method removes two of the user's roles: the read
role on the stock database and the readWrite role on the products database, which is also the database on
which the method runs:
use products
db.revokeRolesFromUser( "accountUser01",
[ { role: "read", db: "stock" }, "readWrite" ],
{ w: "majority" }
)
The accountUser01 user in the products database now has only one remaining role:
"roles" : [
{ "role" : "assetsReader",
"db" : "assets"
}
]
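Taken together, grant and revoke behave like set operations on the roles array. The sketch below is illustrative only (these helpers are not part of the shell API):

```javascript
// Illustrative helpers only (not part of the shell API): grant appends roles
// that are not already present, revoke removes matching ones, and unrelated
// roles are left untouched.
const sameRole = (a, b) => a.role === b.role && a.db === b.db;

function grantRoles(roles, granted) {
  return roles.concat(granted.filter(g => !roles.some(r => sameRole(r, g))));
}

function revokeRoles(roles, revoked) {
  return roles.filter(r => !revoked.some(x => sameRole(x, r)));
}

let roles = [ { role: "assetsReader", db: "assets" } ];
roles = grantRoles(roles, [ { role: "readWrite", db: "products" },
                            { role: "read", db: "stock" } ]);
roles = revokeRoles(roles, [ { role: "read", db: "stock" },
                             { role: "readWrite", db: "products" } ]);
console.log(roles); // [ { role: 'assetsReader', db: 'assets' } ]
```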
db.getUser()
On this page
Definition (page 238)
Required Access (page 239)
Example (page 239)
Definition
db.getUser(username)
Returns user information for a specified user. Run this method on the user's database. The user must exist on
the database on which the method runs.
db.getUsers()
On this page
Definition (page 239)
Required Access (page 239)
Definition
db.getUsers()
Returns information for all the users in the database.
db.getUsers() (page 239) wraps the usersInfo (page 380) command.
Required Access To view another user's information, you must have the viewUser action on the other user's
database.
Users can view their own information.
Role Management Methods

Name                            Description
db.createRole()                 Creates a role and specifies its privileges.
db.updateRole()                 Updates a user-defined role.
db.dropRole()                   Deletes a user-defined role.
db.dropAllRoles()               Deletes all user-defined roles associated with a database.
db.grantPrivilegesToRole()      Assigns privileges to a user-defined role.
db.revokePrivilegesFromRole()   Removes the specified privileges from a user-defined role.
db.grantRolesToRole()           Specifies roles from which a user-defined role inherits privileges.
db.revokeRolesFromRole()        Removes inherited roles from a role.
db.getRole()                    Returns information for the specified role.
db.getRoles()                   Returns information for all the user-defined roles in a database.
db.createRole()
On this page
Definition
db.createRole(role, writeConcern)
Creates a role in a database. You can specify privileges for the role by explicitly listing the privileges or by
having the role inherit privileges from other roles or both. The role applies to the database on which you run the
method.
The db.createRole() (page 240) method takes the following arguments:
param document role A document containing the name of the role and the role definition.
param document writeConcern Optional. The level of write concern to apply to this operation. The writeConcern document uses the same fields as the getLastError (page 354)
command.
The role document has the following form:
{
role: "<name>",
privileges: [
{ resource: { <resource> }, actions: [ "<action>", ... ] },
...
],
roles: [
{ role: "<role>", db: "<database>" } | "<role>",
...
]
}
To specify a role that exists in a different database, specify the role with a document.
The db.createRole() (page 240) method wraps the createRole (page 382) command.
Behavior Except for roles created in the admin database, a role can only include privileges that apply to its database
and can only inherit from other roles in its database.
A role created in the admin database can include privileges that apply to the admin database, other databases or to
the cluster resource, and can inherit from roles in other databases as well as the admin database.
The db.createRole() (page 240) method returns a duplicate role error if the role already exists in the database.
Required Access To create a role in a database, you must have:
the createRole action on that database resource.
the grantRole action on that database to specify privileges for the new role as well as to specify roles to
inherit from.
Built-in roles userAdmin and userAdminAnyDatabase provide createRole and grantRole actions on
their respective resources.
Example The following db.createRole() (page 240) method creates the myClusterwideAdmin role on
the admin database:
use admin
db.createRole(
{
role: "myClusterwideAdmin",
privileges: [
{ resource: { cluster: true }, actions: [ "addShard" ] },
{ resource: { db: "config", collection: "" }, actions: [ "find", "update", "insert", "remove" ] },
{ resource: { db: "users", collection: "usersCollection" }, actions: [ "update", "insert", "remove" ] },
{ resource: { db: "", collection: "" }, actions: [ "find" ] }
241
],
roles: [
{ role: "read", db: "admin" }
]
},
{ w: "majority" , wtimeout: 5000 }
)
db.updateRole()
Definition
db.updateRole(rolename, update, writeConcern)
Updates a user-defined role. The db.updateRole() (page 242) method must run on the role's database.
An update to a field completely replaces the previous field's values. To grant or remove roles or privileges
without replacing all values, use one or more of the following methods:
db.grantRolesToRole() (page 249)
db.grantPrivilegesToRole() (page 246)
db.revokeRolesFromRole() (page 250)
db.revokePrivilegesFromRole() (page 247)
Warning: An update to the privileges or roles array completely replaces the previous array's values.
The updateRole() method uses the following syntax:
db.updateRole(
"<rolename>",
{
privileges:
[
{ resource: { <resource> }, actions: [ "<action>", ... ] },
...
],
roles:
[
{ role: "<role>", db: "<database>" } | "<role>",
...
]
},
{ <writeConcern> }
)
To specify a role that exists in a different database, specify the role with a document.
The db.updateRole() (page 242) method wraps the updateRole (page 384) command.
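The replacement semantics in the warning above can be illustrated in plain JavaScript (a minimal sketch; applyRoleUpdate is a hypothetical illustration of the behavior, not a shell method):

```javascript
// Sketch of updateRole's replacement semantics: any field present in the
// update document replaces the stored field wholesale; absent fields are kept.
function applyRoleUpdate(role, update) {
  const next = Object.assign({}, role);
  if (update.privileges !== undefined) next.privileges = update.privileges; // full replacement
  if (update.roles !== undefined) next.roles = update.roles;                // full replacement
  return next;
}

const stored = {
  role: "inventoryControl",
  privileges: [ { resource: { db: "products", collection: "" }, actions: [ "find" ] } ],
  roles: [ { role: "read", db: "products" } ]
};

// Updating only `privileges` leaves `roles` untouched but discards the
// previous privileges array entirely rather than merging into it.
const updated = applyRoleUpdate(stored, {
  privileges: [ { resource: { db: "products", collection: "clothing" }, actions: [ "update" ] } ]
});

console.log(updated.privileges.length); // 1: the old privilege is gone, not merged
console.log(updated.roles.length);      // 1: roles were not in the update, so kept
```

This is why the grant/revoke helpers listed above exist: they modify the arrays incrementally instead of replacing them.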
Behavior Except for roles created in the admin database, a role can only include privileges that apply to its database
and can only inherit from other roles in its database.
A role created in the admin database can include privileges that apply to the admin database, to other databases, or to
the cluster resource, and can inherit from roles in other databases as well as from the admin database.
Required Access You must have the revokeRole action on all databases in order to update a role.
You must have the grantRole action on the database of each role in the roles array to update the array.
You must have the grantRole action on the database of each privilege in the privileges array to update the
array. If a privilege's resource spans databases, you must have grantRole on the admin database. A privilege
spans databases if the privilege is any of the following:
- a collection in all databases
- all collections and all databases
- the cluster resource
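The spans-databases test above can be sketched as a small predicate (hypothetical helper name; not part of the mongo shell):

```javascript
// A resource spans databases when it names the cluster, a collection in
// all databases (db: "" with a collection name), or all collections in
// all databases (db: "" and collection: "").
function privilegeSpansDatabases(resource) {
  if (resource.cluster === true) return true; // the cluster resource
  return resource.db === "";                  // db: "" means "all databases"
}

console.log(privilegeSpansDatabases({ cluster: true }));                  // true
console.log(privilegeSpansDatabases({ db: "", collection: "accounts" })); // true
console.log(privilegeSpansDatabases({ db: "", collection: "" }));         // true
console.log(privilegeSpansDatabases({ db: "products", collection: "" })); // false
```

For the first three cases you need grantRole on the admin database; the last requires it only on products.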
Example The following db.updateRole() (page 242) method replaces the privileges and the roles for
the inventoryControl role that exists in the products database. The method runs on the database that contains
inventoryControl:
use products
db.updateRole(
"inventoryControl",
{
privileges:
[
{
resource: { db:"products", collection:"clothing" },
actions: [ "update", "createCollection", "createIndex"]
}
],
roles:
[
{
role: "read",
db: "products"
}
]
},
{ w:"majority" }
)
db.dropRole()
Definition
db.dropRole(rolename, writeConcern)
Deletes a user-defined role from the database on which you run the method.
The db.dropRole() (page 244) method takes the following arguments:
param string rolename The name of the user-defined role to remove from the database.
param document writeConcern Optional. The level of write concern for the removal operation. The writeConcern document takes the same fields as the getLastError (page 354)
command.
The db.dropRole() (page 244) method wraps the dropRole (page 386) command.
Required Access You must have the dropRole action on a database to drop a role from that database.
Example The following operations remove the readPrices role from the products database:
use products
db.dropRole( "readPrices", { w: "majority" } )
db.dropAllRoles()
Definition
db.dropAllRoles(writeConcern)
Deletes all user-defined roles on the database where you run the method.
Warning: The dropAllRoles method removes all user-defined roles from the database.
The dropAllRoles method takes the following argument:
field document writeConcern Optional. The level of write concern for the removal operation. The writeConcern document takes the same fields as the getLastError (page 354)
command.
Returns The number of user-defined roles dropped.
The db.dropAllRoles() (page 245) method wraps the dropAllRolesFromDatabase (page 386)
command.
Required Access You must have the dropRole action on a database to drop a role from that database.
Example The following operation drops all user-defined roles from the products database, using a write concern of majority:
use products
db.dropAllRoles( { w: "majority" } )
db.grantPrivilegesToRole()
Definition
db.grantPrivilegesToRole(rolename, privileges, writeConcern)
Grants additional privileges to a user-defined role.
The grantPrivilegesToRole() method uses the following syntax:
db.grantPrivilegesToRole(
    "<rolename>",
    [
      { resource: { <resource> }, actions: [ "<action>", ... ] },
      ...
    ],
    { <writeConcern> }
)
Example The following db.grantPrivilegesToRole() operation grants two additional privileges to the
inventoryCntrl role that exists in the products database. The operation runs on that database:
use products
db.grantPrivilegesToRole(
    "inventoryCntrl",
    [
      {
        resource: { db: "products", collection: "" },
        actions: [ "insert" ]
      },
      {
        resource: { db: "products", collection: "system.js" },
        actions: [ "find" ]
      }
    ],
    { w: "majority" }
)
The first privilege permits users with this role to perform the insert action on all collections of the products
database, except the system collections (page 884). To access a system collection, a privilege must explicitly specify
the system collection in the resource document, as in the second privilege.
The second privilege permits users with this role to perform the find action on the products database's system
collection named system.js (page 885).
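The system-collection rule above can be sketched as follows (hypothetical helper; resource matching is simplified here to db/collection resources only):

```javascript
// A resource with collection: "" covers every collection in the database
// *except* system collections, which must be named explicitly.
function resourceCovers(resource, db, collection) {
  if (resource.db !== db) return false;
  if (resource.collection === "") return !collection.startsWith("system.");
  return resource.collection === collection;
}

console.log(resourceCovers({ db: "products", collection: "" }, "products", "clothing"));           // true
console.log(resourceCovers({ db: "products", collection: "" }, "products", "system.js"));          // false
console.log(resourceCovers({ db: "products", collection: "system.js" }, "products", "system.js")); // true
```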
db.revokePrivilegesFromRole()
Definition
db.revokePrivilegesFromRole(rolename, privileges, writeConcern)
Removes the specified privileges from the user-defined role on the database where the method runs. The
revokePrivilegesFromRole method has the following syntax:
db.revokePrivilegesFromRole(
"<rolename>",
[
{ resource: { <resource> }, actions: [ "<action>", ... ] },
...
],
{ <writeConcern> }
)
The db.revokePrivilegesFromRole() (page 247) method wraps the revokePrivilegesFromRole command.
Behavior To revoke a privilege, the resource document pattern must match exactly the resource field of
that privilege. The actions field can be a subset of the privilege's actions or match it exactly.
For example, given the role accountRole in the products database with the following privilege that specifies the
products database as the resource:
{
"resource" : {
"db" : "products",
"collection" : ""
},
"actions" : [
"find",
"update"
]
}
You cannot revoke find and/or update from just one collection in the products database. The following operations result in no change to the role:
use products
db.revokePrivilegesFromRole(
"accountRole",
[
{
resource : {
db : "products",
collection : "gadgets"
},
actions : [
"find",
"update"
]
}
]
)
db.revokePrivilegesFromRole(
"accountRole",
[
{
resource : {
db : "products",
collection : "gadgets"
},
actions : [
"find"
]
}
]
)
To revoke the "find" and/or the "update" action from the role accountRole, you must match the resource
document exactly. For example, the following operation revokes just the "find" action from the existing privilege.
use products
db.revokePrivilegesFromRole(
"accountRole",
[
{
resource : {
db : "products",
collection : ""
},
actions : [
"find"
]
}
]
)
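The exact-resource, subset-actions rule shown above can be sketched as a plain JavaScript predicate (hypothetical helpers; not part of the mongo shell):

```javascript
// The resource must match exactly, while the actions being revoked may
// be a subset of the privilege's actions.
function sameResource(a, b) {
  return a.db === b.db && a.collection === b.collection;
}
function canRevoke(privilege, request) {
  return sameResource(privilege.resource, request.resource) &&
         request.actions.every(function (act) { return privilege.actions.includes(act); });
}

const privilege = { resource: { db: "products", collection: "" }, actions: [ "find", "update" ] };

// Narrowing the resource to one collection does not match: no change to the role.
console.log(canRevoke(privilege,
  { resource: { db: "products", collection: "gadgets" }, actions: [ "find" ] })); // false
// An exact resource match with a subset of the actions does match.
console.log(canRevoke(privilege,
  { resource: { db: "products", collection: "" }, actions: [ "find" ] }));        // true
```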
Required Access You must have the revokeRole action on the database a privilege targets in order to revoke
that privilege. If the privilege targets multiple databases or the cluster resource, you must have the revokeRole
action on the admin database.
Example The following operation removes multiple privileges from the associate role:
db.revokePrivilegesFromRole(
"associate",
[
{
resource: { db: "products", collection: "" },
actions: [ "createCollection", "createIndex", "find" ]
},
{
resource: { db: "products", collection: "orders" },
actions: [ "insert" ]
}
],
{ w: "majority" }
)
db.grantRolesToRole()
Definition
db.grantRolesToRole(rolename, roles, writeConcern)
Grants roles to a user-defined role.
The grantRolesToRole method uses the following syntax:
db.grantRolesToRole( "<rolename>", [ <roles> ], { <writeConcern> } )
In the roles field, you can specify both built-in roles and user-defined roles.
To specify a role that exists in the same database where db.grantRolesToRole() (page 249) runs, you
can specify the role by its name alone:
"readWrite"
To specify a role that exists in a different database, specify the role with a document.
The db.grantRolesToRole() (page 249) method wraps the grantRolesToRole (page 391) command.
Behavior A role can inherit privileges from other roles in its database. A role created on the admin database can
inherit privileges from roles in any database.
Required Access You must have the grantRole action on a database to grant a role on that database.
Example The following grantRolesToRole() operation updates the productsReaderWriter role in the
products database to inherit the privileges of productsReader role:
use products
db.grantRolesToRole(
"productsReaderWriter",
[ "productsReader" ],
{ w: "majority" , wtimeout: 5000 }
)
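Role inheritance as described in Behavior above amounts to a transitive closure over the roles arrays. A sketch, using a hypothetical in-memory layout rather than mongod's actual role storage:

```javascript
// Each role maps to the roles it inherits from; the effective set of
// inherited roles is the transitive closure, computed here by BFS.
function inheritedRoles(roles, start) {
  const seen = new Set([start]);
  const queue = [start];
  while (queue.length) {
    const current = queue.shift();
    for (const parent of roles[current] || []) {
      if (!seen.has(parent)) { seen.add(parent); queue.push(parent); }
    }
  }
  seen.delete(start); // report only inherited roles, not the role itself
  return Array.from(seen).sort();
}

const roles = {
  productsReaderWriter: [ "productsReader" ], // granted by the example above
  productsReader: [ "read" ]
};
console.log(inheritedRoles(roles, "productsReaderWriter")); // [ 'productsReader', 'read' ]
```

After the grant above, productsReaderWriter also receives productsReader's own inherited privileges, as the closure shows.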
db.revokeRolesFromRole()
Definition
db.revokeRolesFromRole(rolename, roles, writeConcern)
Removes the specified inherited roles from a role.
The revokeRolesFromRole method uses the following syntax:
db.revokeRolesFromRole( "<rolename>", [ <roles> ], { <writeConcern> } )
param document writeConcern Optional. The level of write concern to apply to this operation. The writeConcern document uses the same fields as the getLastError (page 354)
command.
In the roles field, you can specify both built-in roles and user-defined roles.
To specify a role that exists in the same database where db.revokeRolesFromRole() (page 250) runs,
you can specify the role by its name alone:
"readWrite"
To specify a role that exists in a different database, specify the role with a document.
The db.revokeRolesFromRole() (page 250) method wraps the revokeRolesFromRole (page 392)
command.
Required Access You must have the revokeRole action on a database to revoke a role on that database.
Example The purchaseAgents role in the emea database inherits privileges from several other roles, as listed
in the roles array:
{
"_id" : "emea.purchaseAgents",
"role" : "purchaseAgents",
"db" : "emea",
"privileges" : [],
"roles" : [
{
"role" : "readOrdersCollection",
"db" : "emea"
},
{
"role" : "readAccountsCollection",
"db" : "emea"
},
{
"role" : "writeOrdersCollection",
"db" : "emea"
}
]
}
The following db.revokeRolesFromRole() (page 250) operation on the emea database removes two roles
from the purchaseAgents role:
use emea
db.revokeRolesFromRole( "purchaseAgents",
[
"writeOrdersCollection",
"readOrdersCollection"
],
{ w: "majority" , wtimeout: 5000 }
)
After the operation, the purchaseAgents role inherits from just one role:
{
"_id" : "emea.purchaseAgents",
"role" : "purchaseAgents",
"db" : "emea",
"privileges" : [],
"roles" : [
{
"role" : "readAccountsCollection",
"db" : "emea"
}
]
}
db.getRole()
Definition
db.getRole(rolename, showPrivileges)
Returns the roles from which this role inherits privileges. Optionally, the method can also return all of the role's
privileges.
Run db.getRole() (page 252) from the database that contains the role. The command can retrieve information for both user-defined roles and built-in roles.
The db.getRole() (page 252) method takes the following arguments:
param string rolename The name of the role.
param document showPrivileges Optional. If true, returns the role's privileges. Pass this argument as a document: { showPrivileges: true }.
db.getRole() (page 252) wraps the rolesInfo (page 394) command.
Required Access To view a role's information, you must be either explicitly granted the role or must have the
viewRole action on the role's database.
Examples The following operation returns role inheritance information for the role associate defined on the
products database:
use products
db.getRole( "associate" )
The following operation returns role inheritance information and privileges for the role associate defined on the
products database:
use products
db.getRole( "associate", { showPrivileges: true } )
db.getRoles()
Definition
db.getRoles()
Returns information for all the roles in the database on which the command runs. The method can be run with
or without an argument.
If run without an argument, db.getRoles() (page 253) returns inheritance information for the database's
user-defined roles.
To return more information, pass db.getRoles() (page 253) a document with the following fields:
field integer rolesInfo Set this field to 1 to retrieve all user-defined roles.
field boolean showPrivileges Optional. Set the field to true to show role privileges, including
both privileges inherited from other roles and privileges defined directly. By default, the command returns only the roles from which this role inherits privileges and does not return specific
privileges.
field boolean showBuiltinRoles Optional. Set to true to display built-in roles as well as user-defined
roles.
db.getRoles() (page 253) wraps the rolesInfo (page 394) command.
Required Access To view a role's information, you must be either explicitly granted the role or must have the
viewRole action on the role's database.
Example The following operations return documents for all the roles on the products database, including role
privileges and built-in roles:
db.getRoles(
{
rolesInfo: 1,
showPrivileges: true,
showBuiltinRoles: true
}
)
2.1.8 Replication
Replication Methods
rs.add() (page 254): Adds a member to a replica set.
rs.addArb() (page 255): Adds an arbiter to a replica set.
rs.conf() (page 256): Returns the replica set configuration document.
rs.freeze() (page 257): Prevents the current member from seeking election as primary for a period of time.
rs.help() (page 257): Returns basic help text for replica set functions.
rs.initiate() (page 257): Initializes a new replica set.
rs.printReplicationInfo() (page 258): Prints a report of the status of the replica set from the perspective of the primary.
rs.printSlaveReplicationInfo() (page 259): Prints a report of the status of the replica set from the perspective of the secondaries.
rs.reconfig() (page 259): Re-configures a replica set by applying a new replica set configuration object.
rs.remove() (page 261): Removes a member from a replica set.
rs.slaveOk() (page 261): Sets the slaveOk property for the current connection. Deprecated. Use readPref() (page 151) and Mongo.setReadPref() (page 293) to set read preference.
rs.status() (page 261): Returns a document with information about the state of the replica set.
rs.stepDown() (page 262): Causes the current primary to become a secondary, which forces an election.
rs.syncFrom() (page 263): Sets the member that this replica set member will sync from, overriding the default sync target selection logic.
rs.add()
Definition
rs.add()
Adds a member to a replica set. To run the method, you must connect to the primary of the replica set.
param string, document host The new member to add to the replica set.
If a string, specify the hostname and optionally the port number for the new member. See Pass
a Hostname String to rs.add() (page 255) for an example.
If a document, specify a replica set member configuration document as found in the members
array. You must specify the members[n].host field in the member configuration document. See Pass a Member Configuration Document to rs.add() (page 255)
for an example.
See the https://docs.mongodb.org/manual/reference/replica-configuration
document for full documentation of all replica set configuration options.
param boolean arbiterOnly Optional. Applies only if the <host> value is a string. If true, the
added host is an arbiter.
rs.add() (page 254) provides a wrapper around some of the functionality of the replSetReconfig (page 403)
database command and the corresponding mongo (page 794) shell helper rs.reconfig() (page 259).
See the https://docs.mongodb.org/manual/reference/replica-configuration document
for full documentation of all replica set configuration options.
Behavior rs.add() (page 254) can, in some cases, trigger an election for primary which will disconnect the shell.
In such cases, the mongo (page 794) shell displays an error even if the operation succeeds.
Example
Pass a Hostname String to rs.add() The following operation adds a mongod (page 762) instance, running on
the host mongodb3.example.net and accessible on the default port 27017:
rs.add('mongodb3.example.net:27017')
Pass a Member Configuration Document to rs.add() Changed in version 3.0.0: Previous versions required
an _id field in the document you passed to rs.add() (page 254). After 3.0.0 you can omit the _id field in this
document. members[n]._id describes the requirements for specifying _id.
The following operation adds a mongod (page 762) instance, running on the host mongodb4.example.net and
accessible on the default port 27017, as a priority 0 secondary member:
rs.add( { host: "mongodb4.example.net:27017", priority: 0 } )
You must specify the members[n].host field in the member configuration document.
See the https://docs.mongodb.org/manual/reference/replica-configuration for the available replica set member configuration settings.
See https://docs.mongodb.org/manual/administration/replica-sets for more examples and
information.
rs.addArb()
Description
rs.addArb(host)
Adds a new arbiter to an existing replica set.
The rs.addArb() (page 255) method takes the following parameter:
param string host Specifies the hostname and optionally the port number of the arbiter member to
add to the replica set.
rs.conf()
Definition
rs.conf()
Returns a document that contains the current replica set configuration.
The method wraps the replSetGetConfig (page 410) command.
Output Example The following document provides a representation of a replica set configuration document. The
configuration of your replica set may include only a subset of these settings:
{
_id: <string>,
version: <int>,
protocolVersion: <number>,
members: [
{
_id: <int>,
host: <string>,
arbiterOnly: <boolean>,
buildIndexes: <boolean>,
hidden: <boolean>,
priority: <number>,
tags: <document>,
slaveDelay: <int>,
votes: <number>
},
...
],
settings: {
chainingAllowed : <boolean>,
heartbeatIntervalMillis : <int>,
heartbeatTimeoutSecs: <int>,
electionTimeoutMillis : <int>,
getLastErrorModes : <document>,
getLastErrorDefaults : <document>
}
}
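Because rs.conf() returns an ordinary document, scripts can inspect it with plain JavaScript. A sketch over a sample document (field names follow the output above; the helper is hypothetical):

```javascript
// List the hosts of voting members from a replica set configuration
// document of the shape shown above.
function votingMembers(cfg) {
  return cfg.members
    .filter(function (m) { return m.votes > 0; })
    .map(function (m) { return m.host; });
}

const cfg = {
  _id: "rs0",
  version: 1,
  members: [
    { _id: 0, host: "mongodb0.example.net:27017", votes: 1 },
    { _id: 1, host: "mongodb1.example.net:27017", votes: 1 },
    { _id: 2, host: "mongodb2.example.net:27017", votes: 0 } // non-voting member
  ]
};
console.log(votingMembers(cfg)); // the two hosts with votes > 0
```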
rs.freeze()
Description
rs.freeze(seconds)
Makes the current replica set member ineligible to become primary for the period specified.
The rs.freeze() (page 257) method has the following parameter:
param number seconds The duration the member is ineligible to become primary.
rs.freeze() (page 257) provides a wrapper around the database command replSetFreeze (page 398).
rs.help()
rs.help()
Returns a basic help text for all of the replication related shell functions.
rs.initiate()
Description
rs.initiate(configuration)
Initiates a replica set. Optionally takes a configuration argument in the form of a document that holds the
configuration of a replica set.
The rs.initiate() (page 257) method has the following parameter:
param document configuration Optional. A document that specifies configuration settings for the new replica set. If a configuration is not specified, MongoDB uses a default configuration.
The rs.initiate() (page 257) method provides a wrapper around the replSetInitiate (page 402)
database command.
rs.printReplicationInfo()
Definition
rs.printReplicationInfo()
New in version 2.6.
Prints a formatted report of the replica set member's oplog. The displayed report formats the data returned by
db.getReplicationInfo() (page 188). The output of rs.printReplicationInfo() (page 258)
is identical to that of db.printReplicationInfo() (page 192).
Note:
The rs.printReplicationInfo() (page 258) in the mongo (page 794) shell does
not return JSON. Use rs.printReplicationInfo() (page 258) for manual inspection, and
db.getReplicationInfo() (page 188) in scripts.
Output Example The following example is a sample output from the rs.printReplicationInfo()
(page 258) method run on the primary:
configured oplog size:   192MB
log length start to end: 65422secs (18.17hrs)
oplog first event time:  Mon Jun 23 2014 17:47:18 GMT-0400 (EDT)
oplog last event time:   Tue Jun 24 2014 11:57:40 GMT-0400 (EDT)
now:                     Thu Jun 26 2014 14:24:39 GMT-0400 (EDT)
Output Fields rs.printReplicationInfo() (page 258) formats and prints the data returned by
db.getReplicationInfo() (page 188):
configured oplog size Displays the db.getReplicationInfo.logSizeMB (page 188) value.
log length start to end Displays the db.getReplicationInfo.timeDiff (page 188) and db.getReplicationInfo.timeDiffHours (page 188) values.
rs.printSlaveReplicationInfo()
Definition
rs.printSlaveReplicationInfo()
Returns a formatted report of the status of a replica set from the perspective of the secondary member of the set.
The output is identical to that of db.printSlaveReplicationInfo() (page 194).
Output The following is example output from the rs.printSlaveReplicationInfo() (page 259) method
issued on a replica set with two secondary members:
source: m1.example.net:27017
syncedTo: Thu Apr 10 2014
0 secs (0 hrs) behind the primary
source: m2.example.net:27017
syncedTo: Thu Apr 10 2014
0 secs (0 hrs) behind the primary
A delayed member may show as 0 seconds behind the primary when the inactivity period on the primary is greater
than the members[n].slaveDelay value.
rs.reconfig()
Definition
rs.reconfig(configuration, force)
Reconfigures an existing replica set, overwriting the existing replica set configuration. To run the method, you
must connect to the primary of the replica set.
param document configuration A document that specifies the configuration of a replica set.
param document force Optional. If set as { force: true }, this forces the replica set to
accept the new configuration even if a majority of the members are not accessible. Use with
caution, as this can lead to rollback situations.
To reconfigure an existing replica set, first retrieve the current configuration with rs.conf() (page 256),
modify the configuration document as needed, and then pass the modified document to rs.reconfig()
(page 259).
rs.reconfig() (page 259) provides a wrapper around the replSetReconfig (page 403) command.
The force parameter allows a reconfiguration command to be issued to a non-primary node.
Behavior The rs.reconfig() (page 259) shell method can trigger the current primary to step down in some
situations. When the primary steps down, it forcibly closes all client connections. This is by design. Since it may take
a period of time to elect a new primary, schedule reconfiguration changes during maintenance periods to minimize
loss of write availability.
Warning: Using rs.reconfig() (page 259) with { force: true } can lead to rollback of committed
writes. Exercise caution when using this option.
The following sequence of operations updates the members[n].priority of the second member. The operations
are issued through a mongo (page 794) shell connected to the primary.
cfg = rs.conf();
cfg.members[1].priority = 2;
rs.reconfig(cfg);
1. The first statement uses the rs.conf() (page 256) method to retrieve a document containing the current
configuration for the replica set and sets the document to the local variable cfg.
2. The second statement sets a members[n].priority value to the second document in the members array.
To access the member configuration document in the array, the statement uses the array index and not the replica
set member's members[n]._id field. For additional settings, see replica set configuration settings.
3. The last statement calls the rs.reconfig() (page 259) method with the modified cfg to initialize this new
configuration. Upon successful reconfiguration, the replica set configuration will resemble the following:
{
"_id" : "rs0",
"version" : 2,
"members" : [
{
"_id" : 0,
"host" : "mongodb0.example.net:27017"
},
{
"_id" : 1,
"host" : "mongodb1.example.net:27017",
"priority" : 2
},
{
"_id" : 2,
"host" : "mongodb2.example.net:27017"
}
]
}
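The index-versus-_id distinction called out in step 2 above can be sketched in plain JavaScript (hypothetical helper; not a shell method):

```javascript
// Members are addressed by array position, so to change a member found
// by its _id you must first locate its index in the members array.
function setPriorityById(cfg, memberId, priority) {
  const idx = cfg.members.findIndex(function (m) { return m._id === memberId; });
  if (idx === -1) throw new Error("no member with _id " + memberId);
  cfg.members[idx].priority = priority;
  return cfg;
}

const cfg = {
  _id: "rs0",
  version: 1,
  members: [
    { _id: 5, host: "mongodb0.example.net:27017" }, // _id need not equal the index
    { _id: 6, host: "mongodb1.example.net:27017" }
  ]
};
setPriorityById(cfg, 6, 2);
console.log(cfg.members[1].priority); // 2
```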
rs.remove()
Definition
rs.remove(hostname)
Removes the member described by the hostname parameter from the current replica set. This function will
disconnect the shell briefly and forces a reconnection as the replica set renegotiates which member will be
primary. As a result, the shell will display an error even if this command succeeds.
The rs.remove() (page 261) method has the following parameter:
param string hostname The hostname of a system in the replica set.
Note: Before running the rs.remove() (page 261) operation, you must shut down the replica set member
that you're removing.
Changed in version 2.2: This procedure is no longer required when using rs.remove() (page 261), but it
remains good practice.
rs.slaveOk()
rs.slaveOk()
Provides a shorthand for the following operation:
db.getMongo().setSlaveOk()
This allows the current connection to run read operations on secondary members. See the
readPref() (page 151) method for more fine-grained control over read preference in the mongo
(page 794) shell.
rs.status()
rs.status()
Returns A document with status information.
This output reflects the current status of the replica set, using data derived from the heartbeat packets sent by the
other members of the replica set.
This method provides a wrapper around the replSetGetStatus (page 399) command. See the documentation of the command for a complete description of the output (page 399).
rs.stepDown()
Description
rs.stepDown(stepDownSecs, secondaryCatchUpPeriodSecs)
Forces the primary of the replica set to become a secondary, triggering an election for primary. The method
steps down the primary for a specified number of seconds; during this period, the stepdown member is ineligible
to become primary.
The method only steps down the primary if an electable secondary is up-to-date with the primary, waiting
up to 10 seconds for a secondary to catch up.
The method is only valid against the primary and will error if run on a non-primary member.
The rs.stepDown() (page 262) method has the following parameters:
param number stepDownSecs The number of seconds to step down the primary, during which time
the stepdown member is ineligible for becoming primary. If you specify a non-numeric value,
the command uses 60 seconds.
The stepdown period starts from the time that the mongod (page 762) receives the command.
The stepdown period must be greater than the secondaryCatchUpPeriodSecs.
param number secondaryCatchUpPeriodSecs Optional. The number of seconds that mongod
will wait for an electable secondary to catch up to the primary.
When specified, secondaryCatchUpPeriodSecs overrides the default wait time of 10
seconds.
rs.stepDown() (page 262) provides a wrapper around the command replSetStepDown (page 404).
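The parameter constraints above (a non-numeric stepDownSecs falls back to 60 seconds, the default catch-up period is 10 seconds, and the stepdown period must exceed the catch-up period) can be sketched as a small validator (hypothetical; not the server's actual argument handling):

```javascript
// Resolve defaults and enforce the ordering constraint between the
// stepdown period and the secondary catch-up period.
function stepDownArgs(stepDownSecs, secondaryCatchUpPeriodSecs) {
  const down = typeof stepDownSecs === "number" ? stepDownSecs : 60; // non-numeric -> 60s
  const catchUp = secondaryCatchUpPeriodSecs === undefined ? 10 : secondaryCatchUpPeriodSecs;
  if (down <= catchUp) {
    throw new Error("stepdown period must be longer than secondaryCatchUpPeriodSecs");
  }
  return { stepDownSecs: down, secondaryCatchUpPeriodSecs: catchUp };
}

console.log(stepDownArgs(undefined, undefined)); // { stepDownSecs: 60, secondaryCatchUpPeriodSecs: 10 }
console.log(stepDownArgs(120, 15).stepDownSecs); // 120
```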
Behavior New in version 3.0.
Before stepping down, rs.stepDown() (page 262) will attempt to terminate long running user operations that
would block the primary from stepping down, such as an index build, a write operation or a map-reduce job.
To avoid rollbacks, rs.stepDown() (page 262), by default, only steps down the primary if an electable secondary is completely caught up with the primary. The command will wait up to either 10 seconds or the
secondaryCatchUpPeriodSecs for a secondary to catch up.
If no electable secondary meets this criterion by the waiting period, the primary does not step down and the method
throws an exception.
Upon successful stepdown, rs.stepDown() (page 262) forces all clients currently connected to the database to
disconnect. This helps ensure that the clients maintain an accurate view of the replica set.
Because the disconnect includes the connection used to run the command, you cannot retrieve the return status of the
command if the command completes successfully; i.e. you can only retrieve the return status of the command if it
errors. When running the command in a script, the script should account for this behavior.
Note: rs.stepDown() (page 262) blocks all writes to the primary while it runs.
rs.syncFrom()
rs.syncFrom()
New in version 2.2.
Provides a wrapper around the replSetSyncFrom (page 406), which allows administrators to configure the
member of a replica set that the current member will pull data from. Specify the name of the member you want
to replicate from in the form of [hostname]:[port].
See replSetSyncFrom (page 406) for more details.
See https://docs.mongodb.org/manual/tutorial/configure-replica-set-secondary-sync-target
for details on how to use this command.
2.1.9 Sharding
Sharding Methods
sh._adminCommand() (page 266): Runs a database command against the admin database, like db.runCommand() (page 195), but can confirm that it is issued against a mongos (page 784).
sh.getBalancerLockDetails() (page 266): Reports on the active balancer lock, if it exists.
sh._checkFullName() (page 266): Tests a namespace to determine if it is well formed.
sh._checkMongos() (page 267): Tests to see if the mongo (page 794) shell is connected to a mongos (page 784) instance.
sh._lastMigration() (page 267): Reports on the last chunk migration.
sh.addShard() (page 268): Adds a shard to a sharded cluster.
sh.addShardTag() (page 269): Associates a shard with a tag, to support tag aware sharding.
sh.addTagRange() (page 269): Associates a range of shard keys with a shard tag, to support tag aware sharding.
sh.removeTagRange() (page 270): Removes an association between a range of shard keys and a shard tag. Use to manage tag aware sharding.
sh.disableBalancing() (page 271): Disables balancing on a single collection in a sharded database. Does not affect balancing of other collections in a sharded cluster.
sh.enableBalancing() (page 272): Activates the sharded collection balancer process if previously disabled using sh.disableBalancing() (page 271).
sh.enableSharding() (page 272): Enables sharding on a specific database.
sh.getBalancerHost() (page 272): Returns the name of a mongos (page 784) that is responsible for the balancer process.
sh.getBalancerState() (page 273): Returns a boolean to report if the balancer is currently enabled.
sh.help() (page 273): Returns help text for the sh methods.
sh.isBalancerRunning() (page 273): Returns a boolean to report if the balancer process is currently migrating chunks.
sh.moveChunk() (page 274): Migrates a chunk in a sharded cluster.
sh.removeShardTag() (page 274): Removes the association between a shard and a shard tag.
sh.setBalancerState() (page 275): Enables or disables the balancer which migrates chunks between shards.
sh.shardCollection() (page 276): Enables sharding for a collection.
sh.splitAt() (page 276): Divides an existing chunk into two chunks using a specific value of the shard key as the dividing point.
sh.splitFind() (page 277): Divides an existing chunk that contains a document matching a query into two approximately equal chunks.
sh.startBalancer() (page 277): Enables the balancer and waits for balancing to start.
sh.status() (page 278): Reports on the status of a sharded cluster, as db.printShardingStatus() (page 193).
sh.stopBalancer() (page 281): Disables the balancer and waits for any in progress balancing rounds to complete.
sh.waitForBalancer() (page 282): Internal. Waits for the balancer state to change.
sh.waitForBalancerOff() (page 282): Internal. Waits until the balancer stops running.
sh.waitForDLock() (page 283): Internal. Waits for a specified distributed sharded cluster lock.
sh._adminCommand()
On this page
Definition (page 266)
Definition
sh._adminCommand(command, checkMongos)
Runs a database command against the admin database of a mongos (page 784) instance.
param string command A database command to run against the admin database.
param boolean checkMongos Require verification that the shell is connected to a mongos
(page 784) instance.
See also:
db.runCommand() (page 195)
sh.getBalancerLockDetails()
sh.getBalancerLockDetails()
Reports on the active balancer lock, if it exists.
Returns null if the lock document does not exist or the lock is not taken. Otherwise, returns the lock
document.
Return type Document or null.
sh._checkFullName()
On this page
Definition (page 266)
Definition
sh._checkFullName(namespace)
Verifies that a namespace name is well formed.
If the namespace is well formed, the sh._checkFullName() (page 266) method exits with no message.
Throws If the namespace is not well formed, sh._checkFullName() (page 266) throws: name
needs to be fully qualified <db>.<collection>
The sh._checkFullName() (page 266) method has the following parameter:
param string namespace The namespace of a collection. The namespace is the combination of the
database name and the collection name. Enclose the namespace in quotation marks.
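The check can be sketched in plain JavaScript. This is a hypothetical re-implementation for illustration (the name checkFullName and the logic merely mirror the description above); the real helper is built into the mongo (page 794) shell:

```javascript
// Sketch of the validation sh._checkFullName() describes: a well-formed
// namespace is "<db>.<collection>" -- a database name, a dot, and a
// non-empty collection name.
function checkFullName(namespace) {
  const dot = typeof namespace === "string" ? namespace.indexOf(".") : -1;
  if (dot <= 0 || dot === namespace.length - 1) {
    throw new Error("name needs to be fully qualified <db>.<collection>");
  }
}

checkFullName("records.people"); // well formed: exits with no message
// checkFullName("records");     // throws: name needs to be fully qualified ...
```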
266
sh._checkMongos()
sh._checkMongos()
Returns nothing
Throws not connected to a mongos
The sh._checkMongos() (page 267) method throws an error message if the mongo (page 794) shell is not
connected to a mongos (page 784) instance. Otherwise it exits (no return document or return code).
sh._lastMigration()
On this page
Definition (page 267)
Output (page 267)
Definition
sh._lastMigration(namespace)
Returns information on the last migration performed on the specified database or collection.
The sh._lastMigration() (page 267) method has the following parameter:
param string namespace The namespace of a database or collection within the current database.
Output The sh._lastMigration() (page 267) method returns a document with details about the last migration
performed on the database or collection. The document contains the following output:
sh._lastMigration._id
The id of the migration task.
sh._lastMigration.server
The name of the server.
sh._lastMigration.clientAddr
The IP address and port number of the server.
sh._lastMigration.time
The time of the last migration, formatted as ISODate.
sh._lastMigration.what
The specific type of migration.
sh._lastMigration.ns
The complete namespace of the collection affected by the migration.
sh._lastMigration.details
A document containing details about the migrated chunk. The document includes min and max embedded
documents with the bounds of the migrated chunk.
sh.addShard()
267
On this page
Definition (page 268)
Considerations (page 268)
Example (page 268)
Definition
sh.addShard(host)
Adds a database instance or replica set to a sharded cluster. The optimal configuration is to deploy shards across
replica sets. This method must be run on a mongos (page 784) instance.
The sh.addShard() (page 268) method has the following parameter:
param string host The hostname of either a standalone database instance or of a replica set. Include
the port number if the instance is running on a non-standard port. Include the replica set name if
the instance is a replica set, as explained below.
The sh.addShard() (page 268) method has the following prototype form:
sh.addShard("<host>")
Warning: Do not use localhost for the hostname unless your configuration server is also running on
localhost.
New in version 2.6: mongos (page 784) installed from official .deb and .rpm packages have the
bind_ip configuration set to 127.0.0.1 by default.
The sh.addShard() (page 268) method is a helper for the addShard (page 413) command. The addShard
(page 413) command has additional options which are not available with this helper.
Considerations
Balancing When you add a shard to a sharded cluster, you affect the balance of chunks among the shards of a cluster
for all existing sharded collections. The balancer will begin migrating chunks so that the cluster will achieve balance.
See https://docs.mongodb.org/manual/core/sharding-balancing for more information.
Changed in version 2.6: Chunk migrations can have an impact on disk space. Starting in MongoDB 2.6, the source
shard automatically archives the migrated documents by default. For details, see moveChunk-directory.
Hidden Members
Important: You cannot include a hidden member in the seed list provided to sh.addShard() (page 268).
Example To add a shard on a replica set, specify the name of the replica set and the hostname of at least one member
of the replica set, as a seed. If you specify additional hostnames, all must be members of the same replica set.
The following example adds a replica set named repl0 and specifies one member of the replica set:
268
sh.addShard("repl0/mongodb3.example.net:27327")
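The host string combines the replica set name with a comma-separated seed list. A plain JavaScript sketch of how such a string breaks down (parseSeed is a hypothetical helper for illustration only; sh.addShard() (page 268) performs this parsing internally):

```javascript
// Decompose a replica-set seed string "setName/host1[:port][,host2...]".
// Assumes the replica-set form; a standalone host has no "/" separator.
function parseSeed(host) {
  const [setName, members] = host.split("/");
  return { setName, members: members.split(",") };
}

parseSeed("repl0/mongodb3.example.net:27327");
// { setName: "repl0", members: ["mongodb3.example.net:27327"] }
```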
sh.addShardTag()
On this page
Definition (page 269)
Example (page 269)
Definition
sh.addShardTag(shard, tag)
New in version 2.2.
Associates a shard with a tag or identifier. MongoDB uses these identifiers to direct chunks that fall within a
tagged range to specific shards. sh.addTagRange() (page 269) associates chunk ranges with tag ranges.
param string shard The name of the shard to which to give a specific tag.
param string tag The name of the tag to add to the shard.
Only issue sh.addShardTag() (page 269) when connected to a mongos (page 784) instance.
Example The following example adds three tags, NYC, LAX, and NRT, to three shards:
sh.addShardTag("shard0000", "NYC")
sh.addShardTag("shard0001", "LAX")
sh.addShardTag("shard0002", "NRT")
See also:
sh.addTagRange() (page 269) and sh.removeShardTag() (page 274).
sh.addTagRange()
On this page
Definition (page 269)
Behavior (page 270)
Example (page 270)
Definition
sh.addTagRange(namespace, minimum, maximum, tag)
New in version 2.2.
Attaches a range of shard key values to a shard tag created using the sh.addShardTag() (page 269) method.
sh.addTagRange() (page 269) takes the following arguments:
param string namespace The namespace of the sharded collection to tag.
269
param document minimum The minimum value of the shard key range to include in the
tag. The minimum is an inclusive match. Specify the minimum value in the form of
<fieldname>:<value>. This value must be of the same BSON type or types as the shard
key.
param document maximum The maximum value of the shard key range to include in the
tag. The maximum is an exclusive match. Specify the maximum value in the form of
<fieldname>:<value>. This value must be of the same BSON type or types as the shard
key.
param string tag The name of the tag to attach the range specified by the minimum and maximum
arguments to.
Use sh.addShardTag() (page 269) to ensure that the balancer migrates documents that exist within the
specified range to a specific shard or set of shards.
Only issue sh.addTagRange() (page 269) when connected to a mongos (page 784) instance.
Behavior
Bounds Shard ranges are always inclusive of the lower value and exclusive of the upper boundary.
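For a single-field shard key, the bound semantics can be illustrated with a small JavaScript sketch (inTagRange is a hypothetical helper; the actual comparison is performed by the balancer, not in the shell):

```javascript
// Lower bound inclusive, upper bound exclusive: the range covers [min, max).
function inTagRange(value, min, max) {
  return value >= min && value < max;
}

inTagRange(10, 10, 20); // true  -- the minimum is an inclusive match
inTagRange(20, 10, 20); // false -- the maximum is an exclusive match
```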
Dropped Collections If you add a tag range to a collection using sh.addTagRange() (page 269) and then later
drop the collection or its database, MongoDB does not remove the tag association. If you later create a new collection
with the same name, the old tag association will apply to the new collection.
Example Given a shard key of { state: 1, zip: 1 }, the following operation creates a tag range covering
zip codes in New York State:
sh.addTagRange( "exampledb.collection",
{ state: "NY", zip: MinKey },
{ state: "NY", zip: MaxKey },
"NY"
)
sh.removeTagRange()
On this page
Definition (page 270)
Example (page 271)
Definition
sh.removeTagRange(namespace, minimum, maximum, tag)
New in version 3.0.
Removes a range of shard key values from a shard tag created using the sh.addShardTag() (page 269)
method. sh.removeTagRange() (page 270) takes the following arguments:
param string namespace The namespace of the sharded collection to tag.
270
param document minimum The minimum value of the shard key range to remove from the tag. Specify the
minimum value in the form of <fieldname>:<value>. This value must be of the same BSON
type or types as the shard key.
param document maximum The maximum value of the shard key range from the tag. Specify the
maximum value in the form of <fieldname>:<value>. This value must be of the same
BSON type or types as the shard key.
param string tag The name of the tag attached to the range specified by the minimum and
maximum arguments.
Use sh.removeShardTag() (page 274) to ensure that unused or out of date ranges are removed and hence
chunks are balanced as required.
Only issue sh.removeTagRange() (page 270) when connected to a mongos (page 784) instance.
Example Given a shard key of { state: 1, zip: 1 }, the following operation removes an existing tag range
covering zip codes in New York State:
sh.removeTagRange( "exampledb.collection",
{ state: "NY", zip: MinKey },
{ state: "NY", zip: MaxKey },
"NY"
)
sh.disableBalancing()
On this page
Description (page 271)
Description
sh.disableBalancing(namespace)
Disables the balancer for the specified sharded collection. This does not affect the balancing of chunks for other
sharded collections in the same cluster.
The sh.disableBalancing() (page 271) method has the following parameter:
param string namespace The namespace of the collection.
On this page
Description (page 272)
271
Description
sh.enableBalancing(namespace)
Enables the balancer for the specified namespace of the sharded collection.
The sh.enableBalancing() (page 272) method has the following parameter:
param string namespace The namespace of the collection.
Important: sh.enableBalancing() (page 272) does not start balancing. Rather, it allows balancing of
this collection the next time the balancer runs.
On this page
Definition (page 272)
Definition
sh.enableSharding(database)
Enables sharding on the specified database. This does not automatically shard any collections but makes it
possible to begin sharding collections using sh.shardCollection() (page 276).
The sh.enableSharding() (page 272) method has the following parameter:
param string database The name of the database for which to enable sharding. Enclose the name in quotation marks.
See also:
sh.shardCollection() (page 276)
sh.getBalancerHost()
sh.getBalancerHost()
Returns String in form hostname:port
sh.getBalancerHost() (page 272) returns the name of the server that is running the balancer.
See also:
sh.enableBalancing() (page 272)
sh.disableBalancing() (page 271)
sh.getBalancerState() (page 273)
sh.isBalancerRunning() (page 273)
sh.setBalancerState() (page 275)
sh.startBalancer() (page 277)
sh.stopBalancer() (page 281)
sh.waitForBalancer() (page 282)
272
sh.getBalancerState()
Returns boolean
sh.getBalancerState() (page 273) returns true when the balancer is enabled and false if the balancer
is disabled. This does not reflect the current state of balancing operations: use sh.isBalancerRunning()
(page 273) to check the balancer's current state.
See also:
sh.enableBalancing() (page 272)
sh.disableBalancing() (page 271)
sh.getBalancerHost() (page 272)
sh.isBalancerRunning() (page 273)
sh.setBalancerState() (page 275)
sh.startBalancer() (page 277)
sh.stopBalancer() (page 281)
sh.waitForBalancer() (page 282)
sh.waitForBalancerOff() (page 282)
sh.help()
sh.help()
Returns a basic help text for all sharding related shell functions.
sh.isBalancerRunning()
sh.isBalancerRunning()
Returns boolean
Returns true if the balancer process is currently running and migrating chunks and false if the balancer process is
not running. Use sh.getBalancerState() (page 273) to determine if the balancer is enabled or disabled.
See also:
sh.enableBalancing() (page 272)
sh.disableBalancing() (page 271)
sh.getBalancerHost() (page 272)
sh.getBalancerState() (page 273)
sh.setBalancerState() (page 275)
sh.startBalancer() (page 277)
sh.stopBalancer() (page 281)
273
On this page
Definition (page 274)
Example (page 274)
Definition
sh.moveChunk(namespace, query, destination)
Moves the chunk that contains the document specified by the query to the destination shard.
sh.moveChunk() (page 274) provides a wrapper around the moveChunk (page 426) database command
and takes the following arguments:
param string namespace The namespace of the sharded collection that contains the chunk to migrate.
param document query An equality match on the shard key that selects the chunk to move.
param string destination The name of the shard to which to move the chunk.
Important: In most circumstances, allow the balancer to automatically migrate chunks, and avoid calling
sh.moveChunk() (page 274) directly.
See also:
moveChunk (page 426), sh.splitAt() (page 276), sh.splitFind() (page 277),
Example Given the people collection in the records database, the following operation finds the chunk that
contains the documents with the zipcode field set to 53187 and then moves that chunk to the shard named
shard0019:
sh.moveChunk("records.people", { zipcode: "53187" }, "shard0019")
sh.removeShardTag()
On this page
Definition (page 274)
Definition
sh.removeShardTag(shard, tag)
New in version 2.2.
Removes the association between a tag and a shard. Only issue sh.removeShardTag() (page 274) when
connected to a mongos (page 784) instance.
274
param string shard The name of the shard from which to remove a tag.
param string tag The name of the tag to remove from the shard.
See also:
sh.addShardTag() (page 269), sh.addTagRange() (page 269)
sh.setBalancerState()
On this page
Description (page 275)
Description
sh.setBalancerState(state)
Enables or disables the balancer. Use sh.getBalancerState() (page 273) to determine if the balancer is
currently enabled or disabled and sh.isBalancerRunning() (page 273) to check its current state.
The sh.setBalancerState() (page 275) method has the following parameter:
param boolean state Set this to true to enable the balancer and false to disable it.
See also:
sh.enableBalancing() (page 272)
sh.disableBalancing() (page 271)
sh.getBalancerHost() (page 272)
sh.getBalancerState() (page 273)
sh.isBalancerRunning() (page 273)
sh.startBalancer() (page 277)
sh.stopBalancer() (page 281)
sh.waitForBalancer() (page 282)
sh.waitForBalancerOff() (page 282)
sh.shardCollection()
On this page
275
Definition
sh.shardCollection(namespace, key, unique)
Shards a collection using the key as the shard key. sh.shardCollection() (page 276) takes the following arguments:
param string namespace The namespace of the collection to shard.
param document key A document that specifies the shard key to use to partition and distribute
objects among the shards. A shard key may be one field or multiple fields. A shard key with
multiple fields is called a compound shard key.
param boolean unique When true, ensures that the underlying index enforces a unique constraint.
Hashed shard keys do not support unique constraints.
New in version 2.4: Use the form {field: "hashed"} to create a hashed shard key. Hashed shard keys
may not be compound indexes.
Considerations MongoDB provides no method to deactivate sharding for a collection after calling
shardCollection (page 421). Additionally, after shardCollection (page 421), you cannot change shard
keys or modify the value of any field used in your shard key index.
Example Given the people collection in the records database, the following command shards the collection by
the zipcode field:
sh.shardCollection("records.people", { zipcode: 1} )
Additional Information See shardCollection (page 421) for additional options,
https://docs.mongodb.org/manual/sharding and https://docs.mongodb.org/manual/core/sharding-in
for an overview of sharding, https://docs.mongodb.org/manual/tutorial/deploy-shard-cluster
for a tutorial, and sharding-shard-key for choosing a shard key.
sh.splitAt()
On this page
Definition (page 276)
Consideration (page 277)
Behavior (page 277)
Definition
sh.splitAt(namespace, query)
Splits a chunk at the shard key value specified by the query.
The method takes the following arguments:
param string namespace The namespace (i.e. <database>.<collection>) of the sharded
collection that contains the chunk to split.
param document query A query document that specifies the shard key value at which to split the
chunk.
276
Consideration In most circumstances, you should leave chunk splitting to the automated processes within
MongoDB. However, when initially deploying a sharded cluster, it may be beneficial to manually pre-split an empty
collection using methods such as sh.splitAt() (page 276).
Behavior sh.splitAt() (page 276) splits the original chunk into two chunks. One chunk has a shard key range
that starts with the original lower bound (inclusive) and ends at the specified shard key value (exclusive). The other
chunk has a shard key range that starts with the specified shard key value (inclusive) as the lower bound and ends at
the original upper bound (exclusive).
To split a chunk at its median point instead, see sh.splitFind() (page 277).
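The resulting ranges can be modeled in a few lines of JavaScript. This is an illustration only, using a hypothetical splitChunkAt helper and a single numeric shard key; the actual split is performed server side:

```javascript
// A chunk covers [min, max); splitting at splitKey yields
// [min, splitKey) and [splitKey, max).
function splitChunkAt(chunk, splitKey) {
  return [
    { min: chunk.min, max: splitKey }, // original lower bound (inclusive) to split value (exclusive)
    { min: splitKey, max: chunk.max }, // split value (inclusive) to original upper bound (exclusive)
  ];
}

const [lower, upper] = splitChunkAt({ min: 0, max: 100 }, 53);
// lower covers [0, 53); upper covers [53, 100)
```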
sh.splitFind()
On this page
Definition (page 277)
Consideration (page 277)
Definition
sh.splitFind(namespace, query)
Splits the chunk that contains the shard key value specified by the query at the chunk's median point.
sh.splitFind() (page 277) creates two roughly equal chunks. To split a chunk at a specific point instead,
see sh.splitAt() (page 276).
The method takes the following arguments:
param string namespace The namespace (i.e. <database>.<collection>) of the sharded
collection that contains the chunk to split.
param document query A query document that specifies the shard key value that determines the
chunk to split.
Consideration In most circumstances, you should leave chunk splitting to the automated processes within
MongoDB. However, when initially deploying a sharded cluster, it may be beneficial to manually pre-split an empty
collection using methods such as sh.splitFind() (page 277).
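The median selection can be sketched as follows. medianSplitPoint is a hypothetical illustration over a sorted list of shard key values; the server determines the actual split point internally:

```javascript
// Pick the middle value of a sorted list of shard key values; keys below it
// fall into one chunk, the remaining keys into the other.
function medianSplitPoint(sortedKeys) {
  return sortedKeys[Math.floor(sortedKeys.length / 2)];
}

medianSplitPoint([1, 3, 5, 7, 9, 11]); // 7
```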
sh.startBalancer()
On this page
Definition (page 277)
Definition
sh.startBalancer(timeout, interval)
Enables the balancer in a sharded cluster and waits for balancing to initiate.
param integer timeout Milliseconds to wait.
param integer interval Milliseconds to sleep each cycle of waiting.
277
See also:
sh.enableBalancing() (page 272)
sh.disableBalancing() (page 271)
sh.getBalancerHost() (page 272)
sh.getBalancerState() (page 273)
sh.isBalancerRunning() (page 273)
sh.setBalancerState() (page 275)
sh.stopBalancer() (page 281)
sh.waitForBalancer() (page 282)
sh.waitForBalancerOff() (page 282)
sh.status()
On this page
Definition (page 278)
Output Examples (page 278)
Output Fields (page 280)
Definition
sh.status()
When run on a mongos (page 784) instance, prints a formatted report of the sharding configuration and the
information regarding existing chunks in a sharded cluster. The default behavior suppresses the detailed chunk
information if the total number of chunks is greater than or equal to 20.
The sh.status() (page 278) method has the following parameter:
param boolean verbose Optional. If true, the method displays details of the document distribution across chunks when you have 20 or more chunks.
See also:
db.printShardingStatus() (page 193)
Output Examples The Sharding Version (page 280) section displays information on the config database:
--- Sharding Status ---
sharding version: {
"_id" : <num>,
"minCompatibleVersion" : <num>,
"currentVersion" : <num>,
"clusterId" : <ObjectId>
}
The Shards (page 280) section lists information on the shard(s). For each shard, the section displays the name, host,
and the associated tags, if any.
278
shards:
{ "_id" : <shard name1>,
"host" : <string>,
"tags" : [ <string> ... ]
}
{ "_id" : <shard name2>,
"host" : <string>,
"tags" : [ <string> ... ]
}
...
New in version 3.0.0: The Balancer (page 280) section lists information about the state of the balancer. This provides
insight into current balancer operation and can be useful when troubleshooting an unbalanced sharded cluster.
balancer:
Currently enabled: yes
Currently running: yes
Balancer lock taken at Wed Dec 10 2014 12:00:16 GMT+1100 (AEDT) by
Pixl.local:27017:1418172757:16807:Balancer:282475249
Collections with active migrations:
test.t2 started at Wed Dec 10 2014 11:54:51 GMT+1100 (AEDT)
Failed balancer rounds in last 5 attempts: 1
Last reported error: tag ranges not valid for: test.t2
Time of Reported error: Wed Dec 10 2014 12:00:33 GMT+1100 (AEDT)
Migration Results for the last 24 hours:
96 : Success
15 : Failed with error 'ns not found, should be impossible', from
shard01 to shard02
The Databases (page 281) section lists information on the database(s). For each database, the section displays the
name, whether the database has sharding enabled, and the primary shard for the database.
databases:
{ "_id" : <dbname1>,
"partitioned" : <boolean>,
"primary" : <string>
}
{ "_id" : <dbname2>,
"partitioned" : <boolean>,
"primary" : <string>
}
...
The Sharded Collection (page 281) section provides information on the sharding details for sharded collection(s). For
each sharded collection, the section displays the shard key, the number of chunks per shard(s), the distribution of
documents across chunks 9 , and the tag information, if any, for shard key range(s).
<dbname>.<collection>
shard key: { <shard key> : <1 or hashed> }
chunks:
<shard name1> <number of chunks>
<shard name2> <number of chunks>
...
{ <shard key>: <min range1> } -->> { <shard key> : <max range1> } on : <shard name> <last modified timestamp>
{ <shard key>: <min range2> } -->> { <shard key> : <max range2> } on : <shard name> <last modified timestamp>
...
9 The sharded collection section, by default, displays the chunk information if the total number of chunks is less than 20. To display the
information when you have 20 or more chunks, call the sh.status() (page 278) methods with the verbose parameter set to true, i.e.
sh.status(true).
279
tag: <tag1>
...
Output Fields
Sharding Version
sh.status.sharding-version._id
The _id (page 280) is an identifier for the version details.
sh.status.sharding-version.minCompatibleVersion
The minCompatibleVersion (page 280) is the minimum compatible version of the config server.
sh.status.sharding-version.currentVersion
The currentVersion (page 280) is the current version of the config server.
sh.status.sharding-version.clusterId
The clusterId (page 280) is the identification for the sharded cluster.
Shards
sh.status.shards._id
The _id (page 280) displays the name of the shard.
sh.status.shards.host
The host (page 280) displays the host location of the shard.
sh.status.shards.tags
The tags (page 280) displays all the tags for the shard. The field only displays if the shard has tags.
Balancer New in version 3.0.0: sh.status() (page 278) added the balancer field.
sh.status.balancer.currently-enabled
currently-enabled (page 280) indicates if the balancer is currently enabled on the sharded cluster.
sh.status.balancer.currently-running
currently-running (page 280) indicates whether the balancer is currently running, and therefore currently
balancing the cluster.
If the balancer is running, currently-running (page 280) lists the process that holds the balancer lock,
and the date and time that the process obtained the lock.
If there is an active balancer lock, currently-running (page 280) also reports the state of the balancer.
sh.status.balancer.collections-with-active-migrations
collections-with-active-migrations (page 280) lists the names of any collections with active
migrations, and specifies when the migration began. If there are no active migrations, this field will not appear
in the sh.status() (page 278) output.
sh.status.balancer.failed-balancer-rounds-in-last-5-attempts
failed-balancer-rounds-in-last-5-attempts (page 280) displays the number of balancer
rounds that failed, from among the last five attempted rounds. A balancer round will fail when a chunk migration fails.
sh.status.balancer.last-reported-error
last-reported-error (page 280) lists the most recent balancer error message. If there have been no
errors, this field will not appear in the sh.status() (page 278) output.
sh.status.balancer.time-of-reported-error
time-of-reported-error (page 280) provides the date and time of the most recently-reported error.
280
sh.status.balancer.migration-results-for-the-last-24-hours
migration-results-for-the-last-24-hours (page 280) displays the number of migrations in
the last 24 hours, and the error messages from failed migrations. If there have been no recent migrations,
migration-results-for-the-last-24-hours (page 280) displays No recent migrations.
migration-results-for-the-last-24-hours (page 280) includes all migrations, including those
not initiated by the balancer.
Databases
sh.status.databases._id
The _id (page 281) displays the name of the database.
sh.status.databases.partitioned
The partitioned (page 281) displays whether the database has sharding enabled. If true, the database has
sharding enabled.
sh.status.databases.primary
The primary (page 281) displays the primary shard for the database.
Sharded Collection
sh.status.databases.shard-key
The shard-key (page 281) displays the shard key specification document.
sh.status.databases.chunks
The chunks (page 281) lists all the shards and the number of chunks that reside on each shard.
sh.status.databases.chunk-details
The chunk-details (page 281) lists the details of the chunks 1 :
The range of shard key values that define the chunk,
The shard where the chunk resides, and
The last modified timestamp for the chunk.
sh.status.databases.tag
The tag (page 281) lists the details of the tags associated with a range of shard key values.
sh.stopBalancer()
On this page
Definition (page 281)
Definition
sh.stopBalancer(timeout, interval)
Disables the balancer in a sharded cluster and waits for balancing to complete.
param integer timeout Milliseconds to wait.
param integer interval Milliseconds to sleep each cycle of waiting.
See also:
sh.enableBalancing() (page 272)
sh.disableBalancing() (page 271)
281
On this page
Definition (page 282)
Definition
sh.waitForBalancer(wait, timeout, interval)
Waits for a change in the state of the balancer. sh.waitForBalancer() (page 282) is an internal method,
which takes the following arguments:
param boolean wait Optional. Set to true to ensure the balancer is now active. The default is
false, which waits until balancing stops and becomes inactive.
param integer timeout Milliseconds to wait.
param integer interval Milliseconds to sleep.
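The timeout/interval pair describes a polling loop. A generic sketch of that pattern in plain JavaScript (waitFor is a hypothetical helper; the real method polls the balancer state in the config database):

```javascript
// Poll `condition` every `interval` ms until it holds or `timeout` ms elapse.
function waitFor(condition, timeout, interval) {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    if (condition()) return true;
    const sleepUntil = Date.now() + interval; // crude synchronous sleep
    while (Date.now() < sleepUntil) {}
  }
  return false; // timed out
}

let polls = 0;
waitFor(() => ++polls >= 3, 1000, 1); // true after a few polls
```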
sh.waitForBalancerOff()
On this page
Definition (page 282)
Definition
sh.waitForBalancerOff(timeout, interval)
Internal method that waits until the balancer is not running.
param integer timeout Milliseconds to wait.
param integer interval Milliseconds to sleep.
See also:
sh.enableBalancing() (page 272)
sh.disableBalancing() (page 271)
sh.getBalancerHost() (page 272)
sh.getBalancerState() (page 273)
sh.isBalancerRunning() (page 273)
282
On this page
Definition (page 283)
Definition
sh.waitForDLock(lockname, wait, timeout, interval)
Waits until the specified distributed lock changes state. sh.waitForDLock() (page 283) is an internal
method that takes the following arguments:
param string lockname The name of the distributed lock.
param boolean wait Optional. Set to true to ensure the balancer is now active. Set to false to
wait until balancing stops and becomes inactive.
param integer timeout Milliseconds to wait.
param integer interval Milliseconds to sleep in each waiting cycle.
sh.waitForPingChange()
On this page
Definition (page 283)
Definition
sh.waitForPingChange(activePings, timeout, interval)
sh.waitForPingChange() (page 283) waits for a change in ping state of one of the activePings, and
only returns when the specified ping changes state.
param array activePings An array of active pings from the mongos (page 881) collection.
param integer timeout Number of milliseconds to wait for a change in ping state.
param integer interval Number of milliseconds to sleep in each waiting cycle.
283
2.1.10 Subprocess
Subprocess Methods
Name Description
clearRawMongoProgramOutput() (page 284) For internal use.
rawMongoProgramOutput() (page 284) For internal use.
run() For internal use.
runMongoProgram() (page 284) For internal use.
runProgram() (page 284) For internal use.
startMongoProgram() For internal use.
stopMongoProgram() (page 285) For internal use.
stopMongoProgramByPid() (page 285) For internal use.
stopMongod() (page 285) For internal use.
waitMongoProgramOnPort() (page 285) For internal use.
waitProgram() (page 285) For internal use.
clearRawMongoProgramOutput()
clearRawMongoProgramOutput()
For internal use.
rawMongoProgramOutput()
rawMongoProgramOutput()
For internal use.
run()
run()
For internal use.
runMongoProgram()
runMongoProgram()
For internal use.
runProgram()
runProgram()
For internal use.
startMongoProgram()
_startMongoProgram()
For internal use.
284
stopMongoProgram()
stopMongoProgram()
For internal use.
stopMongoProgramByPid()
stopMongoProgramByPid()
For internal use.
stopMongod()
stopMongod()
For internal use.
waitMongoProgramOnPort()
waitMongoProgramOnPort()
For internal use.
waitProgram()
waitProgram()
For internal use.
2.1.11 Constructors
Object Constructors and Methods
Name Description
Date() (page 286) Creates a date object. By default creates a date object including the current date.
UUID() (page 287) Converts a 32-byte hexadecimal string to the UUID BSON subtype.
ObjectId.getTimestamp() (page 287) Returns the timestamp portion of an ObjectId.
285
Date()
On this page
Behavior (page 286)
Examples (page 286)
Date()
Returns a date either as a string or as a document-bson-type-date object.
Date() returns the current date as a string.
new Date() returns the current date as a document-bson-type-date object. The mongo (page 794) shell
wraps the document-bson-type-date object with the ISODate helper.
new Date("<YYYY-mm-dd>") returns the specified date string ("<YYYY-mm-dd>") as a
document-bson-type-date object. The mongo (page 794) shell wraps the document-bson-type-date
object with the ISODate helper.
Behavior Internally, document-bson-type-date objects are stored as a 64-bit integer representing the number of
milliseconds since the Unix epoch (Jan 1, 1970), which results in a representable date range of about 290 million
years into the past and future.
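Because the shell's Date is the standard JavaScript Date, the stored millisecond value is directly visible through getTime():

```javascript
const epoch = new Date(0);        // the Unix epoch, Jan 1, 1970 UTC
epoch.getTime();                  // 0
new Date(86400000).toISOString(); // "1970-01-02T00:00:00.000Z" -- one day in milliseconds
```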
Examples
Return Date as a String To return the date as a string, use the Date() method, as in the following example:
var myDateString = Date();
Return Date as Date Object The mongo (page 794) shell wraps objects of document-bson-type-date type with the
ISODate helper; however, the objects remain of type document-bson-type-date.
The following example uses new Date() to return Date objects.
var myDate = new Date();
See also:
BSON Date, mongo Shell Date
UUID()
On this page
Definition (page 287)
Example (page 287)
286
Definition
UUID(<string>)
Generates a BSON UUID object.
param string hex Specify a 32-byte hexadecimal string to convert to the UUID BSON subtype.
Returns A BSON UUID object.
Example Create a 32 byte hexadecimal string:
var myuuid = '0123456789abcdeffedcba9876543210'
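The string must consist of 32 hexadecimal characters (i.e. 16 bytes of data). A sketch of that format check in plain JavaScript (isUuidHex is a hypothetical helper; UUID() itself performs the actual conversion in the shell):

```javascript
// A valid input is exactly 32 hex characters.
function isUuidHex(s) {
  return /^[0-9a-fA-F]{32}$/.test(s);
}

isUuidHex("0123456789abcdeffedcba9876543210"); // true
isUuidHex("not-a-uuid");                       // false
```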
ObjectId.getTimestamp()
ObjectId.getTimestamp()
Returns The timestamp portion of the ObjectId() object as a Date.
In the following example, call the getTimestamp() (page 287) method on an ObjectId (e.g.
ObjectId("507c7f79bcf86cd7994f6c0e")):
ObjectId("507c7f79bcf86cd7994f6c0e").getTimestamp()
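The timestamp lives in the ObjectId's first four bytes, stored as seconds since the Unix epoch, and that is all getTimestamp() (page 287) returns. A plain JavaScript sketch of the extraction (objectIdTimestamp is a hypothetical helper for illustration):

```javascript
// The first 8 hex characters of an ObjectId encode seconds since the epoch.
function objectIdTimestamp(hex) {
  const seconds = parseInt(hex.substring(0, 8), 16);
  return new Date(seconds * 1000);
}

objectIdTimestamp("507c7f79bcf86cd7994f6c0e"); // a Date in October 2012
```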
ObjectId.toString()
ObjectId.toString()
Returns The string representation of the ObjectId() object. This value has the format of
ObjectId(...).
Changed in version 2.2: In previous versions, ObjectId.toString() (page 287) returned the value of the
ObjectId as a hexadecimal string.
In the following example, call the toString() (page 287) method on an ObjectId (e.g.
ObjectId("507c7f79bcf86cd7994f6c0e")):
ObjectId("507c7f79bcf86cd7994f6c0e").toString()
You can confirm the type of this object using the following operation:
typeof ObjectId("507c7f79bcf86cd7994f6c0e").toString()
ObjectId.valueOf()
ObjectId.valueOf()
Returns The value of the ObjectId() object as a lowercase hexadecimal string. This value is the str
attribute of the ObjectId() object.
Changed in version 2.2: In previous versions, ObjectId.valueOf() (page 288) returned the ObjectId() object.
In the following example, call the valueOf() (page 288) method on an ObjectId (e.g.
ObjectId("507c7f79bcf86cd7994f6c0e")):
ObjectId("507c7f79bcf86cd7994f6c0e").valueOf()
You can confirm the type of this object using the following operation:
typeof ObjectId("507c7f79bcf86cd7994f6c0e").valueOf()
WriteResult()
On this page
Definition (page 288)
Properties (page 288)
Definition
WriteResult()
A wrapper that contains the result status of the mongo (page 794) shell write methods.
See
db.collection.insert() (page 78), db.collection.update() (page 116), db.collection.remove() (page 100), and db.collection.save() (page 104).
WriteResult.nUpserted
The number of documents inserted by an upsert (page 118).
WriteResult._id
The _id of the document inserted by an upsert. Returned only if an upsert results in an insert.
WriteResult.nRemoved
The number of documents removed.
WriteResult.writeError
A document that contains information regarding any error, excluding write concern errors, encountered during
the write operation.
WriteResult.writeError.code
An integer value identifying the error.
WriteResult.writeError.errmsg
A description of the error.
WriteResult.writeConcernError
A document that contains information regarding any write concern errors encountered during the write operation.
WriteResult.writeConcernError.code
An integer value identifying the write concern error.
WriteResult.writeConcernError.errInfo
A document identifying the write concern setting related to the error.
WriteResult.writeConcernError.errmsg
A description of the error.
See also:
WriteResult.hasWriteError() (page 289), WriteResult.hasWriteConcernError() (page 290)
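A sketch of inspecting a WriteResult-shaped document outside the shell (the result object here is hand-built for illustration; field names follow the properties above):

```javascript
// Hand-built object mirroring the documented WriteResult fields.
const result = {
  nUpserted: 1,
  _id: "507c7f79bcf86cd7994f6c0e",
  writeError: null,
  writeConcernError: null
};

// The checks performed by hasWriteError() and hasWriteConcernError():
const hasWriteError = result.writeError != null;
const hasWriteConcernError = result.writeConcernError != null;
console.log(hasWriteError, hasWriteConcernError); // false false
```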
WriteResult.hasWriteError()
On this page
Definition (page 289)
Definition
WriteResult.hasWriteError()
Returns true if the result of a mongo (page 794) shell write method has WriteResult.writeError
(page 289). Otherwise, the method returns false.
See also:
WriteResult() (page 288)
WriteResult.hasWriteConcernError()
On this page
Definition (page 290)
Definition
WriteResult.hasWriteConcernError()
Returns true if the result of a mongo (page 794) shell write method has WriteResult.writeConcernError (page 289). Otherwise, the method returns false.
See also:
WriteResult() (page 288)
BulkWriteResult()
On this page
Properties (page 290)
New in version 2.6.
A wrapper that contains the results of the Bulk.execute() (page 222) method.
Properties The BulkWriteResult (page 290) has the following properties:
BulkWriteResult.nInserted
The number of documents inserted using the Bulk.insert() (page 213) method. For documents inserted
through operations with the Bulk.find.upsert() (page 220) option, see the nUpserted (page 290) field
instead.
BulkWriteResult.nMatched
The number of existing documents selected for update or replacement. If the update/replacement operation
results in no change to an existing document, e.g. $set (page 592) expression updates the value to the current
value, nMatched (page 290) can be greater than nModified (page 290).
BulkWriteResult.nModified
The number of existing documents updated or replaced. If the update/replacement operation results in no change
to an existing document, such as setting the value of the field to its current value, nModified (page 290) can
be less than nMatched (page 290). Inserted documents do not affect the number of nModified (page 290);
refer to the nInserted (page 290) and nUpserted (page 290) fields instead.
BulkWriteResult.nRemoved
The number of documents removed.
BulkWriteResult.nUpserted
The number of documents inserted through operations with the Bulk.find.upsert() (page 220) option.
BulkWriteResult.upserted
An array of documents that contains information for each document inserted through operations with the
Bulk.find.upsert() (page 220) option.
Each document contains the following information:
BulkWriteResult.upserted.index
An integer that identifies the operation in the bulk operations list, which uses a zero-based index.
BulkWriteResult.upserted._id
The _id value of the inserted document.
BulkWriteResult.writeErrors
An array of documents that contains information regarding any error, unrelated to write concerns, encountered
during the update operation. The writeErrors (page 291) array contains an error document for each write
operation that errors.
Each error document contains the following fields:
BulkWriteResult.writeErrors.index
An integer that identifies the write operation in the bulk operations list, which uses a zero-based index. See
also Bulk.getOperations() (page 225).
BulkWriteResult.writeErrors.code
An integer value identifying the error.
BulkWriteResult.writeErrors.errmsg
A description of the error.
BulkWriteResult.writeErrors.op
A document identifying the operation that failed. For instance, an update/replace operation error will return
a document specifying the query, the update, the multi and the upsert options; an insert operation will
return the document the operation tried to insert.
BulkWriteResult.writeConcernError
A document that describes an error related to write concern and contains the following fields:
BulkWriteResult.writeConcernError.code
An integer value identifying the cause of the write concern error.
BulkWriteResult.writeConcernError.errInfo
A document identifying the write concern setting related to the error.
BulkWriteResult.writeConcernError.errmsg
A description of the cause of the write concern error.
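A sketch of reporting per-operation failures from a BulkWriteResult-shaped document (the result value is hand-built for illustration):

```javascript
// Hand-built object mirroring the documented BulkWriteResult fields.
const bulkResult = {
  nInserted: 2,
  nUpserted: 1,
  writeErrors: [
    { index: 3, code: 11000, errmsg: "E11000 duplicate key error",
      op: { _id: 4, item: "abc" } }
  ]
};

// Report each failed operation by its zero-based position in the bulk list.
const failures = bulkResult.writeErrors.map(
  e => `op ${e.index} failed (${e.code}): ${e.errmsg}`
);
console.log(failures); // [ "op 3 failed (11000): E11000 duplicate key error" ]
```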
2.1.12 Connection
Connection Methods
Name Description
Mongo.getDB() (page 292) Returns a database object.
Mongo.getReadPrefMode() (page 292) Returns the current read preference mode for the MongoDB connection.
Mongo.getReadPrefTagSet() (page 293) Returns the read preference tag set for the MongoDB connection.
Mongo.setReadPref() (page 293) Sets the read preference for the MongoDB connection.
Mongo.setSlaveOk() (page 294) Allows operations on the current connection to read from secondary members.
Mongo() (page 294) Creates a new connection object.
connect() Connects to a MongoDB instance and to a specified database on that instance.
Mongo.getDB()
On this page
Description (page 292)
Example (page 292)
Description
Mongo.getDB(<database>)
Provides access to database objects from the mongo (page 794) shell or from a JavaScript file.
The Mongo.getDB() (page 292) method has the following parameter:
param string database The name of the database to access.
Example The following example instantiates a new connection to the MongoDB instance running on the localhost
interface and returns a reference to "myDatabase":
db = new Mongo().getDB("myDatabase");
See also:
Mongo() (page 294) and connect() (page 294)
Mongo.getReadPrefMode()
Mongo.getReadPrefMode()
Returns The current read preference mode for the Mongo() (page 187) connection object.
See https://docs.mongodb.org/manual/core/read-preference for an introduction to read
preferences in MongoDB. Use getReadPrefMode() (page 292) to return the current read preference mode,
as in the following example:
db.getMongo().getReadPrefMode()
Use the following operation to return and print the current read preference mode:
print(db.getMongo().getReadPrefMode());
This operation will return one of the following read preference modes:
primary
primaryPreferred
secondary
secondaryPreferred
nearest
See also:
https://docs.mongodb.org/manual/core/read-preference,
setReadPref() (page 293), and getReadPrefTagSet() (page 293).
Mongo.getReadPrefTagSet()
Mongo.getReadPrefTagSet()
Returns The current read preference tag set for the Mongo() (page 187) connection object.
See https://docs.mongodb.org/manual/core/read-preference for an introduction to read
preferences and tag sets in MongoDB. Use getReadPrefTagSet() (page 293) to return the current read
preference tag set, as in the following example:
db.getMongo().getReadPrefTagSet()
Use the following operation to return and print the current read preference tag set:
printjson(db.getMongo().getReadPrefTagSet());
See also:
https://docs.mongodb.org/manual/core/read-preference,
setReadPref() (page 293), and getReadPrefMode() (page 292).
Mongo.setReadPref()
On this page
Definition (page 293)
Examples (page 293)
Definition
Mongo.setReadPref(mode, tagSet)
Call the setReadPref() (page 293) method on a Mongo (page 187) connection object to control how the
client will route all queries to members of the replica set.
param string mode One of the following read preference modes: primary, primaryPreferred, secondary, secondaryPreferred, or nearest.
param array tagSet Optional. A tag set used to specify custom read preference modes. For details,
see replica-set-read-preference-tag-sets.
Examples To set a read preference mode in the mongo (page 794) shell, use the following operation:
db.getMongo().setReadPref('primaryPreferred')
To set a read preference that uses a tag set, specify an array of tag sets as the second argument to
Mongo.setReadPref() (page 293), as in the following:
db.getMongo().setReadPref('primaryPreferred', [ { "dc": "east" } ] )
You can specify multiple tag sets, in order of preference, as in the following:
db.getMongo().setReadPref('secondaryPreferred',
[ { "dc": "east", "use": "production" },
{ "dc": "east", "use": "reporting" },
{ "dc": "east" },
{}
] )
If the replica set cannot satisfy the first tag set, the client will attempt to use the second read preference. Each tag set
can contain zero or more field/value tag pairs, with an empty document acting as a wildcard which matches a replica
set member with any tag set or no tag set.
Note: You must call Mongo.setReadPref() (page 293) on the connection object before retrieving documents
using that connection to use that read preference.
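The tag-set matching described above can be sketched in plain JavaScript (the member documents are hypothetical; in practice the driver performs this selection internally):

```javascript
// A member matches a tag set when every field/value pair in the tag set
// appears in the member's tags; the empty document {} matches any member.
function matchesTagSet(memberTags, tagSet) {
  return Object.keys(tagSet).every(k => memberTags[k] === tagSet[k]);
}

const members = [
  { host: "m1", tags: { dc: "east", use: "reporting" } },
  { host: "m2", tags: { dc: "west" } }
];
const tagSets = [ { dc: "east", use: "production" }, { dc: "east" }, {} ];

// Try each tag set in order of preference; stop at the first that matches.
let eligible = [];
for (const ts of tagSets) {
  eligible = members.filter(m => matchesTagSet(m.tags, ts));
  if (eligible.length) break;
}
console.log(eligible.map(m => m.host)); // [ "m1" ]
```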
Mongo.setSlaveOk()
Mongo.setSlaveOk()
For the current session, this command permits read operations from non-master (i.e. slave or secondary) instances. Practically, use this method in the following form:
db.getMongo().setSlaveOk()
Indicates that eventually consistent read operations are acceptable for the current application. This function
provides the same functionality as rs.slaveOk() (page 261).
See the readPref() (page 151) method for more fine-grained control over read preference in the
mongo (page 794) shell.
Mongo()
On this page
Description (page 294)
Instantiation Options (page 294)
Description
Mongo(host)
JavaScript constructor to instantiate a database connection from the mongo (page 794) shell or from a JavaScript
file.
The Mongo() (page 294) method has the following parameter:
param string host Optional. The host, either in the form of <host> or <host><:port>.
Instantiation Options
Use the constructor without a parameter to instantiate a connection to the localhost interface on the default port.
Pass the <host> parameter to the constructor to instantiate a connection to the <host> and the default port.
Pass the <host><:port> parameter to the constructor to instantiate a connection to the <host> and the <port>.
See also:
Mongo.getDB() (page 292) and db.getMongo() (page 187).
connect()
On this page
Description (page 295)
Example (page 295)
Description
connect(url, user, password)
Creates a connection to a MongoDB instance and returns the reference to the database. However, in most cases,
use the Mongo() (page 294) object and its getDB() (page 292) method instead.
param string url Specifies the connection string. You can specify either:
<hostname>:<port>/<database>
<hostname>/<database>
<database>
param string user Optional. Specifies an existing username with access privileges for this database.
If user is specified, you must include the password parameter as well.
param string password Optional unless the user parameter is specified. Specifies the password
for the user.
Example The following example instantiates a new connection to the MongoDB instance running on the localhost
interface and returns a reference to myDatabase:
db = connect("localhost:27017/myDatabase")
See also:
Mongo() (page 294), Mongo.getDB() (page 292), db.auth() (page 228)
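The url forms accepted above can be sketched as a small parser (illustrative only; the shell's own parsing handles additional cases):

```javascript
// Parse the three accepted forms: host:port/db, host/db, or bare db.
function parseConnectString(url) {
  if (!url.includes("/")) return { host: "localhost", port: 27017, db: url };
  const [hostPart, db] = url.split("/");
  const [host, port] = hostPart.split(":");
  return { host, port: port ? Number(port) : 27017, db };
}
console.log(parseConnectString("localhost:27017/myDatabase"));
// { host: "localhost", port: 27017, db: "myDatabase" }
```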
2.1.13 Native
Native Methods
Name Description
cat() Returns the contents of the specified file.
version() Returns the current version of the mongo (page 794) shell instance.
cd() Changes the current working directory to the specified path.
sleep() Suspends the mongo (page 794) shell for a given period of time.
copyDbpath() (page 297) Copies a local dbPath (page 907). For internal use.
resetDbpath() (page 298) Removes a local dbPath (page 907). For internal use.
fuzzFile() (page 298) For internal use to support testing.
getHostName() (page 298) Returns the hostname of the system running the mongo (page 794) shell.
getMemInfo() (page 298) Returns a document that reports the amount of memory used by the shell.
hostname() Returns the hostname of the system running the shell.
_isWindows() (page 298) Returns true if the shell runs on a Windows system; false if a Unix or Linux system.
listFiles() (page 299) Returns an array of documents that give the name and size of each object in the directory.
load() Loads and runs a JavaScript file in the shell.
ls() Returns a list of the files in the current directory.
md5sumFile() (page 300) The md5 hash of the specified file.
mkdir() Creates a directory at the specified path.
pwd() Returns the current directory.
quit() Exits the current shell session.
_rand() (page 301) Returns a random number between 0 and 1.
removeFile() (page 301) Removes the specified file from the local file system.
setVerboseShell() (page 301) Configures the mongo (page 794) shell to report operation timing.
_srand() (page 302) For internal use.
cat()
On this page
Definition (page 296)
Definition
cat(filename)
Returns the contents of the specified file. The method returns with output relative to the current shell session
and does not impact the server.
param string filename Specify a path and file name on the local file system.
version()
version()
Returns The version of the mongo (page 794) shell as a string.
Changed in version 2.4: In previous versions of the shell, version() would print the version instead of
returning a string.
cd()
On this page
Definition (page 297)
Definition
cd(path)
param string path A path on the file system local to the mongo (page 794) shell context.
cd() changes the directory context of the mongo (page 794) shell and has no effect on the MongoDB server.
sleep()
On this page
Definition (page 297)
Example (page 297)
Definition
sleep(ms)
param integer ms A duration in milliseconds.
sleep() suspends a JavaScript execution context for a specified number of milliseconds.
Example Consider a low-priority bulk data import script. To avoid impacting other processes, you may suspend the
shell after inserting each document, distributing the cost of insertion over a longer period of time.
The following example mongo (page 794) script will load a JSON file containing an array of documents, and save one
element every 100 milliseconds.
JSON.parse(cat('users.json')).forEach(function(user) {
db.users.save(user);
sleep(100);
});
copyDbpath()
copyDbpath()
For internal use.
resetDbpath()
resetDbpath()
For internal use.
fuzzFile()
On this page
Description (page 298)
Description
fuzzFile(filename)
For internal use.
param string filename A filename or path to a local file.
getHostName()
getHostName()
Returns The hostname of the system running the mongo (page 794) shell process.
getMemInfo()
getMemInfo()
Returns a document with two fields that report the amount of memory used by the JavaScript shell process. The
fields returned are resident and virtual.
hostname()
hostname()
Returns The hostname of the system running the mongo (page 794) shell process.
_isWindows()
_isWindows()
Returns boolean.
Returns true if the mongo (page 794) shell is running on a system that is Windows, or false if the shell is
running on a Unix or Linux systems.
listFiles()
listFiles()
Returns an array, containing one document per object in the directory. This function operates in the context of
the mongo (page 794) shell. The fields included in the documents are:
name
A string which contains the pathname of the object.
baseName
A string which contains the name of the object.
isDirectory
A boolean to indicate whether the object is a directory.
size
The size of the object in bytes. This field is only present for files.
load()
On this page
Definition (page 299)
Example (page 299)
Definition
load(filename)
Loads and runs a JavaScript file into the current shell environment.
The load() method has the following parameter:
param string filename Specifies the path of a JavaScript file to execute.
Specify filenames with relative or absolute paths. When using relative path names, confirm the current directory using the pwd() method.
After executing a file with load(), you may reference any functions or variables defined in the file from the mongo (page 794) shell environment.
Example Consider the following examples of the load() method:
load("scripts/myjstest.js")
load("/data/db/scripts/myjstest.js")
ls()
ls()
Returns a list of the files in the current directory.
This function returns with output relative to the current shell session, and does not impact the server.
md5sumFile()
On this page
Description (page 300)
Description
md5sumFile(filename)
Returns a md5 hash of the specified file.
The md5sumFile() (page 300) method has the following parameter:
param string filename A file name.
Note: The specified filename must refer to a file located on the system running the mongo (page 794) shell.
mkdir()
On this page
Description (page 300)
Description
mkdir(path)
Creates a directory at the specified path. This method creates the entire path specified if the enclosing directory or directories do not already exist.
This method is equivalent to mkdir -p with BSD or GNU utilities.
The mkdir() method has the following parameter:
param string path A path on the local filesystem.
pwd()
pwd()
Returns the current directory.
This function returns with output relative to the current shell session, and does not impact the server.
quit()
quit()
Exits the current shell session.
_rand()
_rand()
Returns A random number between 0 and 1.
This function provides functionality similar to the Math.random() function from the standard library.
removeFile()
On this page
Description (page 301)
Description
removeFile(filename)
Removes the specified file from the local file system.
The removeFile() (page 301) method has the following parameter:
param string filename A filename or path to a local file.
setVerboseShell()
On this page
Example (page 301)
setVerboseShell()
The setVerboseShell() (page 301) method configures the mongo (page 794) shell to print the duration
of each operation.
setVerboseShell() (page 301) has the form:
setVerboseShell(true)
setVerboseShell() (page 301) takes one boolean parameter. Specify true or leave the parameter blank
to activate the verbose shell. Specify false to deactivate.
Example The following example demonstrates the behavior of the verbose shell:
1. From the mongo (page 794) shell, set verbose shell to true:
setVerboseShell(true)
2. After running an operation, in addition to returning the results, the mongo (page 794) shell now displays information about the duration of the operation:
{ "_id" : "11377", "count" : 1 }
{ "_id" : "11368", "count" : 1 }
{ "_id" : "11101", "count" : 2 }
{ "_id" : "11106", "count" : 3 }
{ "_id" : "11103", "count" : 1 }
Fetched 5 record(s) in 0ms
_srand()
_srand()
For internal use.
2.2 Database Commands
All command documentation outlined below describes a command and its available parameters and provides a document template or prototype for each command. Some command documentation also includes the relevant mongo (page 794) shell helpers.
Aggregation Commands
Name Description
aggregate (page 302) Performs aggregation tasks such as group using the aggregation framework.
count (page 306) Counts the number of documents in a collection.
distinct (page 309) Displays the distinct values found for a specified key in a collection.
group (page 312) Groups documents in a collection by the specified key and performs simple aggregation.
mapReduce (page 316) Performs map-reduce aggregation for large data sets.
aggregate
On this page
aggregate
Performs an aggregation operation using the aggregation pipeline (page 622). The pipeline allows users to process data from a collection with a sequence of stage-based manipulations.
Changed in version 3.2.
The command has the following syntax:
{
aggregate: "<collection>",
pipeline: [ <stage>, <...> ],
explain: <boolean>,
allowDiskUse: <boolean>,
cursor: <document>,
bypassDocumentValidation: <boolean>,
readConcern: <document>
}
The aggregate (page 302) command takes the following fields as arguments:
field string aggregate The name of the collection to use as the input for the aggregation pipeline.
field array pipeline An array of aggregation pipeline stages (page 622) that process and transform
the document stream as part of the aggregation pipeline.
field boolean explain Optional. Specifies to return the information on the processing of the pipeline.
New in version 2.6.
field boolean allowDiskUse Optional. Enables writing to temporary files. When set to true, aggregation stages can write data to the _tmp subdirectory in the dbPath (page 907) directory.
New in version 2.6.
field document cursor Optional. Specify a document that contains options that control the creation
of the cursor object.
New in version 2.6.
field boolean bypassDocumentValidation Optional. Available only if you specify the $out (page 648) aggregation operator.
Enables aggregate (page 302) to bypass document validation during the operation. This lets you insert documents that do not meet the validation requirements.
New in version 3.2.
field document readConcern Optional. Specifies the read concern.
To use a read concern level of "majority", you must use the WiredTiger storage engine
and start the mongod (page 762) instances with the --enableMajorityReadConcern
(page 773) command line option (or the replication.enableMajorityReadConcern
(page 914) setting if using a configuration file).
Only replica sets using protocol version 1 support "majority" read concern. Replica
sets running protocol version 0 do not support "majority" read concern.
To use a https://docs.mongodb.org/manual/reference/read-concern level
of "majority", you cannot include the $out (page 648) stage.
New in version 3.2.
Changed in version 2.6: aggregation pipeline (page 622) introduces the $out (page 648) operator to allow
aggregate (page 302) command to store results to a collection.
The following example performs an aggregate (page 302) operation on the articles collection to calculate the
count of each distinct element in the tags array that appears in the collection.
db.runCommand(
{ aggregate: "articles",
pipeline: [
{ $project: { tags: 1 } },
{ $unwind: "$tags" },
{ $group: {
_id: "$tags",
count: { $sum : 1 }
}
}
]
}
)
In the mongo (page 794) shell, this operation can use the aggregate() (page 20) helper as in the following:
db.articles.aggregate(
[
{ $project: { tags: 1 } },
{ $unwind: "$tags" },
{ $group: {
_id: "$tags",
count: { $sum : 1 }
}
}
]
)
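The effect of this pipeline can be sketched in plain JavaScript over an illustrative articles array (the documents are hypothetical):

```javascript
// Illustrative documents standing in for the articles collection.
const articles = [
  { _id: 1, tags: [ "mongodb", "database" ] },
  { _id: 2, tags: [ "database" ] }
];

// $project: { tags: 1 } keeps only tags; $unwind emits one document
// per array element.
const unwound = articles.flatMap(a => a.tags.map(t => ({ tags: t })));

// $group: { _id: "$tags", count: { $sum: 1 } } tallies each distinct tag.
const counts = {};
for (const doc of unwound) {
  counts[doc.tags] = (counts[doc.tags] || 0) + 1;
}
console.log(counts); // { mongodb: 1, database: 2 }
```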
Note: In 2.6 and later, the aggregate() (page 20) helper always returns a cursor.
Changed in version 2.4: If an error occurs, the aggregate() (page 20) helper throws an exception. In previous versions, the helper returned a document with the error message and code, and an ok status field not equal to 1, the same as the aggregate (page 302) command.
Return Information on the Aggregation Operation The following aggregation operation sets the optional field
explain to true to return information about the aggregation operation.
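Such a command can be sketched as a command document (the collection name and pipeline here are illustrative):

```javascript
// Hypothetical: the same pipeline re-issued with explain: true returns
// information about how the pipeline would be processed, not its results.
const explainCommand = {
  aggregate: "orders",                        // illustrative collection
  pipeline: [ { $match: { status: "A" } } ],
  explain: true
};
console.log(explainCommand.explain); // true
```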
Note: The intended readers of the explain output document are humans, and not machines, and the output format
is subject to change between releases.
See also:
db.collection.aggregate() (page 20) method
Aggregate Data using External Sort Aggregation pipeline stages have a maximum memory use limit. To handle large datasets, set the allowDiskUse option to true to enable writing data to temporary files, as in the following example:
db.runCommand(
{ aggregate: "stocks",
pipeline: [
{ $project : { cusip: 1, date: 1, price: 1, _id: 0 } },
{ $sort : { cusip : 1, date: 1 } }
],
allowDiskUse: true
}
)
See also:
db.collection.aggregate() (page 20)
Aggregate Command Returns a Cursor
Note: Using the aggregate (page 302) command to return a cursor is a low-level operation, intended for authors
of drivers. Most users should use the db.collection.aggregate() (page 20) helper provided in the mongo
(page 794) shell or in their driver. In 2.6 and later, the aggregate() (page 20) helper always returns a cursor.
The following command returns a document that contains results with which to instantiate a cursor object.
db.runCommand(
{ aggregate: "records",
pipeline: [
{ $project: { name: 1, email: 1, _id: 0 } },
{ $sort: { name: 1 } }
],
cursor: { }
}
)
To specify an initial batch size, specify the batchSize in the cursor field, as in the following example:
db.runCommand(
{ aggregate: "records",
pipeline: [
{ $project: { name: 1, email: 1, _id: 0 } },
{ $sort: { name: 1 } }
],
cursor: { batchSize: 0 }
}
)
The {batchSize: 0 } document specifies the size of the initial batch size only. Specify subsequent batch sizes
to OP_GET_MORE operations as with other MongoDB cursors. A batchSize of 0 means an empty first batch and
is useful if you want to quickly get back a cursor or failure message, without doing significant server-side work.
Override Default readConcern The following operation on a replica set specifies a
https://docs.mongodb.org/manual/reference/read-concern of "majority" to read the
most recent copy of the data confirmed as having been written to a majority of the nodes.
Note:
To use a read concern level of "majority", you must use the WiredTiger storage engine and start the mongod
(page 762) instances with the --enableMajorityReadConcern (page 773) command line option (or the
replication.enableMajorityReadConcern (page 914) setting if using a configuration file).
Only replica sets using protocol version 1 support "majority" read concern. Replica sets running
protocol version 0 do not support "majority" read concern.
To use a https://docs.mongodb.org/manual/reference/read-concern level of "majority", you cannot include the $out (page 648) stage.
Regardless of the read concern level, the most recent data on a node may not reflect the most recent version of
the data in the system.
The getMore (page 353) command uses the readConcern level specified in the originating aggregate
(page 302) command.
db.runCommand(
{
aggregate: "orders",
pipeline: [ { $match: { status: "A" } } ],
readConcern: { level: "majority" }
}
)
See also:
db.collection.aggregate() (page 20)
count
On this page
Definition
count
Counts the number of documents in a collection. Returns a document that contains this count as well as the command status.
count (page 306) has the following form:
{
count: <collection-name>,
query: <document>,
limit: <integer>,
skip: <integer>,
hint: <hint>,
readConcern: <document>
}
To get a count of documents that match a query condition, include the $match (page 627) stage as well:
db.collection.aggregate(
[
{ $match: <query condition> },
{ $group: { _id: null, count: { $sum: 1 } } }
]
)
In the result, the n, which represents the count, is 26, and the command status ok is 1:
{ "n" : 26, "ok" : 1 }
Count Documents That Match a Query The following operation returns a count of the documents in the orders
collection where the value of the ord_dt field is greater than Date(01/01/2012):
db.runCommand( { count:'orders',
query: { ord_dt: { $gt: new Date('01/01/2012') } }
} )
In the result, the n, which represents the count, is 13 and the command status ok is 1:
{ "n" : 13, "ok" : 1 }
Skip Documents in Count The following operation returns a count of the documents in the orders collection
where the value of the ord_dt field is greater than Date(01/01/2012) and skip the first 10 matching documents:
db.runCommand( { count:'orders',
query: { ord_dt: { $gt: new Date('01/01/2012') } },
skip: 10 } )
In the result, the n, which represents the count, is 3 and the command status ok is 1:
{ "n" : 3, "ok" : 1 }
Specify the Index to Use The following operation uses the index { status: 1 } to return a count of the
documents in the orders collection where the value of the ord_dt field is greater than Date(01/01/2012)
and the status field is equal to "D":
db.runCommand(
{
count:'orders',
query: {
ord_dt: { $gt: new Date('01/01/2012') },
status: "D"
},
hint: { status: 1 }
}
)
In the result, the n, which represents the count, is 1 and the command status ok is 1:
{ "n" : 1, "ok" : 1 }
distinct
On this page
Definition
distinct
Finds the distinct values for a specified field across a single collection. distinct (page 309) returns a document that contains an array of the distinct values. The return document also contains an embedded document with query statistics and the query plan.
The command takes the following form:
{ distinct: "<collection>", key: "<field>", query: <query> }
field document readConcern Optional. Specifies the read concern.
To use a read concern level of "majority", you must use the WiredTiger storage engine and start the mongod (page 762) instances with the --enableMajorityReadConcern (page 773) command line option (or the replication.enableMajorityReadConcern (page 914) setting if using a configuration file).
Only replica sets using protocol version 1 support "majority" read concern. Replica sets running protocol version 0 do not support "majority" read concern.
New in version 3.2.
MongoDB also provides the shell wrapper method db.collection.distinct() (page 43) for the
distinct (page 309) command. Additionally, many MongoDB drivers also provide a wrapper method. Refer
to the specific driver documentation.
Behavior
Array Fields If the value of the specified field is an array, distinct (page 309) considers each element of the
array as a separate value.
For instance, if a field has as its value [ 1, [1], 1 ], then distinct (page 309) considers 1, [1], and 1 as
separate values.
For an example, see Return Distinct Values for an Array Field (page 311).
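The array handling above can be sketched in plain JavaScript (the helper is illustrative; the server implements this internally):

```javascript
// distinct flattens one level: each array element counts as its own value.
function distinctValues(docs, field) {
  const seen = new Set();
  for (const d of docs) {
    const v = d[field];
    if (Array.isArray(v)) v.forEach(e => seen.add(JSON.stringify(e)));
    else if (v !== undefined) seen.add(JSON.stringify(v));
  }
  return [...seen].map(s => JSON.parse(s));
}

const docs = [ { a: [ 1, [1], 1 ] } ];
console.log(distinctValues(docs, "a")); // [ 1, [ 1 ] ]
```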
Index Use When possible, distinct (page 309) operations can use indexes.
Indexes can also cover distinct (page 309) operations. See covered-queries for more information on queries
covered by indexes.
Examples The examples use the inventory collection that contains the following documents:
{ "_id": 1, "dept": "A", "item": { "sku": "111", "color": "red" }, "sizes": [ "S", "M" ] }
{ "_id": 2, "dept": "A", "item": { "sku": "111", "color": "blue" }, "sizes": [ "M", "L" ] }
{ "_id": 3, "dept": "B", "item": { "sku": "222", "color": "blue" }, "sizes": "S" }
{ "_id": 4, "dept": "A", "item": { "sku": "333", "color": "black" }, "sizes": [ "S" ] }
Return Distinct Values for a Field The following example returns the distinct values for the field dept from all
documents in the inventory collection:
db.runCommand ( { distinct: "inventory", key: "dept" } )
The command returns a document with a field named values that contains the distinct dept values:
{
"values" : [ "A", "B" ],
"stats" : { ... },
"ok" : 1
}
Return Distinct Values for an Embedded Field The following example returns the distinct values for the field
sku, embedded in the item field, from all documents in the inventory collection:
db.runCommand ( { distinct: "inventory", key: "item.sku" } )
The command returns a document with a field named values that contains the distinct sku values:
{
"values" : [ "111", "222", "333" ],
"stats" : { ... },
"ok" : 1
}
See also:
document-dot-notation for information on accessing fields within embedded documents
Return Distinct Values for an Array Field The following example returns the distinct values for the field sizes
from all documents in the inventory collection:
db.runCommand ( { distinct: "inventory", key: "sizes" } )
The command returns a document with a field named values that contains the distinct sizes values:
{
"values" : [ "M", "S", "L" ],
"stats" : { ... },
"ok" : 1
}
For information on distinct (page 309) and array fields, see the Behavior (page 310) section.
Specify Query with distinct The following example returns the distinct values for the field sku, embedded in
the item field, from the documents whose dept is equal to "A":
db.runCommand ( { distinct: "inventory", key: "item.sku", query: { dept: "A"} } )
The command returns a document with a field named values that contains the distinct sku values:
{
"values" : [ "111", "333" ],
"stats" : { ... },
"ok" : 1
}
Override Default Read Concern To override the default read concern level of "local", use the readConcern option. The following operation on a replica set specifies a read concern of "majority" to return the distinct values of the rating field from the documents in the restaurants collection whose cuisine is "italian":
db.runCommand(
{
distinct: "restaurants",
key: "rating",
query: { cuisine: "italian" },
readConcern: { level: "majority" }
}
)
group
Definition
group
Groups documents in a collection by the specified key and performs simple aggregation functions, such as
computing counts and sums. The command is analogous to a SELECT <...> GROUP BY statement in SQL.
The command returns a document with the grouped records as well as the command meta-data.
The group (page 312) command takes the following prototype form:
{
  group:
    {
      ns: <namespace>,
      key: <key>,
      $reduce: <reduce function>,
      initial: <initial value of aggregation counter object>,
      $keyf: <key function>,
      cond: <query>,
      finalize: <finalize function>
    }
}
field function finalize Optional. A function that runs each item in the result set before group (page 312) returns the final value. This function can either modify the result document or replace
the result document as a whole. Unlike the $keyf and $reduce fields that also specify a
function, this field name is finalize, not $finalize.
For the shell, MongoDB provides a wrapper method db.collection.group() (page 75). However, the
db.collection.group() (page 75) method takes the keyf field and the reduce field whereas the
group (page 312) command takes the $keyf field and the $reduce field.
Behavior
Limits and Restrictions The group (page 312) command does not work with sharded clusters. Use the aggregation framework or map-reduce in sharded environments.
The result set must fit within the maximum BSON document size (page 932).
Additionally, in version 2.2, the returned array can contain at most 20,000 elements; i.e. at most 20,000 unique
groupings. For group by operations that result in more than 20,000 unique groupings, use mapReduce (page 316).
Previous versions had a limit of 10,000 elements.
Prior to 2.4, the group (page 312) command took the mongod (page 762) instance's JavaScript lock, which blocked
all other JavaScript execution.
mongo Shell JavaScript Functions/Properties Changed in version 2.4.
In MongoDB 2.4, map-reduce operations (page 316), the group (page 312) command, and $where
(page 550) operator expressions cannot access certain global functions or properties, such as db, that are available in
the mongo (page 794) shell.
When upgrading to MongoDB 2.4, you will need to refactor your code if your map-reduce operations
(page 316), group (page 312) commands, or $where (page 550) operator expressions include any global shell
functions or properties that are no longer available, such as db.
The following JavaScript functions and properties are available to map-reduce operations (page 316), the
group (page 312) command, and $where (page 550) operator expressions in MongoDB 2.4:
Available Properties: args, MaxKey, MinKey

Available Functions: assert(), BinData(), DBPointer(), DBRef(), doassert(), emit(), gc(), HexData(), hex_md5(), isNumber(), isObject(), ISODate(), isString(), Map(), MD5(), NumberInt(), NumberLong(), ObjectId(), print(), printjson(), printjsononeline(), sleep(), Timestamp(), tojson(), tojsononeline(), tojsonObject(), UUID(), version()
JavaScript in MongoDB
Although group (page 312) uses JavaScript, most interactions with MongoDB do not use JavaScript but use an
idiomatic driver in the language of the interacting application.
Examples The following are examples of the group (page 312) command. The examples
assume an orders collection with documents of the following prototype:
{
_id: ObjectId("5085a95c8fada716c89d0021"),
ord_dt: ISODate("2012-07-01T04:00:00Z"),
ship_dt: ISODate("2012-07-02T04:00:00Z"),
item:
{
sku: "abc123",
price: 1.99,
uom: "pcs",
qty: 25
}
}
Group by Two Fields The following example groups by the ord_dt and item.sku fields those documents that
have ord_dt greater than 01/01/2012:
db.runCommand(
{
group:
{
ns: 'orders',
key: { ord_dt: 1, 'item.sku': 1 },
cond: { ord_dt: { $gt: new Date( '01/01/2012' ) } },
$reduce: function ( curr, result ) { },
initial: { }
}
}
)
The result is a document that contains the retval field which contains the group by records, the count field which
contains the total number of documents grouped, the keys field which contains the number of unique groupings (i.e.
the number of elements in the retval), and the ok field which contains the command status:
{ "retval" :
    [
      { "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "abc123" },
      { "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "abc456" },
      { "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "bcd123" },
      { "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "efg456" },
      { "ord_dt" : ISODate("2012-06-01T04:00:00Z"), "item.sku" : "abc123" },
      { "ord_dt" : ISODate("2012-06-01T04:00:00Z"), "item.sku" : "efg456" },
      { "ord_dt" : ISODate("2012-06-01T04:00:00Z"), "item.sku" : "ijk123" },
      { "ord_dt" : ISODate("2012-05-01T04:00:00Z"), "item.sku" : "abc123" },
      { "ord_dt" : ISODate("2012-05-01T04:00:00Z"), "item.sku" : "abc456" },
      { "ord_dt" : ISODate("2012-06-08T04:00:00Z"), "item.sku" : "abc123" },
      { "ord_dt" : ISODate("2012-06-08T04:00:00Z"), "item.sku" : "abc456" }
    ],
  "count" : 13,
  "keys" : 11,
  "ok" : 1 }
Calculate the Sum The following example groups by the ord_dt and item.sku fields those documents that
have ord_dt greater than 01/01/2012 and calculates the sum of the qty field for each grouping:
db.runCommand(
{ group:
{
ns: 'orders',
key: { ord_dt: 1, 'item.sku': 1 },
cond: { ord_dt: { $gt: new Date( '01/01/2012' ) } },
$reduce: function ( curr, result ) {
result.total += curr.item.qty;
},
initial: { total : 0 }
}
}
)
The retval field of the returned document is an array of documents that contain the group by fields and the calculated
aggregation field:
{ "retval" :
    [
      { "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "abc123", "total" : 25 },
      { "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "abc456", "total" : 25 },
      { "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "bcd123", "total" : 10 },
      { "ord_dt" : ISODate("2012-07-01T04:00:00Z"), "item.sku" : "efg456", "total" : 10 },
      { "ord_dt" : ISODate("2012-06-01T04:00:00Z"), "item.sku" : "abc123", "total" : 25 },
      { "ord_dt" : ISODate("2012-06-01T04:00:00Z"), "item.sku" : "efg456", "total" : 15 },
      { "ord_dt" : ISODate("2012-06-01T04:00:00Z"), "item.sku" : "ijk123", "total" : 20 },
      { "ord_dt" : ISODate("2012-05-01T04:00:00Z"), "item.sku" : "abc123", "total" : 45 },
      { "ord_dt" : ISODate("2012-05-01T04:00:00Z"), "item.sku" : "abc456", "total" : 25 },
      { "ord_dt" : ISODate("2012-06-08T04:00:00Z"), "item.sku" : "abc123", "total" : 25 },
      { "ord_dt" : ISODate("2012-06-08T04:00:00Z"), "item.sku" : "abc456", "total" : 25 }
    ],
  "count" : 13,
  "keys" : 11,
  "ok" : 1 }
Calculate Sum, Count, and Average The following example groups by the calculated day_of_week field, those
documents that have ord_dt greater than 01/01/2012 and calculates the sum, count, and average of the qty field
for each grouping:
db.runCommand(
   {
     group:
       {
         ns: 'orders',
         $keyf: function(doc) {
             return { day_of_week: doc.ord_dt.getDay() };
         },
         cond: { ord_dt: { $gt: new Date( '01/01/2012' ) } },
         $reduce: function( curr, result ) {
             result.total += curr.item.qty;
             result.count++;
         },
         initial: { total : 0, count: 0 },
         finalize: function(result) {
             var weekdays = [
               "Sunday", "Monday", "Tuesday",
               "Wednesday", "Thursday",
               "Friday", "Saturday"
             ];
             result.day_of_week = weekdays[result.day_of_week];
             result.avg = Math.round(result.total / result.count);
         }
       }
   }
)
The retval field of the returned document is an array of documents that contain the group by fields and the calculated
aggregation field:
{
"retval" :
[
{ "day_of_week" : "Sunday", "total" : 70, "count" : 4, "avg" : 18 },
{ "day_of_week" : "Friday", "total" : 110, "count" : 6, "avg" : 18 },
{ "day_of_week" : "Tuesday", "total" : 70, "count" : 3, "avg" : 23 }
],
"count" : 13,
"keys" : 3,
"ok" : 1
}
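The interplay of $keyf, $reduce, initial, and finalize above can be modeled in plain JavaScript. The following is a simplified in-memory sketch (not the server implementation); cond is modeled as a predicate function rather than a query document, and the groupCommand helper and sample data are hypothetical:

```javascript
// Simplified model of the group command: compute a key per document with
// $keyf, fold matching documents into a per-key accumulator seeded from
// `initial`, then run `finalize` on each accumulator.
function groupCommand(docs, spec) {
  var groups = {};
  var matched = docs.filter(function (doc) { return spec.cond(doc); });
  matched.forEach(function (doc) {
    var key = JSON.stringify(spec.$keyf(doc));
    if (!groups[key]) {
      // Accumulator starts as the key fields plus a deep copy of `initial`.
      groups[key] = Object.assign(
        JSON.parse(key),
        JSON.parse(JSON.stringify(spec.initial))
      );
    }
    spec.$reduce(doc, groups[key]);
  });
  var retval = Object.keys(groups).map(function (k) {
    if (spec.finalize) spec.finalize(groups[k]);
    return groups[k];
  });
  return { retval: retval, count: matched.length, keys: retval.length, ok: 1 };
}

// Hypothetical two-document data set, grouped under a single key.
var result = groupCommand(
  [ { item: { qty: 25 } }, { item: { qty: 15 } } ],
  {
    cond: function () { return true; },
    $keyf: function (doc) { return { k: 1 }; },
    $reduce: function (curr, result) { result.total += curr.item.qty; result.count++; },
    initial: { total: 0, count: 0 },
    finalize: function (result) { result.avg = Math.round(result.total / result.count); }
  }
);
// result.retval → [ { k: 1, total: 40, count: 2, avg: 20 } ]
```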
See also:
https://docs.mongodb.org/manual/core/aggregation-pipeline
mapReduce
The mapReduce (page 316) command allows you to run map-reduce aggregation operations over a collection.
The mapReduce (page 316) command has the following prototype form:
db.runCommand(
{
mapReduce: <collection>,
map: <function>,
reduce: <function>,
finalize: <function>,
out: <output>,
query: <document>,
sort: <document>,
limit: <number>,
scope: <document>,
jsMode: <boolean>,
verbose: <boolean>,
bypassDocumentValidation: <boolean>
}
)
Pass the name of the collection to the mapReduce command (i.e. <collection>) to use as the source
documents to perform the map-reduce operation. The command also accepts the following parameters:
field collection mapReduce The name of the collection on which you want to perform map-reduce.
This collection will be filtered using query before being processed by the map function.
field function map A JavaScript function that associates or maps a value with a key and emits
the key and value pair.
See Requirements for the map Function (page 319) for more information.
field function reduce A JavaScript function that reduces to a single object all the values associated with a particular key.
See Requirements for the reduce Function (page 320) for more information.
field string or document out Specifies where to output the result of the map-reduce operation. You
can either output to a collection or return the result inline. On a primary member of a replica set
you can output either to a collection or inline, but on a secondary, only inline output is possible.
See out Options (page 320) for more information.
field document query Optional. Specifies the selection criteria using query operators (page 519)
for determining the documents input to the map function.
field document sort Optional. Sorts the input documents. This option is useful for optimization.
For example, specify the sort key to be the same as the emit key so that there are fewer reduce
operations. The sort key must be in an existing index for this collection.
field number limit Optional. Specifies a maximum number of documents for the input into the map
function.
field function finalize Optional. Follows the reduce method and modifies the output.
See Requirements for the finalize Function (page 320) for more information.
field document scope Optional. Specifies global variables that are accessible in the map, reduce
and finalize functions.
field boolean jsMode Optional. Specifies whether to convert intermediate data into BSON format
between the execution of the map and reduce functions. Defaults to false.
If false:
Internally, MongoDB converts the JavaScript objects emitted by the map function to BSON
objects. These BSON objects are then converted back to JavaScript objects when calling the
reduce function.
The map-reduce operation places the intermediate BSON objects in temporary, on-disk storage. This allows the map-reduce operation to execute over arbitrarily large data sets.
If true:
Internally, the JavaScript objects emitted during map function remain as JavaScript objects.
There is no need to convert the objects for the reduce function, which can result in faster
execution.
You can only use jsMode for result sets with fewer than 500,000 distinct key arguments
to the mapper's emit() function.
The jsMode defaults to false.
field boolean verbose Optional. Specifies whether to include the timing information in the result
information. The verbose defaults to true to include the timing information.
field boolean bypassDocumentValidation Optional. Enables mapReduce (page 316) to bypass
document validation during the operation. This lets you insert documents that do not meet the
validation requirements.
New in version 3.2.
The following is a prototype usage of the mapReduce (page 316) command:
var mapFunction = function() { ... };
var reduceFunction = function(key, values) { ... };
db.runCommand(
{
mapReduce: <input-collection>,
map: mapFunction,
reduce: reduceFunction,
out: { merge: <output-collection> },
query: <query>
}
)
JavaScript in MongoDB
Although mapReduce (page 316) uses JavaScript, most interactions with MongoDB do not use JavaScript but
use an idiomatic driver in the language of the interacting application.
Note: Changed in version 2.4.
In MongoDB 2.4, map-reduce operations (page 316), the group (page 312) command, and $where
(page 550) operator expressions cannot access certain global functions or properties, such as db, that are available in
the mongo (page 794) shell.
When upgrading to MongoDB 2.4, you will need to refactor your code if your map-reduce operations
(page 316), group (page 312) commands, or $where (page 550) operator expressions include any global shell
functions or properties that are no longer available, such as db.
The following JavaScript functions and properties are available to map-reduce operations (page 316), the
group (page 312) command, and $where (page 550) operator expressions in MongoDB 2.4:
Available Properties: args, MaxKey, MinKey

Available Functions: assert(), BinData(), DBPointer(), DBRef(), doassert(), emit(), gc(), HexData(), hex_md5(), isNumber(), isObject(), ISODate(), isString(), Map(), MD5(), NumberInt(), NumberLong(), ObjectId(), print(), printjson(), printjsononeline(), sleep(), Timestamp(), tojson(), tojsononeline(), tojsonObject(), UUID(), version()
Requirements for the map Function The map function is responsible for transforming each input document into
zero or more documents. It can access the variables defined in the scope parameter, and has the following prototype:
function() {
...
emit(key, value);
}
The following map function may call emit(key,value) multiple times depending on the number of elements in
the input document's items field:
function() {
this.items.forEach(function(item){ emit(item.sku, 1); });
}
Requirements for the reduce Function The reduce function has the following prototype:
function(key, values) {
...
return result;
}
Because it is possible to invoke the reduce function more than once for the same key, the reduce function must be idempotent. Ensure that the following statement is true:
reduce( key, [ reduce(key, valuesArray) ] ) == reduce( key, valuesArray )
The reduce function should also be commutative: that is, the order of the elements in the valuesArray should
not affect the output of the reduce function, so that the following statement is true:
reduce( key, [ A, B ] ) == reduce( key, [ B, A ] )
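These two properties can be checked directly for a candidate reduce function. The following plain-JavaScript sanity check uses a summing reducer as an illustrative example:

```javascript
// A summing reducer: idempotence and commutativity hold because addition
// has both properties and the reducer returns the same type it consumes.
var reduceSum = function (key, valuesArray) {
  return valuesArray.reduce(function (a, b) { return a + b; }, 0);
};

var key = "abc123";
var valuesArray = [ 5, 10, 20 ];

// Idempotent: re-reducing an already-reduced value changes nothing.
var once  = reduceSum(key, valuesArray);                     // 35
var twice = reduceSum(key, [ reduceSum(key, valuesArray) ]); // 35

// Commutative: element order does not affect the result.
var forward  = reduceSum(key, [ 5, 10 ]);  // 15
var backward = reduceSum(key, [ 10, 5 ]);  // 15
```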
Requirements for the finalize Function The finalize function has the following prototype:
function(key, reducedValue) {
...
return modifiedObject;
}
The finalize function receives as its arguments a key value and the reducedValue from the reduce function.
Be aware that:
The finalize function should not access the database for any reason.
The finalize function should be pure, or have no impact outside of the function (i.e. no side effects).
The finalize function can access the variables defined in the scope parameter.
out Options You can specify the following options for the out parameter:
Output to a Collection This option outputs to a new collection, and is not available on secondary members of replica
sets.
out: <collectionName>
Output to a Collection with an Action This option is only available when passing a collection that already exists
to out. It is not available on secondary members of replica sets.
out: { <action>: <collectionName>
[, db: <dbName>]
[, sharded: <boolean> ]
[, nonAtomic: <boolean> ] }
When you output to a collection with an action, the out has the following parameters:
<action>: Specify one of the following actions:
replace
Replace the contents of <collectionName> if a collection with that name already exists.
merge
Merge the new result with the existing result if the output collection already exists. If an existing document
has the same key as the new result, overwrite that existing document.
reduce
Merge the new result with the existing result if the output collection already exists. If an existing document
has the same key as the new result, apply the reduce function to both the new and the existing documents
and overwrite the existing document with the result.
db:
Optional. The name of the database to which you want the map-reduce operation to write its output. By default this
is the same database as the input collection.
sharded:
Optional. If true and you have enabled sharding on the output database, the map-reduce operation will shard the
output collection using the _id field as the shard key.
nonAtomic:
New in version 2.2.
Optional. Specify output operation as non-atomic. This applies only to the merge and reduce output modes,
which may take minutes to execute.
By default nonAtomic is false, and the map-reduce operation locks the database during post-processing.
If nonAtomic is true, the post-processing step prevents MongoDB from locking the database: during this
time, other clients will be able to read intermediate states of the output collection.
Output Inline Perform the map-reduce operation in memory and return the result. This option is the only available
option for out on secondary members of replica sets.
out: { inline: 1 }
The result must fit within the maximum size of a BSON document (page 932).
Map-Reduce Examples In the mongo (page 794) shell, the db.collection.mapReduce() (page 89)
method is a wrapper around the mapReduce (page 316) command.
The following examples use the
db.collection.mapReduce() (page 89) method:
Consider the following map-reduce operations on a collection orders that contains documents of the following
prototype:
{
_id: ObjectId("50a8240b927d5d8b5891743c"),
cust_id: "abc123",
ord_date: new Date("Oct 04, 2012"),
status: 'A',
price: 25,
items: [ { sku: "mmm", qty: 5, price: 2.5 },
{ sku: "nnn", qty: 5, price: 2.5 } ]
}
Return the Total Price Per Customer Perform the map-reduce operation on the orders collection to group by
the cust_id, and calculate the sum of the price for each cust_id:
1. Define the map function to process each input document:
In the function, this refers to the document that the map-reduce operation is processing.
The function maps the price to the cust_id for each document and emits the cust_id and price
pair.
var mapFunction1 = function() {
emit(this.cust_id, this.price);
};
2. Define the corresponding reduce function with two arguments keyCustId and valuesPrices:
The valuesPrices is an array whose elements are the price values emitted by the map function and
grouped by keyCustId.
The function reduces the valuesPrice array to the sum of its elements.
var reduceFunction1 = function(keyCustId, valuesPrices) {
return Array.sum(valuesPrices);
};
3. Perform the map-reduce on all documents in the orders collection using the mapFunction1 map function
and the reduceFunction1 reduce function.
db.orders.mapReduce(
mapFunction1,
reduceFunction1,
{ out: "map_reduce_example" }
)
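The flow of these two functions can be traced with a minimal in-memory model of the map and reduce phases. This is an illustrative sketch with made-up data, not the mongod execution path; Array.sum is a mongo shell helper, so the sketch substitutes a plain reduce. (The real engine also skips the reduce call for keys with a single value; this sketch calls it regardless, which is harmless for a summing reducer.)

```javascript
// Bucket emitted (key, value) pairs; the map function calls emit() as a
// free variable, as it does in mongod.
var buckets = {};
function emit(key, value) {
  (buckets[key] = buckets[key] || []).push(value);
}

var mapFunction1 = function () { emit(this.cust_id, this.price); };
var reduceFunction1 = function (keyCustId, valuesPrices) {
  return valuesPrices.reduce(function (a, b) { return a + b; }, 0);
};

// Hypothetical sample documents standing in for the orders collection.
var orders = [
  { cust_id: "abc123", price: 25 },
  { cust_id: "abc123", price: 30 },
  { cust_id: "xyz456", price: 10 }
];

// Map phase: invoke the map function with `this` bound to each document.
orders.forEach(function (doc) { mapFunction1.call(doc); });

// Reduce phase: one reduce call per key over its bucketed values.
var results = Object.keys(buckets).map(function (key) {
  return { _id: key, value: reduceFunction1(key, buckets[key]) };
});
// results → [ { _id: "abc123", value: 55 }, { _id: "xyz456", value: 10 } ]
```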
Calculate Order and Total Quantity with Average Quantity Per Item In this example, you will perform a map-reduce
operation on the orders collection for all documents that have an ord_date value greater than 01/01/2012.
The operation groups by the item.sku field, and calculates the number of orders and the total
quantity ordered for each sku. The operation concludes by calculating the average quantity per order for each sku
value:
1. Define the map function to process each input document:
In the function, this refers to the document that the map-reduce operation is processing.
For each item, the function associates the sku with a new object value that contains the count of 1
and the item qty for the order and emits the sku and value pair.
var mapFunction2 = function() {
for (var idx = 0; idx < this.items.length; idx++) {
var key = this.items[idx].sku;
var value = {
count: 1,
qty: this.items[idx].qty
};
emit(key, value);
}
};
2. Define the corresponding reduce function with two arguments keySKU and countObjVals:
countObjVals is an array whose elements are the objects mapped to the grouped keySKU values
passed by the map function to the reducer function.
The function reduces the countObjVals array to a single object reducedVal that contains the
count and the qty fields.
In reducedVal, the count field contains the sum of the count fields from the individual array elements, and the qty field contains the sum of the qty fields from the individual array elements.
var reduceFunction2 = function(keySKU, countObjVals) {
reducedVal = { count: 0, qty: 0 };
for (var idx = 0; idx < countObjVals.length; idx++) {
reducedVal.count += countObjVals[idx].count;
reducedVal.qty += countObjVals[idx].qty;
}
return reducedVal;
};
3. Define a finalize function with two arguments key and reducedVal. The function modifies the
reducedVal object to add a computed field named avg and returns the modified object:
var finalizeFunction2 = function (key, reducedVal) {
reducedVal.avg = reducedVal.qty/reducedVal.count;
return reducedVal;
};
4. Perform the map-reduce operation on the orders collection using the mapFunction2,
reduceFunction2, and finalizeFunction2 functions:
db.orders.mapReduce( mapFunction2,
                     reduceFunction2,
                     {
                       out: { merge: "map_reduce_example" },
                       query: { ord_date:
                                  { $gt: new Date('01/01/2012') }
                              },
                       finalize: finalizeFunction2
                     }
                   )
This operation uses the query field to select only those documents with ord_date greater than new
Date('01/01/2012'). Then it outputs the results to the collection map_reduce_example. If the
map_reduce_example collection already exists, the operation will merge the existing contents with the
results of this map-reduce operation.
For more information and examples, see the Map-Reduce page and https://docs.mongodb.org/manual/tutorial/perf
Output The mapReduce (page 316) command adds support for the bypassDocumentValidation option,
which lets you bypass document validation (page 977) when inserting or updating documents in a collection with
validation rules.
If you set the out (page 320) parameter to write the results to a collection, the mapReduce (page 316) command
returns a document in the following form:
{
"result" : <string or document>,
"timeMillis" : <int>,
"counts" : {
"input" : <int>,
"emit" : <int>,
"reduce" : <int>,
"output" : <int>
},
"ok" : <int>
}
If you set the out (page 320) parameter to output the results inline, the mapReduce (page 316) command returns a
document in the following form:
{
"results" : [
{
"_id" : <key>,
"value" :<reduced or finalizedValue for key>
},
...
],
"timeMillis" : <int>,
"counts" : {
"input" : <int>,
"emit" : <int>,
"reduce" : <int>,
"output" : <int>
},
"ok" : <int>
}
mapReduce.result
For output sent to a collection, this value is either:
a string for the collection name if out (page 320) did not specify the database name, or
a document with both db and collection fields if out (page 320) specified both a database and collection name.
mapReduce.results
For output written inline, an array of resulting documents. Each resulting document contains two fields:
_id field contains the key value,
value field contains th