An interface providing features to work with files or streams containing multiple small JSON documents. Given an input such as
{"text":"a"}
{"text":"b"}
{"text":"c"}
...
you want to read the entries (individual JSON documents) as quickly and as conveniently as possible. Importantly, the input might span several gigabytes, but you want to use a small (fixed) amount of memory. Ideally, you'd also like to parallelize the processing (using more than one core) to speed it up.
- Motivations
- Performance
- How it works
- Support
- API
- Streaming directly from a memory-mapped file
- Use cases
- Tracking your position
- Incomplete streams
The main motivation for this piece of software is to achieve maximum speed and offer a better quality of life in parsing files containing multiple small JSON documents.
The JavaScript Object Notation (JSON) RFC 7159 is a handy serialization format. However, when serializing a large sequence of values as an array, or a possibly indeterminate-length or never-ending sequence of values, JSON may be inconvenient.
Consider a sequence of one million values, each possibly one kilobyte when encoded -- roughly one gigabyte. It is often desirable to process such a dataset incrementally without having to first read all of it before beginning to produce results.
The following chart compares the speed of the different alternatives for parsing multiline JSON.
The simdjson library provides a threaded and non-threaded parse_many() implementation. As the
figure below shows, if you can, use threads, but if you cannot, the unthreaded mode is still fast!

The parsing in simdjson is divided into 2 stages. First, in stage 1, we parse the document and find
all the structural indexes ({, }, ], [, ,, ", ...) and validate UTF8. Then, in stage 2,
we go through the document again and build the tape using structural indexes found during stage 1.
Although stage 1 finds the structural indexes, it has no knowledge of the structure of the document
nor does it know whether it parsed a valid document, multiple documents, or even if the document is
complete.
Prior to parse_many, most people who had to parse a multiline JSON file would proceed by reading the
file line by line, using a utility function like std::getline or equivalent, and would then call
parse on each of those lines. From a performance point of view, this process is highly
inefficient: it requires a lot of unnecessary memory allocation, and the getline function is
fundamentally slow, slower than the act of parsing with simdjson
(more on this here).
Unlike the popular parser RapidJSON, our DOM does not require the buffer once the parsing job is completed: the DOM and the buffer are completely independent. The drawback of this architecture is that we need to allocate some additional memory to store our ParsedJson data for every document inside a given file. Memory allocation can be slow and become a bottleneck, so we want to minimize it as much as possible.
To achieve a minimum amount of allocations, we opted for a design where we create only one parser object, allocate its memory once, and recycle it for every document in a given file. But, knowing that documents often vary widely in size, we need to make sure that we allocate enough memory so that all the documents can fit. This value is what we call the batch size. As of right now, we need to manually specify a value for this batch size: it has to be at least as big as the biggest document in your file, but not so big that it overwhelms the CPU cache. The bigger the batch size, the fewer allocations we need to make. We found that 1MB is somewhat of a sweet spot.
- When the user calls parse_many, we return a document_stream which the user can iterate over to receive parsed documents.
- We call stage 1 on the first batch_size bytes of JSON in the buffer, detecting structural indexes for all documents in that batch.
- We call stage 2 on the indexes, reading tokens until we reach the end of a valid document (i.e., a single array, object, string, boolean, number or null).
- Each time the user calls ++ to read the next document, we call stage 2 to parse the next document where we left off.
- When we reach the end of the batch, we call stage 1 on the next batch, starting from the end of the last document, and go to step 3.
But how can we make use of threads if they are available? We found a pretty cool algorithm that allows us to quickly identify the position of the last JSON document in a given batch. Knowing exactly where the end of the batch is, we no longer need stage 2 to finish before loading a new batch: we already know where to start the next one. Therefore, we can run stage 1 on the next batch concurrently while the main thread is going through stage 2. Running stage 1 in a different thread can, in the best cases, almost entirely remove its cost, replacing it with the overhead of a thread, which is orders of magnitude cheaper. Ain't that awesome!
Thread support is only active if thread support is detected at build time, in which case the macro
SIMDJSON_THREADS_ENABLED is set. You can also manually pass the SIMDJSON_THREADS_ENABLED=1 flag
to the library. Otherwise, the library runs in single-threaded mode.
You should be consistent. If you link against the simdjson library built for multithreading
(i.e., with SIMDJSON_THREADS_ENABLED), then you should build your application with multithreading
support as well (setting SIMDJSON_THREADS_ENABLED=1 and linking against a thread library).
A document_stream instance uses at most two threads: there is a main thread and a worker thread.
You should expect the main thread to be fully occupied while the worker thread is partially busy
(e.g., 80% of the time).
Since we want to offer flexibility and not restrict ourselves to a specific file format, we support any file that contains any number of valid JSON documents, separated by one or more characters that the JSON spec considers whitespace. Anything that is not whitespace will be parsed as a JSON document and could lead to failure.
Whitespace Characters:
- Space
- Linefeed
- Carriage return
- Horizontal tab
- Nothing
Some official formats (non-exhaustive list):
- Newline-Delimited JSON (NDJSON)
- JSON lines (JSONL)
- Record separator-delimited JSON (RFC 7464)
- More on Wikipedia...
See basics.md for an overview of the API.
From jsonlines.org:
- Better than CSV
["Name", "Session", "Score", "Completed"]
["Gilbert", "2013", 24, true]
["Alexa", "2013", 29, true]
["May", "2012B", 14, false]
["Deloise", "2012A", 19, true]
CSV seems so easy that many programmers have written code to generate it themselves, and almost every implementation is different. Handling broken CSV files is a common and frustrating task. CSV has no standard encoding, no standard column separator and multiple character escaping standards. String is the only type supported for cell values, so some programs attempt to guess the correct types.
JSON Lines handles tabular data cleanly and without ambiguity. Cells may use the standard JSON types.
The biggest missing piece is an import/export filter for popular spreadsheet programs so that non-programmers can use this format.
- Easy Nested Data
{"name": "Gilbert", "wins": [["straight", "7♣"], ["one pair", "10♥"]]}
{"name": "Alexa", "wins": [["two pair", "4♠"], ["two pair", "9♠"]]}
{"name": "May", "wins": []}
{"name": "Deloise", "wins": [["three of a kind", "5♣"]]}
JSON Lines' biggest strength is in handling lots of similar nested data structures. One .jsonl file is easier to work with than a directory full of XML files.
Some users would like to know where the document they parsed is located in the input array of bytes.
You can do so by accessing the iterator directly and calling its current_index()
method, which reports the location (in bytes) of the current document in the input stream.
You may also call the source() method to get a std::string_view instance on the document.
Let us illustrate the idea with code:
auto json = R"([1,2,3] {"1":1,"2":3,"4":4} [1,2,3] )"_padded;
simdjson::dom::parser parser;
simdjson::dom::document_stream stream;
auto error = parser.parse_many(json).get(stream);
if (error) { /* do something */ }
auto i = stream.begin();
size_t count{0};
for(; i != stream.end(); ++i) {
auto doc = *i;
if (!doc.error()) {
std::cout << "got full document at " << i.current_index() << std::endl;
std::cout << i.source() << std::endl;
count++;
} else {
std::cout << "got broken document at " << i.current_index() << std::endl;
return false;
}
}
This code will print:
got full document at 0
[1,2,3]
got full document at 9
{"1":1,"2":3,"4":4}
got full document at 29
[1,2,3]
When your input is a large NDJSON / JSON-lines file on disk, the most
efficient way to feed parse_many is to use simdjson::padded_memory_map.
It returns a padded_string_view with the right amount of trailing padding,
so you can pass it directly to parse_many without copying the file content
into your own buffer first.
padded_memory_map is available on POSIX systems (Linux, macOS, BSD, ...) by
default. On Windows it is an opt-in feature with the following
requirements:
- Build simdjson with -DSIMDJSON_ENABLE_MEMORY_FILE_MAPPING_ON_WINDOWS=ON (or, if you consume simdjson as a pre-built library, define SIMDJSON_ENABLE_MEMORY_FILE_MAPPING_ON_WINDOWS=1), raise NTDDI_VERSION to at least NTDDI_WIN10_RS4 (0x0A000005, Windows 10 version 1803), and add onecore.lib to your link line yourself. The Windows implementation uses the modern memory APIs CreateFileMapping2/MapViewOfFile3, which are available starting with that version of Windows and are exported by onecore.lib.
- #include <windows.h> before #include "simdjson.h" in every translation unit where you want to use padded_memory_map. simdjson deliberately does not pull in <windows.h> itself, so the class is only declared when the Win32 types are already visible.
If either requirement is not met on Windows, the padded_memory_map class is
not declared at all and any code that references it fails to compile with an
"unknown identifier" error. The availability of the class can be tested with
the macro SIMDJSON_HAS_PADDED_MEMORY_MAP.
On POSIX, padded_memory_map uses mmap to map the file directly into
memory with zero copies. On Windows (when enabled), it uses
CreateFileMapping2 + MapViewOfFile3 for true zero-copy mapping
whenever the file does not end within SIMDJSON_PADDING bytes of a page
boundary; for those rare cases, it transparently falls back to reading
the file into a heap-allocated padded buffer so that the returned view
always has SIMDJSON_PADDING accessible zero bytes after the file content.
#ifdef _WIN32
#include <windows.h> // Must come BEFORE <simdjson.h> on Windows
#endif
#include "simdjson.h"
// ...
simdjson::padded_memory_map map("huge_stream.ndjson");
if (!map.is_valid()) { /* file missing, unreadable, too large, ... */ return; }
simdjson::dom::parser parser;
simdjson::dom::document_stream stream;
auto error = parser.parse_many(map.view()).get(stream);
if (error) { std::cerr << error << std::endl; return; }
for (auto doc : stream) {
// process each JSON document in the stream
std::cout << doc << std::endl;
}
Important lifetime rule: the padded_string_view returned by map.view() is
only valid while the padded_memory_map instance is alive, so keep map
alive for as long as you are iterating the stream.
The file must not be modified while the memory map is in use. If you need a
fully independent copy of the data, use simdjson::padded_string::load(...)
instead.
If you prefer single-document parsing on a memory-mapped file, the same
pattern applies to parser.parse(...):
simdjson::padded_memory_map map(myfilename);
if (!map.is_valid()) { /* handle error */ }
simdjson::padded_string_view view = map.view(); // view is usable while padded_memory_map is in scope
simdjson::dom::element doc = parser.parse(view); // parse the JSON
Some users may need to work with truncated streams. simdjson may truncate documents at the very end of the stream that cannot possibly be valid JSON (e.g., they contain unclosed strings, unmatched brackets, or unmatched braces). After iterating through the stream, you may query the truncated_bytes() method, which tells you how many bytes were truncated. If the stream is made of full (whole) documents, then you should expect truncated_bytes() to return zero.
Consider the following example where a truncated document ({"key":"intentionally unclosed string ) containing 39 bytes has been left within the stream. In such cases, the first two whole documents are parsed and returned, and the truncated_bytes() method returns 39.
auto json = R"([1,2,3] {"1":1,"2":3,"4":4} {"key":"intentionally unclosed string )"_padded;
simdjson::dom::parser parser;
simdjson::dom::document_stream stream;
auto error = parser.parse_many(json,json.size()).get(stream);
if (error) { std::cerr << error << std::endl; return; }
for(auto doc : stream) {
std::cout << doc << std::endl;
}
std::cout << stream.truncated_bytes() << " bytes" << std::endl; // prints 39 bytes
Importantly, you should only call truncated_bytes() after iterating through all of the documents, since the stream cannot tell whether there are truncated documents at the very end when it has not yet accessed that part of the data.
RFC 7464 defines a format for streaming JSON values using ASCII Record Separator (RS, 0x1E) as a delimiter. Each JSON text is preceded by RS and optionally followed by ASCII Line Feed (LF, 0x0A).
Example input:
<RS>{"name":"doc1"}<LF>
<RS>{"name":"doc2"}<LF>
<RS>{"name":"doc3"}<LF>
To parse JSON text sequences, use the stream_format::json_sequence parameter:
// Build input with RS (0x1E) and LF (0x0A) delimiters
std::string input_str;
input_str += '\x1e'; input_str += "{\"a\":1}"; input_str += '\x0a';
input_str += '\x1e'; input_str += "{\"b\":2}"; input_str += '\x0a';
input_str += '\x1e'; input_str += "{\"c\":3}"; input_str += '\x0a';
simdjson::padded_string input(input_str);
simdjson::dom::parser parser;
simdjson::dom::document_stream stream;
auto error = parser.parse_many(input, simdjson::dom::DEFAULT_BATCH_SIZE,
simdjson::stream_format::json_sequence).get(stream);
if (error) { std::cerr << error << std::endl; return; }
for (auto doc : stream) {
std::cout << doc << std::endl;
}
The stream_format enum has the following values:
- stream_format::whitespace_delimited (default): Standard NDJSON/JSON Lines format
- stream_format::json_sequence: RFC 7464 format with RS delimiters
- stream_format::comma_delimited: Comma-separated JSON documents
- stream_format::comma_delimited_array: A single JSON array whose elements are iterated as comma-delimited documents (see below)
The trailing LF after each JSON text is optional but recommended by the RFC for robustness.
Some systems produce JSON documents separated by commas, like {"a":1},{"b":2},{"c":3}. This is common when extracting elements from a JSON array or when APIs return comma-separated results.
To parse comma-separated documents, use the stream_format::comma_delimited parameter:
auto json = R"({"a":1},{"b":2},{"c":3})"_padded;
simdjson::dom::parser parser;
simdjson::dom::document_stream stream;
auto error = parser.parse_many(json, simdjson::dom::DEFAULT_BATCH_SIZE,
simdjson::stream_format::comma_delimited).get(stream);
if (error) { std::cerr << error << std::endl; return; }
for (auto doc : stream) {
std::cout << doc << std::endl;
}
// Prints: {"a":1}
// {"b":2}
// {"c":3}
Whitespace around the commas is allowed:
auto json = R"({"a":1} , {"b":2} , {"c":3})"_padded; // Also works
Nested commas inside objects and arrays are preserved:
auto json = R"({"arr":[1,2,3]},{"obj":{"x":1,"y":2}})"_padded;
// Correctly parses as 2 documents, not 6
Extra top-level separators are tolerated for compatibility with the legacy On-Demand comma-separated mode. Leading commas, trailing commas, and repeated commas are treated as empty separators rather than documents.
Unlike the legacy allow_comma_separated parameter, stream_format::comma_delimited supports multi-batch processing and threading for optimal performance on large files.
Sometimes an input is a single, well-formed JSON array — [{"a":1},{"b":2},{"c":3}] — but you want to iterate its elements one at a time without materializing the whole array. Use stream_format::comma_delimited_array:
auto json = R"([{"a":1},{"b":2},{"c":3}])"_padded;
simdjson::dom::parser parser;
simdjson::dom::document_stream stream;
auto error = parser.parse_many(json, simdjson::dom::DEFAULT_BATCH_SIZE,
simdjson::stream_format::comma_delimited_array).get(stream);
if (error) { std::cerr << error << std::endl; return; }
for (auto doc : stream) {
std::cout << doc << std::endl;
}
// Prints: {"a":1}
// {"b":2}
// {"c":3}
The parser strips the outer [ and ] plus any surrounding JSON whitespace (space, tab, LF, CR) and then behaves exactly like stream_format::comma_delimited over the remaining bytes. All comma-delimited features are inherited: multi-batch processing, threading, mixed scalar types, and nested commas preserved inside inner objects and arrays.
// All of these work:
auto a = R"([1, "x", true, null, {"k":"v"}, [1,2]])"_padded; // mixed scalars
auto b = R"( [ 1, 2, 3 ] )"_padded; // whitespace
auto c = R"([])"_padded; // empty array → 0 docs
If the input is not a well-formed outer array (missing [, missing ], or empty / all-whitespace), parse_many returns TAPE_ERROR. Content inside the array is not validated up front; individual document parse errors surface when you iterate, just like comma_delimited.
Positions reported via current_index() are relative to the stripped buffer (the bytes between [ and ]), not the original input, for consistency with the existing BOM-stripping behavior.