Processing model: Streaming API
aka Incremental Processing, or Token Streams
Of the three major processing modes that Jackson supports, Streaming Processing (also known as Incremental Processing) is the most efficient way to process JSON content. It has the lowest memory and processing overhead, and can often match the performance of many binary data formats available on the Java platform (see the "Performance Comparison" link below).
This performance comes at a cost: this is not the most convenient way to process JSON content, because:
- All content to read or write has to be processed in exactly the same order as input comes in (or output is to go out) -- for random access, you need to use Data Binding or the Tree Model (both of which actually use the Streaming API for the underlying JSON reading/writing).
- No Java objects are created unless specifically requested; and even then only very basic types are supported (Strings, byte[] for base64-encoded binary content)
As a result, the Streaming API is most commonly used by middleware and frameworks (where the performance benefits reach a wide range of applications, and competition between implementations drives performance as a measured feature), and less often by applications directly.
Parsers are objects that tokenize JSON content into a stream of tokens and associated data. This is the lowest level of read access to JSON content.
The most common way to create parsers is from external sources (Files, HTTP request streams) or buffered data (Strings, byte arrays/buffers). For this purpose, org.codehaus.jackson.JsonFactory has an extensive set of methods to construct parsers, such as:
JsonFactory jsonFactory = new JsonFactory(); // or, for data binding, org.codehaus.jackson.map.MappingJsonFactory
JsonParser jp = jsonFactory.createJsonParser(file); // or URL, InputStream, Reader, String, byte[]
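Once a parser is constructed, content is read by advancing token by token with nextToken(). A minimal sketch (class name and helper method are illustrative, not part of the Jackson API):

```java
import org.codehaus.jackson.JsonFactory;
import org.codehaus.jackson.JsonParser;
import org.codehaus.jackson.JsonToken;

public class ParseExample {
    // Reads a flat JSON object, returning "field=value" pairs separated by ';'
    static String describe(String json) throws Exception {
        JsonFactory jsonFactory = new JsonFactory();
        JsonParser jp = jsonFactory.createJsonParser(json);
        StringBuilder sb = new StringBuilder();
        jp.nextToken(); // START_OBJECT
        while (jp.nextToken() != JsonToken.END_OBJECT) {
            String field = jp.getCurrentName(); // current token is FIELD_NAME
            jp.nextToken(); // advance to the matching value token
            if (sb.length() > 0) sb.append(';');
            sb.append(field).append('=').append(jp.getText());
        }
        jp.close();
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(describe("{\"name\":\"Bob\",\"age\":13}")); // name=Bob;age=13
    }
}
```

Note that field names and values arrive as separate tokens, which is why the loop calls nextToken() twice per entry.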
Also, if you happen to have an ObjectMapper, you can call ObjectMapper.getJsonFactory() to reuse the factory it holds (since (re)using JsonFactory instances is one of the Performance Best Practices).
But you can also create parsers from alternate sources, such as already-parsed content (for example, a tree built by the Tree Model, or tokens buffered in a TokenBuffer). Reading JSON tokens from these sources is significantly more efficient than re-parsing JSON content from its textual representation.
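For instance, an already-parsed JsonNode tree can be exposed as a token stream via JsonNode.traverse(), with no round-trip through JSON text. A minimal sketch (class name and counting helper are illustrative):

```java
import org.codehaus.jackson.JsonNode;
import org.codehaus.jackson.JsonParser;
import org.codehaus.jackson.map.ObjectMapper;

public class TreeParseExample {
    // Counts tokens produced by streaming over an already-parsed tree
    static int countTokens(JsonNode node) throws Exception {
        JsonParser jp = node.traverse(); // token stream over tree; no text re-parsing
        int count = 0;
        while (jp.nextToken() != null) { // null signals end of content
            count++;
        }
        jp.close();
        return count;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode root = mapper.readTree("{\"id\":42}");
        // Tokens: START_OBJECT, FIELD_NAME, VALUE_NUMBER_INT, END_OBJECT
        System.out.println(countTokens(root)); // 4
    }
}
```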
Generators are objects used to construct JSON content based on a sequence of calls that output JSON tokens. This is the lowest level of write access to JSON content.
The most common way to create generators is to pass in an external destination (File, OutputStream or Writer) into which the resulting JSON content is written. For this purpose, org.codehaus.jackson.JsonFactory has an extensive set of methods to construct generators, such as:
JsonFactory jsonFactory = new JsonFactory(); // or, for data binding, org.codehaus.jackson.map.MappingJsonFactory
JsonGenerator jg = jsonFactory.createJsonGenerator(file, JsonEncoding.UTF8); // or OutputStream, Writer
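Output is then produced by calling write methods that mirror the token stream: start/end markers plus field/value calls. A minimal sketch writing to a StringWriter (class name and helper are illustrative):

```java
import java.io.StringWriter;
import org.codehaus.jackson.JsonFactory;
import org.codehaus.jackson.JsonGenerator;

public class GenerateExample {
    // Writes a simple JSON object, one token-producing call at a time
    static String writeUser() throws Exception {
        JsonFactory jsonFactory = new JsonFactory();
        StringWriter sw = new StringWriter();
        JsonGenerator jg = jsonFactory.createJsonGenerator(sw);
        jg.writeStartObject();                  // {
        jg.writeStringField("name", "Bob");     //   "name":"Bob"
        jg.writeNumberField("age", 13);         //   ,"age":13
        jg.writeEndObject();                    // }
        jg.close(); // flushes any buffered output
        return sw.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(writeUser()); // {"name":"Bob","age":13}
    }
}
```

The generator keeps track of nesting, so forgetting a matching writeEndObject() (or closing out of order) is reported as an error rather than producing malformed JSON silently.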
An alternative is to use org.codehaus.jackson.util.TokenBuffer (added in Jackson 1.5): since it extends JsonGenerator, you can efficiently buffer any JSON output in case it needs to be re-processed:
TokenBuffer buffer = new TokenBuffer();
// serialize object as JSON tokens (but don't serialize as JSON text!)
objectMapper.writeValue(buffer, myBean);
// read back as tree
JsonNode root = objectMapper.readTree(buffer.asParser());
// modify some more, write out
// ...
String jsonText = objectMapper.writeValueAsString(root);
- JsonParser.Feature: features that can be enabled/disabled to alter the operation of JsonParser
- JsonGenerator.Feature: features that can be enabled/disabled to alter the operation of JsonGenerator
A performance comparison between various data formats (Protobuf, Thrift, XML, JSON) that includes the stream-based Jackson codec.