"Streams in node are one of the rare occasions when doing something the fast way is actually easier. SO USE THEM. not since bash has streaming been introduced into a high level language as nicely as it is in node."
Streams emit events, following the native observer pattern of NodeJS.
At this moment there are three iterations of the Stream implementation, depending on your version of node/iojs.
Instead of using the native API (which depends on your node version), it's better to use readable-stream or through2. Both are backward compatible and work fine in browser builds (the latter is more lightweight because it just exposes a Duplex Stream).
`.pipe()` is just a function that takes a readable source stream and hooks its output to a writable destination stream (like piping UNIX commands).
`.pipe()` has other benefits too, like handling backpressure automatically so that node won't buffer chunks into memory needlessly when the remote client is on a really slow or high-latency connection.
A good library that collects stream utilities is mississippi.
You can implement a Stream using inheritance or composition.
To emit chunks of data you need to create an object that implements the `._read` method.
Readable streams emit `data` events each time they get a chunk of data; from the implementation side this is synonymous with `this.push(chunk)`. They emit `end` when there is no more data, which is synonymous with `this.push(null)`. In other words, the `end` event is triggered when the last chunk of data arrives, signifying that this is it and there is no more data after this last piece.
When you are using a Readable Stream you can use the `pause()` and `resume()` methods to control the data flow of the stream.
To consume chunks of data you need to create an object that implements the `._write` method.
Call `.end()` to close the stream; you can also pass it the last chunk of data. Provide the callback only if you want to wait, but the order of successive `.write()` calls is guaranteed either way.
The `finish` event is triggered when all the data has been processed (after `.end()` has been called and the remaining data flushed).
A duplex stream is one that is both Readable and Writable, such as a TCP socket connection.
It was implemented in the most recent node versions, but you can use through2 for compatibility.
It's a special type of stream because it interacts with the filesystem.
You can listen for the `open` event to control the file state of the stream.
They're also a special kind of stream. In particular they fire an `exit` event, which is different from `close` (`exit` fires when the process ends; `close` once its stdio streams have closed). Use the `stdio` option to set up stream communication between the child_process and wherever the output has to be written/read (by default `stdin`, `stdout` and `stderr`, which align with the UNIX standard streams).
What about Callbacks?
You can convert any stream interface into a callback. See my stream-callback library, which makes this conversion easy.
It's also possible to transform an async callback function into a stream interface; you just need to be sure to handle the stream's backpressure correctly. In my experience from2 works well for this. Check fetch-timeline or totalwind-api as examples.
Interesting libraries to use with streams:
- progress-stream, read the progress of a stream.
- throughv, stream.Transform with parallel chunk processing.
- emit-stream, turn event emitters into streams and streams into event emitters.
- pretty-stream, format a stream to make it more human readable.
- squeak, a tiny stream log.
- hyperquest, make streaming http requests.