Creates a new BlobStream wrapping the given Blob or File.
The blob's contents are not loaded into memory at construction time. Each byte range is fetched on demand and cached for subsequent access.
The blob (or File) to stream.
Resets the read/write position to the beginning of the stream.
Inserts data at byte offset `start`, optionally replacing `replace`
bytes of existing content. Sets the position to `start + data.length`.
The bytes to insert.
Byte offset at which to begin the insertion.
Number of existing bytes to replace. Defaults to 0.
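The splice described above can be sketched as a piece-table operation. The segment shape and helper names below are illustrative assumptions, not the library's actual internals:

```typescript
// Illustrative piece-table model: a segment is either a range into the
// original blob or an in-memory buffer of edited bytes.
type Segment =
  | { kind: "blob"; offset: number; length: number }
  | { kind: "buffer"; bytes: Uint8Array };

const segLen = (s: Segment): number =>
  s.kind === "blob" ? s.length : s.bytes.length;

// Take a sub-range of one segment without copying blob-backed bytes.
function sliceSeg(s: Segment, from: number, to: number): Segment {
  return s.kind === "blob"
    ? { kind: "blob", offset: s.offset + from, length: to - from }
    : { kind: "buffer", bytes: s.bytes.subarray(from, to) };
}

// Collect the segments covering the logical byte range [from, to).
function sliceRange(segs: Segment[], from: number, to: number): Segment[] {
  const out: Segment[] = [];
  let pos = 0;
  for (const s of segs) {
    const len = segLen(s);
    const lo = Math.max(from, pos);
    const hi = Math.min(to, pos + len);
    if (lo < hi) out.push(sliceSeg(s, lo - pos, hi - pos));
    pos += len;
  }
  return out;
}

// Insert `data` at `start`, replacing `replace` existing bytes, purely
// by splicing the segment list -- the original blob is never copied.
function insertAt(
  segs: Segment[],
  start: number,
  data: Uint8Array,
  replace = 0,
): Segment[] {
  const total = segs.reduce((n, s) => n + segLen(s), 0);
  return [
    ...sliceRange(segs, 0, start),
    { kind: "buffer", bytes: data },
    ...sliceRange(segs, start + replace, total),
  ];
}
```

Because only segment descriptors are manipulated, the cost of an insert is proportional to the number of segments, not the size of the underlying blob.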
Returns true — BlobStream is always open.
Returns the total logical byte length of the stream in O(1) time.
Returns the file name when the backing object is a File, otherwise "".
Reads up to length bytes from the current position, spanning segment
boundaries as needed. All uncached BlobSegments in the range are
fetched in parallel via Promise.all and their results are cached on the
segment for future reads.
Maximum number of bytes to read.
Resolves with a ByteVector containing the bytes read.
May be shorter than length if the logical end of stream is reached.
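The fetch-and-cache strategy can be sketched as follows; the `BlobSegment` shape here is an assumption for illustration, and the example relies on the standard `Blob` available in browsers and Node.js 18+:

```typescript
// Assumed segment shape: a blob.slice() range plus a memoized byte cache.
interface BlobSegment {
  slice: Blob;          // range within the source blob
  cache?: Uint8Array;   // filled on first read, reused afterwards
}

// Fetch a segment's bytes once; later reads hit the cache with no I/O.
async function fetchSegment(seg: BlobSegment): Promise<Uint8Array> {
  if (!seg.cache) {
    seg.cache = new Uint8Array(await seg.slice.arrayBuffer());
  }
  return seg.cache;
}

// A read spanning several uncached segments issues every arrayBuffer()
// request at once via Promise.all instead of awaiting them serially.
async function readSpan(segs: BlobSegment[]): Promise<Uint8Array> {
  const parts = await Promise.all(segs.map(fetchSegment));
  const total = parts.reduce((n, p) => n + p.length, 0);
  const out = new Uint8Array(total);
  let pos = 0;
  for (const p of parts) {
    out.set(p, pos);
    pos += p.length;
  }
  return out;
}
```

Parallel issue matters because each `arrayBuffer()` call is an independent async read; awaiting them one by one would serialize the latency.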
Returns false — BlobStream supports write operations.
Removes length bytes beginning at byte offset start.
Byte offset of the first byte to remove.
Number of bytes to remove.
Moves the read/write position within the stream.
Number of bytes to move.
Reference point for the seek. Defaults to Position.Beginning.
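The seek arithmetic can be sketched like this. The `Position` members beyond `Beginning` and the clamping to `[0, length]` are assumptions following common IOStream conventions, not confirmed behavior:

```typescript
// Assumed reference points; only Position.Beginning appears in the docs.
enum Position {
  Beginning,
  Current,
  End,
}

// Compute the new read/write position from an offset and an origin.
// Clamping to the stream bounds is an assumption; some stream
// implementations instead allow seeking past the end.
function nextPosition(
  current: number,
  length: number,
  offset: number,
  origin: Position,
): number {
  const base =
    origin === Position.Beginning ? 0 :
    origin === Position.Current ? current :
    length;
  return Math.min(Math.max(base + offset, 0), length);
}
```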
Returns the current read/write position in bytes from the logical start.
Assembles a new Blob from the current piece table without loading the
full content into memory. Each BlobSegment becomes a
blob.slice() reference and each BufferSegment is passed as a raw
Uint8Array. The new blob's MIME type is copied from the source blob.
A new Blob reflecting all edits made to this stream.
Truncates or zero-extends the stream to exactly length bytes. If the
current position exceeds the new length it is clamped.
The desired stream length in bytes.
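The truncate/zero-extend semantics can be illustrated on a plain byte model (the real implementation works on the segment list, but the observable result is the same):

```typescript
// Resize a byte sequence to exactly `length`: surplus bytes are
// dropped, missing bytes are zero-filled, and a position past the new
// end is clamped back to it.
function setLength(
  bytes: Uint8Array,
  position: number,
  length: number,
): { bytes: Uint8Array; position: number } {
  const out = new Uint8Array(length); // zero-filled by default
  out.set(bytes.subarray(0, Math.min(bytes.length, length)));
  return { bytes: out, position: Math.min(position, length) };
}
```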
Writes data at the current position, overwriting existing content and
extending the stream if necessary. Advances the position by
data.length.
The bytes to write.
A read/write IOStream backed by a browser/Node.js `Blob` (or `File`).

Reading

Each `BlobSegment` is fetched from the blob on first access and its bytes are cached on the segment object. Subsequent reads of the same range are served from the cache without any async I/O. When a single `readBlock` call spans multiple uncached segments, all `arrayBuffer()` requests are issued in parallel via `Promise.all`.

Writing

A piece table tracks the logical content as an ordered list of `Segment` objects. Mutations only manipulate this list; they never copy the original blob. The cached total length is kept up to date on every mutation so that `length()` is O(1).

Exporting

`toBlob` assembles a new `Blob` from `blob.slice()` references and in-memory buffers, with no full-file copy. The new blob's MIME type is copied from the source blob.