Certainly! If you want me to help make a pull request for the things I suggest below, do let me know.
Reading a file in chunks in JavaScript is done by first calling `slice()` on the `File`, producing a `Blob` (files are also `Blob`s, by the way). Then you can use `FileReader` as usual, but on the `Blob` instead, and you get the slice you wanted.
So here is our code that does the chunking:
```js
function readChunk(file, start, end, callback) {
  var reader = new FileReader();
  // Slicing a File produces a Blob covering just that byte range.
  var blob = file.slice(start, end);
  reader.onloadend = function () {
    // Error-first callback: reader.error is null on success.
    callback(reader.error, reader.result);
  };
  reader.readAsArrayBuffer(blob);
}
```
The calling code calls `readChunk` with an error-first callback that gets the array buffer, processes it, and then, if there is still data to read (by looking at the file length), calls `readChunk` again, roughly as in the sketch below.
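A minimal sketch of such a calling loop could look like this (the chunk size, `processChunk`, and `done` are placeholders for illustration, not part of the code above):

```js
var CHUNK_SIZE = 64 * 1024; // arbitrary example size

function readAll(file, processChunk, done) {
  function next(start) {
    if (start >= file.size) {
      done(null); // finished without errors
      return;
    }
    var end = Math.min(start + CHUNK_SIZE, file.size);
    readChunk(file, start, end, function (err, arrayBuffer) {
      if (err) {
        done(err);
        return;
      }
      processChunk(arrayBuffer);
      next(end); // continue with the next chunk
    });
  }
  next(0);
}
```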
Looking at the `elm/file` implementation of `File.toBytes`, it looks like it works the same way (but you would probably also want to catch errors and have a `Task` that can fail). Common errors are missing read permissions or the file having been deleted. The error is a `DOMException` with `name` and `message` fields.
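For illustration, inspecting those fields from the `readChunk` callback above could look like this (the error names are just examples of what a browser may report):

```js
readChunk(someFile, 0, 1024, function (err, buffer) {
  if (err) {
    // e.g. "NotFoundError" if the file was deleted,
    // or "NotReadableError" for permission problems.
    console.error(err.name + ": " + err.message);
    return;
  }
  // ... use buffer ...
});
```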
If this is functionality that should be included, there are two main options:
- Add a `File.sliceToBytes : { from : Int, to : Int, file : File } -> Bytes`
- Add a `File.slice : { from : Int, to : Int, file : File } -> File` (just returning `File.slice(...)` on the JavaScript side)
The second one is more general (you can slice and then call `File.toBytes`, `File.toString`, `Http.fileBody`, etc.) and, in the Http case, also more efficient, since you don't have to copy the bytes before sending them.
The drawback of the second option is that some functions on the resulting `File` would be undefined: `name`, `mime`, and `lastModified`. That has to be taken into account, either by returning empty values (or `Maybe a`) or by copying these values from the original `File` object when creating the slice.
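As a rough sketch of that copying approach on the JavaScript side (this is not the actual elm/file kernel code, just an illustration):

```js
// Hypothetical helper: slice a File but keep its metadata, so that
// name, mime and lastModified still make sense on the result.
function sliceFile(file, from, to) {
  var blob = file.slice(from, to, file.type);
  return new File([blob], file.name, {
    type: file.type,
    lastModified: file.lastModified
  });
}
```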