One more gotcha in #Laravel when working with S3.
Storage has a readStream() method, which returns a stream resource instead of trying to jam a heckin' 2GB zip into memory.
The method will always return a resource -- but out of the box, it won't be a resource that actually streams from S3.
For whatever reason, Laravel defaults the S3 disk option stream_reads to false.
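Flipping that flag is a one-line change in the disk config. Here's a sketch of the relevant entry in config/filesystems.php (the keys shown are the standard Laravel ones; your bucket and credentials will differ):

```php
// config/filesystems.php -- the 's3' disk entry
's3' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),

    // Defaults to false. When true, readStream() hands you a real
    // streaming handle to S3 instead of a buffered php://temp resource.
    'stream_reads' => true,
],
```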
The behaviour I was seeing was that, despite using readStream(), large files would give me null instead of a resource.
If I examined the resource with stream_get_meta_data(), I saw it was a TEMP resource. The PHP docs don't really explain that much.
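You can see that TEMP behaviour with plain PHP: php://temp is a stream that holds data in memory up to a size threshold (2 MB by default, tunable via maxmemory), then transparently spills to a real file on disk. A quick demo, with the threshold lowered so the spill is easy to trigger:

```php
<?php
// php://temp keeps data in memory up to a threshold, then spills to disk.
// Lower the threshold to 1 KB so writing 4 KB forces the spill.
$stream = fopen('php://temp/maxmemory:1024', 'r+');
fwrite($stream, str_repeat('a', 4096)); // 4 KB: past the threshold, now on disk

$meta = stream_get_meta_data($stream);
echo $meta['stream_type'], "\n";  // TEMP
echo $meta['wrapper_type'], "\n"; // PHP
fclose($stream);
```

The stream type stays "TEMP" either way, which is why the metadata alone doesn't tell you whether you're in memory or on disk.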
For a while, I assumed it was fine.
Then I ran it locally with a large file and my queue driver set to sync, and noticed it took about 4 seconds just to read the file.
I didn't think that was right, so I dug into it a bit more and realized Laravel was setting that stream_reads setting to false.
I dug a bit more and realized exactly what that TEMP resource actually did.
It's something from Guzzle, presumably done to stop you shooting yourself in the foot: it buffers the whole body through php://temp, writing the data to disk instead of letting you go OOM.
However, I'm running all of this on AWS Lambda.
Which means my /tmp dir is 512 MB.
What I'm *trying* to do is get a stream and write it to another disk (readStream() passed into writeStream()).
What I'm *actually* doing is writing the whole file to my Lambda's disk (fail) before writing it to the intended destination.
I was getting null back because the Lambda ran out of disk space!
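For the curious, the intended pattern looks like this. A sketch only: the disk names and path are made up, and it assumes stream_reads is enabled on the s3 disk so the read really streams:

```php
use Illuminate\Support\Facades\Storage;

// Stream a large object from S3 straight onto another disk,
// without buffering the whole file in memory or in /tmp.
$path = 'exports/archive.zip'; // hypothetical path

$stream = Storage::disk('s3')->readStream($path);

// Laravel returns null when the read fails (e.g. out of disk space).
if ($stream === null) {
    throw new RuntimeException("Could not open a read stream for {$path}");
}

// writeStream() consumes the resource chunk by chunk.
Storage::disk('destination')->writeStream($path, $stream);

if (is_resource($stream)) {
    fclose($stream);
}
```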