One more gotcha when working with S3.

Storage has a readStream() method, which returns a stream resource instead of trying to jam a heckin' 2GB zip into memory.

The method is supposed to return a resource -- but out of the box, it won't be one that streams from S3.

For whatever reason, Laravel defaults the stream_reads setting to false.

The default in Flysystem's S3 adapter is true.

This threw me off a bit, since I was looking at Flysystem's S3 adapter and assumed its defaults applied.

And I was getting *a* resource, just not one that behaved in the way I assumed it would.


The behaviour I was seeing, despite using readStream(), was that for large files, I would get null instead of the resource.

If I examined the resource with stream_get_meta_data(), I saw it was a TEMP resource. The PHP docs don't explain that much.
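You don't need S3 to see one of these, by the way -- here's a minimal sketch using PHP's built-in php://temp wrapper, which buffers in memory and spills to a temp file once it grows past a threshold:

```php
<?php
// php://temp buffers small writes in memory and silently spills to a
// temp file on disk once the data passes ~2 MB.
$stream = fopen('php://temp', 'r+');
fwrite($stream, 'some data');

$meta = stream_get_meta_data($stream);
echo $meta['stream_type']; // "TEMP" -- the same thing I was seeing

fclose($stream);
```

That disk-spilling behaviour is exactly what bites later on Lambda.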

For a while, I assumed it was fine.

Then I ran it locally with a large file and my queue driver set to sync, and noticed ... it took about 4 seconds just to read the file.


I didn't think that was right, so I dug into it a bit more and realized Laravel was setting that stream_reads setting to false.

I dug a bit more and realized exactly what that TEMP resource actually did.

It's something from Guzzle, presumably done to stop you shooting your feet off: it writes the data to disk to stop you going OOM.

However, I'm running all of this on AWS Lambda.

Which means my /tmp dir is 512 MB.

What I'm *trying* to do is get a stream and write it to another disk (readStream() passed into writeStream()).
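That readStream()-into-writeStream() hand-off boils down to a chunked stream copy. A plain-PHP sketch of the same idea, with no Laravel and in-memory streams standing in for the real disks:

```php
<?php
// Simulate a source "file" and a destination with in-memory streams.
$source = fopen('php://memory', 'r+');
fwrite($source, str_repeat('x', 1024));
rewind($source);

$dest = fopen('php://memory', 'r+');

// Copies in chunks -- the full payload is never held in one string,
// which is the entire point of streaming in the first place.
$copied = stream_copy_to_stream($source, $dest);
echo $copied; // 1024

fclose($source);
fclose($dest);
```

With stream_reads off, $source here is effectively a file already written to local disk -- the copy still works, but you've paid the disk cost up front.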

What I'm *actually* doing is writing the whole file to my Lambda's disk (fail) before writing it to the intended destination.

I get the null back because it runs out of disk space!

Once I understood the problem, fixing it was easy: just add stream_reads => true to your S3 filesystem's config.
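In Laravel terms that's one line in config/filesystems.php -- the disk name and surrounding keys here are the stock ones, trimmed for brevity:

```php
// config/filesystems.php
'disks' => [
    's3' => [
        'driver' => 's3',
        'key' => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'region' => env('AWS_DEFAULT_REGION'),
        'bucket' => env('AWS_BUCKET'),
        // Without this, Laravel passes false to the Flysystem adapter
        // and reads get buffered into a TEMP stream on local disk.
        'stream_reads' => true,
    ],
],
```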

After that, calling stream_get_meta_data() reveals an HTTPS resource instead of a TEMP resource.


