A while ago I wrote an article about the common pitfalls of handling file downloads in PHP. One thing I did not realize at that time is that in most cases developers don’t have the time to write such a script and they’ll use whatever they can find, even if it has flaws.
Because of this, I decided to write a download script and release it for free under a BSD license. It’s not a class, just a script that accepts a “file” parameter via GET or POST and outputs the file. For security purposes, any path components are stripped and replaced with a path defined in the script (the folder containing the downloadable file(s) should be protected against direct access).
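To illustrate the idea, here’s a minimal sketch of that path-stripping step; the `downloads/` folder name and the function name are assumptions for the example, not the script’s actual settings:

```php
<?php
// Strip any directory components from the user-supplied value, then
// prepend our own protected folder. basename() drops everything up to
// the last separator, so "../../etc/passwd" becomes just "passwd".
function resolve_download_path(string $requested): string
{
    $base_dir = __DIR__ . '/downloads/'; // hypothetical protected folder
    return $base_dir . basename($requested);
}
```

This way the client can only name a file, never choose where it is read from.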
The script sets the correct MIME type for ZIP files; all other files are sent as an octet stream. You may customize that part depending on the types of files you host.
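A rough sketch of that MIME-type logic (the function name is mine, not the script’s):

```php
<?php
// Send ZIPs with their real MIME type; everything else goes out as a
// generic binary stream so browsers offer a download instead of
// trying to render the content.
function content_type_for(string $file_name): string
{
    $ext = strtolower(pathinfo($file_name, PATHINFO_EXTENSION));
    return $ext === 'zip' ? 'application/zip' : 'application/octet-stream';
}
```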
The download script also supports range requests, but not multiple ranges; for the vast majority of cases this is enough.
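The single-range handling can be sketched as a stand-alone function like the one below; this is illustrative, not the script’s exact code:

```php
<?php
// Parse a single "Range: bytes=start-end" header against a known file
// size. Multi-range requests ("bytes=0-99,200-299") are rejected,
// matching the script's single-range limitation. Returns [start, end]
// or null when the whole file should be served instead.
function parse_single_range(?string $header, int $file_size): ?array
{
    if ($header === null || strpos($header, ',') !== false) {
        return null; // no header, or multiple ranges
    }
    if (!preg_match('/^bytes=(\d*)-(\d*)$/', trim($header), $m)
        || ($m[1] === '' && $m[2] === '')) {
        return null; // malformed range
    }
    if ($m[1] === '') {                       // suffix range: last N bytes
        $start = $file_size - (int) $m[2];
        $end   = $file_size - 1;
    } else {
        $start = (int) $m[1];
        $end   = $m[2] === '' ? $file_size - 1 : (int) $m[2];
    }
    if ($start < 0 || $start > $end || $end >= $file_size) {
        return null; // unsatisfiable range
    }
    return [$start, $end];
}
```

A valid result would then drive the `206 Partial Content` status and the `Content-Range` header.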
The script is in active use and has handled tens of thousands of downloads from a wide variety of browsers. I tested it only on Apache 2 / PHP 5. Some hosts have really weird setups and limitations, but hopefully you won’t run into any issues.
Here’s the full script (Updated on October 31, 2012):
Very useful post, thanks!
Hi and thank you very much for sharing your insights. I just compared this with the download script that I created a year ago from various snippets around the internet. It does the job but as you stated earlier, it does contain some misuse of headers etc.
I’d like to use this code, but I’m not very PHP-keen. I can read and understand it, yet I have trouble writing my own modifications. Since you mentioned the use of the Apache module (X-Sendfile, I think), I was wondering if you can write an adaptation of the above code for using that module (since I use this on my site). I was also wondering if ranges etc. are supported with X-Sendfile.
Last but not least, I try to maintain a download counter when the file is accessed, but I’m assuming that if I use ranges, only the first range should increment the counter, and all other ranges should skip the incrementation.
I was hoping you could post an adaptation of the above code using X-Sendfile and a “do other stuff when file is requested” placeholder (for me that is a counter, but I’m sure people have countless other applications).
Many thanks in advance.
Marc, if you use the Apache Module mod_xsendfile you won’t need most of the PHP script, have a look at https://tn123.org/mod_xsendfile/, they have complete docs and a small PHP example.
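Roughly, the PHP side reduces to setting a few headers and letting Apache stream the file; Apache then handles range requests itself. The path and filename below are placeholders, and the helper function is my own illustration:

```php
<?php
// Build the headers needed to hand a download off to mod_xsendfile.
// PHP sends no body: the X-Sendfile header tells Apache which file
// to stream, and Apache takes care of ranges, resume, etc.
function xsendfile_headers(string $path, string $download_name): array
{
    return [
        'Content-Type: application/octet-stream',
        'Content-Disposition: attachment; filename="' . $download_name . '"',
        'X-Sendfile: ' . $path,
    ];
}

// Usage (inside a request handler):
// foreach (xsendfile_headers('/protected/file.zip', 'file.zip') as $h) {
//     header($h);
// }
// exit; // no echo/readfile needed – Apache streams the file
```

A counter (or any “other stuff”) would simply run before the headers are sent.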
I tried copy/paste from the page; the file downloads but is corrupted (bad image, bad ZIP, etc.).
I don’t know why you got the error; it seems fine to me… Anyone else having issues?
Armand, hi and thanks for this.
This is just a warning for users using cut and paste – not a problem with your code!
I, like William, did the copy/paste route rather than download your zip and hit a similar problem.
When I pasted into my PHP file I failed to get the PHP open and close tags as the absolute first and last items on the page – your example is correct. I had to delete some tags that my editor “helpfully” added, and I managed to leave some extraneous whitespace. My suspicion is that PHP adds this whitespace into the stream, like it would add normal HTML content.
I managed to spot this as I’m using text-based files, but for other file types this will probably be a lot more important.
The content within the quotes (“”) in my comment got stripped. I was referring to the php start and end tags – left-angle, question-mark etc. Reading other comments the “ob_clean()” approach i.e. flushing the output buffer before writing content may also be a solution to the same “self-inflicted” problem.
Interestingly this “user copy error” aligns with your original “right-way” post about copying code and not understanding what is happening (my bad!) – in this case PHP server rendering every character outside the php tags as “page content” – whitespace included.
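For anyone hitting the same problem, a minimal buffer-flush sketch before sending binary data might look like this (illustrative, not part of the original script):

```php
<?php
// Before emitting binary data, discard anything PHP has already
// buffered – stray whitespace outside the PHP tags, an editor-added
// BOM, etc. – so it doesn't get prepended to the file bytes.
while (ob_get_level() > 0) {
    ob_end_clean();
}
// ... set headers and output the file from here on ...
```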
Hello and thank you very much for these good and insightful articles!
I really do not know much about using HTTP headers in PHP, but I do agree that one should always strive to use code that is correct as well as working.
Though I have not managed to try it yet, I cannot help noticing one thing: in the conditional on line 79 of the script you use $size – but it is not declared anywhere… surely you meant $file_size, like it’s used in the rest of the script?
Thank you very much again!
You’re so right, I modified my production-ready script a little to make it more readable but I had to manually rename the variables (I need to get myself a PHP editor with refactoring support) and I missed that var. It’s fixed now.
I got corrupted image and video files after the download completed!! It downloads the full file, but corrupted!! As I’m working with downloading large video files, it’s good that it downloads the whole file, but getting it corrupted is disappointing!! By the way, nice work!!
Shaun, I haven’t tested with files over 30 MB, but in my tests the downloaded files are 1:1 identical to the originals. I’m using a very similar script to download apps; after ~7000 downloads I never had any complaint. Can you try downloading a small text file and compare it to the original? I suspect there’s a server configuration issue. I couldn’t test my script in all scenarios (in all honesty, I made it to suit my purposes only).
You have done great work and I appreciate it! 🙂
I tried with small images too… but it may be a server configuration problem; I’ll check that too!
Thanks friend!! 🙂
Try with a text file; this way you can see what happens.
I have a few improvements for you.
– Turned off error reporting for Notices — important in your calls to list().
– Turned off gzip compression, which sometimes causes browsers to abort the download.
– Added ability to specify “stream” in the query string to stream the file contents.
– Cleaned up Content Type variables because I find switch/case statements very wordy.
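The compression-disabling step can be sketched like this; note that apache_setenv() only exists when PHP runs as an Apache module, hence the guard:

```php
<?php
// Disable output compression for this request. Compressed responses
// break the advertised Content-Length and can make browsers abort or
// corrupt the download.
if (function_exists('apache_setenv')) {
    @apache_setenv('no-gzip', '1');             // mod_deflate (Apache)
}
@ini_set('zlib.output_compression', 'Off');     // PHP-level compression
```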
Very nice, thanks for your contributions. I mentioned turning off gzip but didn’t code it. I’m used to setting all these in php.ini, but indeed not everyone has access to it, especially on shared hosting. And yes, content type is handled more neatly in your version. I will integrate your changes into my code.
Try adding ob_clean() on line 92… Might solve the corruption problem…
Thanks, it works.
The script was great, but images were corrupted; when I put ob_clean() on line 92, it works. Thanks a lot!
In the other article, you say “First of all, I notice the use of headers like Content-Description and Content-Transfer-Encoding. There is no such thing in HTTP.”
But why do you use Content-Transfer-Encoding here?
Oops, you’re right. The Content-Transfer-Encoding header was added by contributor Hargobind and I didn’t check. That header should be removed.
Thank you for posting this…it’s very helpful!
My specific usage is only for ZIP files on a shared Linux server
I removed the inline option
and simply forced the “Content-Type”
I also had to remove the “apache_setenv” line in the provided code or it would crash (again Linux)
I noticed that, using this code, Firefox works wonderfully (pause and resume).
Chrome could be paused, but only resumed if the request was relatively quick. (If I waited a couple of minutes, the file could not be resumed)
I was unable to Pause the download at all in IE 9x
(I find this topic very confusing. This solution is still superior to what I was using before, and I hope will help with users suffering from poor connections where a more standard fopen/fread will simply fail. I plan on testing soon.)
I needed a download solution for downloading large files (6 GB) from an IIS server with PHP. Your script is the best I have found so far, but it failed on such large files. I (finally) found that the PHP filesize function is the problem, as explained at https://www.borngeek.com/2011/03/28/php-and-large-file-sizes. For the Windows environment there is a solution using the FileSystemObject. The filesize function could then be replaced by:
$fsobj = new COM("Scripting.FileSystemObject");
$f = $fsobj->GetFile(realpath($file_path));
$file_size = $f->Size;
These lines of code solved my problem with downloading large files.
I needed a download solution for downloading a file over a 3G connection. When we download our content or file using a WiFi connection, it downloads successfully. Over a 3G connection, it fails after 1 MB for every file. Can you explain whether it’s a problem with the application or something else?
Using this script, when I download the image I get:
“Could not load image ‘Lighthouse (7).jpg’.
Error interpreting JPEG image file (Not a JPEG file: starts with 0x3c 0x6c)”
even though I added the content type for JPG and JPEG images.
Yes, I got a solution for my image issue from your contribution.
I need to be able to have my download links like these:
As you can see the files are in different directories.
Is there any way I can do this with your script?
Not with the script as-is. Some changes are required. Personally I prefer to use a database and specify just the file ID. I would NEVER specify the path to the file in the request, it’s a security vulnerability (I have a previous post on that).
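A minimal sketch of the ID-lookup approach; the table and column names here are hypothetical:

```php
<?php
// Look up the real file path by numeric ID instead of accepting a
// path from the client. The client only ever sees ?file=123; the
// mapping to the filesystem lives entirely server-side.
function path_for_file_id(PDO $db, int $id): ?string
{
    $stmt = $db->prepare('SELECT path FROM downloads WHERE id = ?');
    $stmt->execute([$id]);
    $path = $stmt->fetchColumn();
    return $path === false ? null : $path;
}
```

The returned path (or a 404 on null) then feeds into the rest of the download script, and the files can live in as many directories as you like.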
First of all, thanks for the script. I just have problems with zipped files. The upload to the server works well and I can unzip all files. If I download the files with your script (and also with my own script), the ZIP file is broken. I have no idea why. I also tested with gzip files.
zlib.output_compression is off. PDF files and text files work well.
Does this happen in all browsers? Does it happen only with ZIP files? What web server do you use?
Load a simple PHP file in a browser and check its headers (via the developer tools) to see if it’s being delivered compressed.
It runs fine on localhost, but I’m getting an error on the live site.
The website encountered an error while retrieving. It may be down for maintenance or configured incorrectly.
You have to check the PHP error log.
Sir, you didn’t reply to my previous query. Please reply this time. Is my query too bad to reply to?
Query: “When I implemented your code on a live site, the server load went too high to handle. Is there any way to keep the server load from going so high?”
Sorry, I did not understand your question. Are you saying that the CPU load is too high? That should never happen.
You have to understand though I cannot possibly think of all OS-WebServer-PHP configurations. PHP behaves differently depending on OS (Debian/RH/Windows), web server (Apache/nginx/IIS) and PHP configuration, plus versioning.
The code I provided works on 99% of configurations, but shared hosts especially like to tinker with PHP configuration – some of them are truly weird and underpowered.
Not sure why this script turns off compression, but when I comment out those lines it works fine.
Also, for those hosting on go-daddy or similar, change $file_path to:
$file_path = $_SERVER['DOCUMENT_ROOT'] . "/myfiles/" . $file_name;
I’m trying to implement this as a wetransfer-style thing which will retrieve a file from an ftp site given a link like oursite.com/download?file=2Iv03Fkm79Yc9. I’m finding that when it’s installed on our web server, each download gets to 63.6MB, and then cuts off. If the file is smaller than this, it completes fine, but if it’s larger, that’s all you get. Downloading more than one file at once results in the group of files being cut off at 63.6MB in total. Oddly, it runs fine on my laptop with WAMP installed – I’ve tested it up to files of 2GB in size – so it seems it’s something to do with the way the server is configured, as in each case it’s trying to get the files from the same ftp server. I’ve been through php.ini and increased limits for default_socket_timeout, max_execution_time and memory_limit, but with no difference in outcome. Any ideas?
Comments are closed.