
Bug #18505 Possible incorrect handling of file names in TAR
Submitted: 2011-05-05 05:11 UTC
From: mariushm Assigned: mrook
Status: Closed Package: Archive_Tar (version SVN)
PHP Version: Irrelevant OS: Irrelevant
Roadmaps: 1.3.14    

 [2011-05-05 05:11 UTC] mariushm (marius marius)
Description:
------------
I'm looking over the code to see how long file names are handled, and couldn't help noticing the following, both in the stable version and in SVN:

    function _readLongHeader(&$v_header)
    {
        $v_filename = '';
        $n = floor($v_header['size']/512);
        for ($i=0; $i<$n; $i++) {
            $v_content = $this->_readBlock();
            $v_filename .= $v_content;
        }
        if (($v_header['size'] % 512) != 0) {
            $v_content = $this->_readBlock();
            $v_filename .= trim($v_content);
        }

I assume this section runs after the code detects a 512-byte block of type "L" ("././@LongLink"), and retrieves the long file name. I don't think trim() is the right function to use on that last block: besides 0x00 bytes, it actually strips six character codes (space, tab, newline, carriage return, null, and vertical tab). Of these, all but 0x00 are valid characters on a Linux system and can appear anywhere in a file name, including right at the end. Windows is more restrictive, but while uncommon, a Windows file name can end with a space character, and the other characters could also carry meaning if file names are in Unicode (or, for example, if someone uses \\?\c:\[very long path], Microsoft's sanctioned way around the Windows long-path limitation).

substr() should be used instead, especially since the full length of the file name is known from the header.

Also, in the regular _readHeader() function, I don't see the file name being reduced to its actual length by stripping the trailing nulls. I'm not sure whether this is a problem; I'm too busy to test right now whether PHP automatically removes the trailing null bytes from the string.

Test script:
---------------
No test script

Expected result:
----------------
Don't expect anything as there's no script
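[Editor's note] The corruption the reporter describes can be illustrated outside PHP. Below is a minimal sketch in Python (PHP's trim() with its default mask corresponds roughly to str.strip(" \t\n\r\0\x0b")); the helper names are hypothetical and not part of the Archive_Tar API, and the scenario assumes a GNU long-name entry whose file name legitimately ends in a space:

```python
# Sketch of the bug: trimming the final 512-byte block of a GNU "L"
# (long name) entry corrupts names ending in any of trim()'s strip set.
# Hypothetical helpers for illustration; not the Archive_Tar API.

BLOCK = 512
PHP_TRIM_CHARS = " \t\n\r\0\x0b"  # the six characters PHP's trim() strips

def read_long_name_trim(blocks, size):
    """Mimics the reported code: trim() applied to the last partial block."""
    name = "".join(blocks[:size // BLOCK])
    if size % BLOCK != 0:
        name += blocks[size // BLOCK].strip(PHP_TRIM_CHARS)
    return name

def read_long_name_substr(blocks, size):
    """Proposed fix: cut to the exact length recorded in the header
    (the stored size includes the terminating NUL), then drop only NULs."""
    name = "".join(blocks)
    return name[:size].rstrip("\0")

# A file name that legitimately ends in a space, padded out to one block.
filename = "report final "
size = len(filename) + 1                      # name plus trailing NUL
blocks = [(filename + "\0").ljust(BLOCK, "\0")]

print(repr(read_long_name_trim(blocks, size)))    # trailing space lost
print(repr(read_long_name_substr(blocks, size)))  # trailing space kept
```

The substr-style version works because the header's size field already tells the reader exactly how many bytes of the padded blocks belong to the name; only the padding NULs need removing.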


 [2011-05-05 05:26 UTC] mariushm (marius marius)
PS. Perhaps in the case of long file names, the size field should also be double-checked. Since that field holds 11 octal digits, someone could hack a tar file to store the maximum possible value there: octal 77777777777. octdec() would return a float (8589934591), since the value is larger than a 32-bit integer; value / 512 would then give float(16777215), but on a 32-bit build value % 512 would result in -1, which would make _readLongHeader() read an extra 512 bytes of data because -1 != 0. It's a bit silly, admittedly, but one could gzip/bzip2 8 GB of nulls into a few hundred KB just to mess with some websites using this library.
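[Editor's note] The arithmetic in this follow-up can be checked directly. A small sketch in Python, modelling the 32-bit truncation that a 32-bit PHP build applies before the % operator (Python integers are arbitrary-precision, so the wraparound is simulated explicitly; math.fmod stands in for PHP's %, which keeps the sign of the dividend):

```python
import math

# Largest value an 11-digit octal tar size field can hold.
max_size = int("77777777777", 8)
print(max_size)                      # 8589934591, i.e. 8 GiB - 1

# floor(size / 512), as the PHP code computes via floats.
print(math.floor(max_size / 512))    # 16777215

# On a 32-bit PHP build, % first truncates the operand to a signed
# 32-bit integer; simulate the two's-complement wraparound by hand.
truncated = max_size & 0xFFFFFFFF
if truncated >= 2**31:
    truncated -= 2**32
print(truncated)                     # -1

# PHP's % keeps the dividend's sign (like C), so the remainder is -1,
# which is != 0 and triggers one extra 512-byte read.
print(math.fmod(truncated, 512))     # -1.0
```

This reproduces the reporter's numbers: the size check passes the "partial block" test with a nonsensical negative remainder instead of rejecting the oversized field.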
 [2015-02-26 21:15 UTC] mrook (Michiel Rook)
-Roadmap Versions: +Roadmap Versions: 1.3.14
 [2015-02-26 21:18 UTC] mrook (Michiel Rook)
-Status: Open +Status: Closed -Assigned To: +Assigned To: mrook
This bug has been fixed in SVN. If this was a documentation problem, the fix will appear by the end of next Sunday (CET). If this was a problem with the website, the change should be live shortly. Otherwise, the fix will appear in the package's next release. Thank you for the report and for helping us make PEAR better.