1. 30 Mar, 2021 1 commit
  2. 13 Mar, 2021 1 commit
  3. 07 Dec, 2020 5 commits
    •
      Release 1.8.2.2 · 95e417cb
      Julian Andres Klode authored
    •
      CVE-2020-27350: tarfile: integer overflow: Limit tar items to 128 GiB · 0e3b54db
      Julian Andres Klode authored
      The integer overflow was detected by DonKult, who added a check like this:
      
      (std::numeric_limits<decltype(Itm.Size)>::max() - (2 * sizeof(Block)))
      
      That deals with the code as it is, but the limit is still fairly large
      and could become fragile if we change the code. Let's limit our file
      sizes to 128 GiB, which should be sufficient for everyone.
      
      Original comment by DonKult:
      
      The code assumes that it can add sizeof(Block)-1 to the size of the item
      later on, but if we are close to a 64-bit overflow this is not possible.
      Fixing this properly seems too complex compared to simply ensuring there
      is enough room left: we will have far bigger problems the moment we act
      on files that large, and if an item is that large, the (valid) tar
      containing it probably doesn't fit in 64 bits either.
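      The fixed cap described above can be sketched as follows. This is a
      minimal illustration, not apt's actual code; the constant and helper
      names are assumptions:

      ```cpp
      #include <iostream>

      // Sketch of the 128 GiB cap: reject oversized tar items up front so
      // that later arithmetic such as Size + sizeof(Block) - 1 (Block being
      // the 512-byte tar block) can never overflow a 64-bit size.
      constexpr unsigned long long kMaxTarItemSize = 128ULL << 30; // 128 GiB

      bool TarItemSizeOk(unsigned long long Size) {
         // A fixed limit is easier to reason about than recomputing
         // numeric_limits<...>::max() - 2*sizeof(Block) at each call site.
         return Size <= kMaxTarItemSize;
      }

      int main() {
         std::cout << TarItemSizeOk(4096) << '\n';                // prints 1
         std::cout << TarItemSizeOk(kMaxTarItemSize + 1) << '\n'; // prints 0
      }
      ```

      The design trade-off is the one the commit names: a hard 128 GiB cap is
      stricter than the overflow bound requires, but it stays correct even if
      the padding arithmetic around it changes later.
      
      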
    •
      CVE-2020-27350: debfile: integer overflow: Limit control size to 64 MiB · ed786183
      Julian Andres Klode authored
      Like the code in arfile.cc, MemControlExtract also has buffer
      overflows in the code that allocates memory for parsing control files.
      
      Specify an upper limit of 64 MiB for control files, both to protect
      against Size overflowing (we allocate Size + 2 bytes) and, to some
      degree, against control files consisting only of zeroes.
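      The check can be sketched like this. It is a minimal illustration under
      assumed names, not apt's MemControlExtract; the commit only states that
      Size + 2 bytes are allocated, so the use of the two extra bytes for a
      trailing newline and NUL here is an assumption:

      ```cpp
      #include <iostream>
      #include <memory>

      // Refuse control members above 64 MiB so the Size + 2 allocation
      // below cannot wrap, and so a zero-stuffed control member cannot
      // force a huge allocation.
      constexpr unsigned long long kMaxControlSize = 64ULL << 20; // 64 MiB

      std::unique_ptr<char[]> AllocControl(unsigned long long Size) {
         if (Size > kMaxControlSize)
            return nullptr;                  // oversized: reject
         // Safe: Size <= 64 MiB, so Size + 2 cannot overflow.
         auto Buf = std::make_unique<char[]>(Size + 2);
         Buf[Size] = '\n';                   // assumed use of the 2 bytes
         Buf[Size + 1] = '\0';
         return Buf;
      }

      int main() {
         std::cout << (AllocControl(1024) != nullptr) << '\n';              // prints 1
         std::cout << (AllocControl(kMaxControlSize + 1) != nullptr) << '\n'; // prints 0
      }
      ```
      
      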
    •
      tarfile: OOM hardening: Limit size of long names/links to 1 MiB · 29581d10
      Julian Andres Klode authored
      Tarballs store long names and long link targets using a special tar
      header with a GNU extension, followed by the actual content (padded
      to 512 bytes). Essentially, think of a long name as a special kind
      of file.
      
      The file size field in a tar header is 12 bytes, allowing sizes up
      to about 10**12, or 1 TB. While this works OK-ish for file content
      that we stream to extractors, we need to copy file names into
      memory, and this opens us up to an OOM DoS attack.
      
      Limit the file name size to 1 MiB, as libarchive does, to make
      things safer.
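      A minimal sketch of this hardening, with assumed names (not apt's
      actual code): the long-name pseudo-member's content is the path
      itself, padded to full 512-byte blocks on disk, and its declared
      size is capped at 1 MiB before the name is copied into memory:

      ```cpp
      #include <iostream>

      // Cap the declared size of a GNU long-name/long-link pseudo-member
      // at 1 MiB, as libarchive does, before allocating and copying it.
      constexpr unsigned long long kMaxNameSize = 1ULL << 20; // 1 MiB

      bool LongNameSizeOk(unsigned long long Size) {
         return Size <= kMaxNameSize;
      }

      // Bytes the content actually occupies on disk, rounded up to a
      // full 512-byte tar block (the padding the commit mentions).
      unsigned long long PaddedSize(unsigned long long Size) {
         return (Size + 511) / 512 * 512;
      }

      int main() {
         std::cout << LongNameSizeOk(200) << '\n';              // prints 1
         std::cout << LongNameSizeOk(kMaxNameSize + 1) << '\n'; // prints 0
         std::cout << PaddedSize(1) << '\n';                    // prints 512
      }
      ```
      
      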
    •
      CVE-2020-27350: arfile: Integer overflow in parsing · 66962a66
      Julian Andres Klode authored
      GHSL-2020-169: This first hunk adds a check that more bytes remain
      to be read in the file than the size of the member, ensuring (a)
      that the size is not negative, which caused the crash here, and (b)
      that we similarly avoid other issues caused by trying to read too
      much data.
      
      GHSL-2020-168: Long file names are encoded by a special marker in
      the filename, and the real filename is then part of what is
      normally the data. We did not check that the length of the file
      name is within the length of the member, which means that we got an
      overflow later when subtracting the name length from the member
      size to get the remaining member size.
      
      The file createdeb-lp1899193.cc was provided by GitHub Security Lab
      and reformatted using apt coding style for inclusion in the test
      case; both issues have automated test cases in
      test/integration/test-ubuntu-bug-1899193-security-issues.
      
      LP: #1899193
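      The second check can be sketched as follows. This is a hypothetical
      illustration, not apt's arfile.cc: it assumes the BSD-style "#1/<len>"
      long-name encoding the commit describes, where the first <len> bytes
      of the member data hold the real name, and validates <len> against
      the member size before the subtraction that previously overflowed:

      ```cpp
      #include <cstdlib>
      #include <iostream>
      #include <string>

      // Split a member whose name field carries the "#1/<len>" marker into
      // name length and remaining data length, rejecting lengths larger
      // than the member itself (GHSL-2020-168: the unchecked subtraction
      // below would otherwise wrap around).
      bool SplitLongName(const std::string &NameField,
                         unsigned long long MemberSize,
                         unsigned long long &NameLen,
                         unsigned long long &DataLen) {
         if (NameField.rfind("#1/", 0) != 0)
            return false;                       // no long-name marker
         NameLen = std::strtoull(NameField.c_str() + 3, nullptr, 10);
         if (NameLen > MemberSize)
            return false;                       // name cannot exceed member
         DataLen = MemberSize - NameLen;        // now safe, cannot underflow
         return true;
      }

      int main() {
         unsigned long long NameLen = 0, DataLen = 0;
         std::cout << SplitLongName("#1/16", 100, NameLen, DataLen) << '\n';  // prints 1
         std::cout << DataLen << '\n';                                        // prints 84
         std::cout << SplitLongName("#1/200", 100, NameLen, DataLen) << '\n'; // prints 0
      }
      ```

      The GHSL-2020-169 check follows the same shape one level up: a
      member's declared size must not exceed the bytes remaining in the
      archive before any of it is read.
      
      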
  4. 17 Jul, 2020 33 commits