feat: return better error if file size is too big to upload #7775
Conversation
provisionersdk/archive.go (Outdated)
	if limit != 0 && totalSize >= limit {
-		return xerrors.Errorf("Archive too big. Must be <= %d bytes", limit)
+		return fileTooBigError
	}
I kept this because this is checking the actual bytes written. I can't imagine fileInfo.Size() being incorrect, as the comment on Size is:

// length in bytes for regular files; system-dependent for others

But that "system-dependent for others" part is why I kept this here. I don't know how symlinks or other edge cases are handled, and the question of "what does Windows do" is not something I care to look into. Doing the check twice is super cheap though, so I can't see this being bad.
Does this actually fix the underlying problem? AFAICT we're not measuring the actual tar size, so there's a small range of inputs where the total file size is under the limit but tar overhead (header prefix, file names) pushes it over. Or does Write take that into account?
provisionersdk/archive.go (Outdated)
@@ -106,7 +111,7 @@ func Tar(w io.Writer, directory string, limit int64) error {
	}
	totalSize += wrote
	if limit != 0 && totalSize >= limit {
Either the error or this (and the other) check is wrong: the error says <=, this check says >=, which is right?
Another observation is that this doesn't account for tar overhead, AFAICT, so we might end up with an archive that's too large anyway, at which point we would hit the original issue again (a bad message from the API).
Perhaps we can wrap w (wc := writeCounter{w}) before passing it on to the tar writer and count the bytes written to it? Then we don't need to track wrote here and can just check wc.written.
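A minimal sketch of that suggestion, assuming the writeCounter name and written field from the comment above (neither is taken from the actual codebase) and that io is imported:

// writeCounter forwards writes to the wrapped writer and records how many
// bytes actually went through, including tar headers and padding.
type writeCounter struct {
	w       io.Writer
	written int64
}

func (wc *writeCounter) Write(p []byte) (int, error) {
	n, err := wc.w.Write(p)
	wc.written += int64(n)
	return n, err
}

Tar could then wrap the destination once (wc := &writeCounter{w: w}), hand wc to tar.NewWriter, and compare wc.written against limit after each file, so the check covers everything that actually hits the writer.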
You are right about the headers. Honestly, I didn't look too deeply at the code and just assumed totalSize was being tracked correctly 🤦.
As for <= vs. >=, I'll change the conditional to > on the check to be consistent with the API.
@mafredri good catch. I am using a limit writer now
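For context, a rough sketch of what such a limit writer could look like; the limitWriter name and error text are illustrative assumptions, not the PR's actual implementation:

// limitWriter rejects writes once more than limit bytes would have passed
// through it, so the size check also covers tar headers and padding.
type limitWriter struct {
	w     io.Writer
	n     int64 // bytes written so far
	limit int64
}

func (lw *limitWriter) Write(p []byte) (int, error) {
	if lw.limit != 0 && lw.n+int64(len(p)) > lw.limit {
		return 0, xerrors.Errorf("archive exceeds %d bytes", lw.limit) // illustrative error
	}
	n, err := lw.w.Write(p)
	lw.n += int64(n)
	return n, err
}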
Writing a test or two
Nice improvement 👍
Fixes: #7071
The issue was that the error condition was only checked after the bytes had been written, so the coderd side returned an API error for the content being too large before the CLI could do the same check client-side.
To fix this, I check whether we are going to exceed the limit using fileInfo.Size before we write.
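As a sketch of that description, assuming the same Tar loop, limit, totalSize, and fileTooBigError from the diff above, with fileInfo being the os.FileInfo of the entry about to be written:

// Reject the file before writing any of its bytes, so the CLI can report
// the size error itself instead of coderd rejecting the upload later.
if limit != 0 && totalSize+fileInfo.Size() > limit {
	return fileTooBigError
}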