I learned a nice trick from my brilliant colleagues that left me once again wondering why on earth I discover these things only now. It's funny to run into something literally everyone else seems to know and I somehow missed.

Say you have a file that you want to download from some trusted server. You may already have a local copy, but it could be corrupted or only partially downloaded. You could verify the file's integrity by comparing the local and remote file hashes, and that is indeed the proper way to do it. But if you trust the remote, plain old HTTPS is enough: send a HEAD request to fetch only the headers and read the content length. Compare that to the local file size, and there you have it, a naive yet adequate way to check whether you need to redownload the file. Note that this does not validate that the file contents are equal, only that the sizes match.
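As a minimal sketch, here is how that check might look in Python with the standard library. The URL and file path are placeholders, and this assumes the server actually sends a `Content-Length` header (some responses, such as chunked ones, omit it):

```python
import os
import urllib.request

def needs_redownload(url: str, local_path: str) -> bool:
    """Naive freshness check: compare the remote Content-Length
    (fetched with a HEAD request) to the local file size.

    Returns True if the local file is missing or the sizes differ.
    Equal sizes do NOT prove the contents are identical."""
    if not os.path.exists(local_path):
        return True

    # HEAD request: headers only, no body is transferred.
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        length = resp.headers.get("Content-Length")
        if length is None:
            # No Content-Length header; can't tell, so redownload.
            return True
        remote_size = int(length)

    return os.path.getsize(local_path) != remote_size
```

You would call it with something like `needs_redownload("https://example.com/data.bin", "data.bin")` before deciding whether to fetch the file again.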