If you're just asking what the overhead is on a volume you have in your
hand:
overhead = VolumeSize - SumOverAllFiles(file size rounded up to cluster size)
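As a minimal sketch of that calculation in Python (the 4 KB cluster size and
the root path are assumptions here; the real cluster size comes from the
volume itself, e.g. GetDiskFreeSpace on Windows, and hard links will be
double-counted by a naive walk like this). It also subtracts free space so
that unused clusters don't all get counted as overhead:

    import os
    import shutil

    CLUSTER_SIZE = 4096  # assumed; query the actual volume for the real value

    def rounded_up(size, cluster=CLUSTER_SIZE):
        # Round a file size up to the next cluster boundary.
        return -(-size // cluster) * cluster

    def volume_overhead(root):
        total, used, free = shutil.disk_usage(root)
        file_bytes = 0
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    file_bytes += rounded_up(
                        os.path.getsize(os.path.join(dirpath, name)))
                except OSError:
                    pass  # skip files we can't stat
        # VolumeSize - free space - cluster-rounded file data
        # = bytes consumed by metadata and other overhead
        return total - free - file_bytes

    print(volume_overhead("C:\\"))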
Trying to go at it the other way, counting all of the bytes in this or that
stream of metadata (USN journal, MFT, directory index, security index, etc.),
will suffer from a lot of imprecision and just be a headache. File
fragmentation affects how many MFT file records are needed to hold a file's
allocation information; you can't precisely predict the cutoff below which a
small file's data is stored resident in its MFT record (as Al pointed out);
and so forth.
You can make a rough but reasonable prediction of about 1KB of overhead per
file by accounting for just the MFT file record (1KB per record by default).
Since all kinds of specific details factor into this, it may be easier to
empirically derive the average overhead given your volume's usage pattern, as
in the sketch below.
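If you go the empirical route, something along these lines can measure it
(the volume path, payload size, file count, and cluster size are all
assumptions; pick a payload large enough to be non-resident, and mimic your
real usage pattern as closely as you can):

    import os
    import shutil
    import tempfile

    PAYLOAD = b"x" * 8192  # big enough to be non-resident in the MFT record
    N_FILES = 10_000       # more files smooths out chunky metadata allocations
    CLUSTER = 4096         # assumed cluster size for the volume under test

    def avg_overhead(volume_dir):
        # Create N_FILES identical files and see how much free space they
        # consume beyond their cluster-rounded data, averaged per file.
        with tempfile.TemporaryDirectory(dir=volume_dir) as scratch:
            before = shutil.disk_usage(scratch).free
            for i in range(N_FILES):
                with open(os.path.join(scratch, f"f{i}"), "wb") as f:
                    f.write(PAYLOAD)
                    f.flush()
                    os.fsync(f.fileno())  # force allocation to hit the disk
            after = shutil.disk_usage(scratch).free
        consumed = before - after
        data = N_FILES * (-(-len(PAYLOAD) // CLUSTER) * CLUSTER)
        return (consumed - data) / N_FILES

    print(avg_overhead("D:\\"))  # hypothetical volume to test

Run it on an otherwise idle volume; background writers will skew the
free-space deltas.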