[Programming] How to estimate the size of a JPEG after compression? [SOLVED!]
For a personal project, I'm using an algorithm that takes a JPEG, compresses it, and then spits out the compressed JPEG. The compressed JPEGs are of course much smaller. What I'm trying to do now is find a way to estimate what size the compressed JPEG will be before compressing it, so that the user can make a more informed decision between size and quality.
The problem seems to be that compression produces wildly variable results depending on the characteristics (such as bit depth) of the JPEG fed into the algorithm, so there probably isn't a straightforward formula I can use to calculate it.
Does anyone have experience with this sort of thing? Do I just do benchmarking with my algorithm using various kinds of images, or what?
You could do the compression for a set range of quality levels and then present the user with the quality/size list and let them choose for themselves.
There is no way to predict the compression, just compress and see what it is. Any decent prediction is going to be just as costly as compressing it in the first place.
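For instance, a minimal sketch of that compress-and-measure approach using Pillow, assuming the images fit comfortably in memory (the filename is just a placeholder):

```python
import io
from PIL import Image

def size_at_levels(path, qualities=(10, 25, 50, 75, 90)):
    """Compress the image at each quality level and return {quality: size in bytes}."""
    img = Image.open(path).convert("RGB")  # JPEG has no alpha channel
    sizes = {}
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        sizes[q] = buf.tell()
    return sizes

# Present the quality/size list and let the user pick.
for q, n in size_at_levels("photo.jpg").items():
    print(f"quality {q:3d}: {n / 1024:6.1f} KiB")
```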
I may be wrong, but judging from the delay that most apps have in computing the output size of a JPEG image, I'd say they're compressing it in memory and reporting the resulting size from there. In fact, most apps give you a visual preview when you change the compression %, so it's very likely that the compression is being done before you save.
If this is too slow, maybe you can offload it to another thread and get the preview asynchronously, whenever it's ready.
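A sketch of that offloading idea with a single worker thread; the on_ready callback is hypothetical, standing in for whatever updates your preview:

```python
import io
from concurrent.futures import ThreadPoolExecutor
from PIL import Image

_worker = ThreadPoolExecutor(max_workers=1)

def _compressed_size(img, quality):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell()

def request_size_preview(img, quality, on_ready):
    """Start the compression in the background; on_ready(size) fires when done."""
    future = _worker.submit(_compressed_size, img, quality)
    future.add_done_callback(lambda f: on_ready(f.result()))
```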
I did something somewhat similar for an application I was developing - I needed to know how many images I could store on a hard drive. Every time the user saved an image, I adjusted the available HD space based on the size of the next image they might save.
I did it by multiplying the width and height (in pixels) of my input image, and then multiplying that by the bit depth of the output after the jpeg algorithm was applied.
For me, I was always running at 0 compression (i.e. minimum compression) because we needed the best resolution output.
I'd suggest you do something similar but benchmark it for the major compression %'s?
Work out the size of the image at 0, 25, 50, 75, 100% JPEG compression, and then base any previews off those initial benchmarks?
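Along those lines, a sketch that estimates from the raw pixel size scaled by benchmarked ratios. The ratio values below are made-up placeholders you'd measure once on images typical for your app (0 = minimum compression, per the convention above, so the largest output):

```python
from PIL import Image

# Placeholder numbers: average (compressed size / raw size) ratio observed
# at each compression level on a representative image set.
BENCHMARK_RATIOS = {0: 0.30, 25: 0.12, 50: 0.08, 75: 0.05, 100: 0.02}

def raw_size_bytes(path):
    """Uncompressed size: width * height * one byte per channel."""
    img = Image.open(path)
    return img.width * img.height * len(img.getbands())

def estimated_sizes(path):
    raw = raw_size_bytes(path)
    return {level: raw * ratio for level, ratio in BENCHMARK_RATIOS.items()}
```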
When I was researching this, I found a snippet from a site - shown below (taken from http://stackoverflow.com/questions/1740448/how-to-estimate-size-of-a-jpeg-file-before-saving-it-with-a-certain-quality-facto)
What you should be most interested in is the compression ratio, not the bit depth of the image. In other words, almost all images nowadays use a 24-bit depth, translated into R = 8 bits, G = 8 bits and B = 8 bits (RGB). Now, to calculate an image size, first you'll need the size in megapixels (MP), which is basically the width multiplied by the height of the image. Then, to get the size in bytes, you'll multiply the size in MP by the bit depth and divide by 8. In your case that is (N * M * 24)/8. This will give you the raw size of the image, which is the same as that of a raw BMP.
You'll have to divide that by the compression ratio. For example, if your compression ratio is 4:1, this means that you'll divide by 4, making the equation look like this: ((N * M * 24)/8)/4
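A quick worked example of that formula; the 4:1 ratio is just the snippet's example, and real-world JPEG ratios vary a lot:

```python
def jpeg_size_estimate(width, height, bit_depth=24, ratio=4):
    """((N * M * bit_depth) / 8) / ratio, in bytes."""
    raw_bytes = width * height * bit_depth / 8
    return raw_bytes / ratio

# A 1920x1080 24-bit image at an assumed 4:1 compression ratio:
print(jpeg_size_estimate(1920, 1080))  # 1555200.0 bytes, roughly 1.5 MB
```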
alternatively, you could run the compression on some small portion of the image and then extrapolate an estimation based on that.
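Something like this, say: compress a centered crop covering a fraction of the image and scale the result by the area ratio. A rough sketch with Pillow; as pointed out below, it goes wrong when the crop isn't representative:

```python
import io
from PIL import Image

def estimate_from_crop(img, quality, frac=0.25):
    """Compress a centered crop covering `frac` of the area, then scale up."""
    w, h = img.size
    cw, ch = int(w * frac ** 0.5), int(h * frac ** 0.5)
    left, top = (w - cw) // 2, (h - ch) // 2
    crop = img.crop((left, top, left + cw, top + ch)).convert("RGB")
    buf = io.BytesIO()
    crop.save(buf, format="JPEG", quality=quality)
    # Assumes the crop's detail level is typical of the whole image.
    return buf.tell() * (w * h) / (cw * ch)
```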
What if you automatically compress it at a few different quality levels and then linearly interpolate between them? So you compress it at 0%, 25%, 50%, 75%, 100%, and then you can estimate 12.5%, for example, as
[size_of(0%) + size_of(25%)] / 2
and so on for any arbitrary number in the scale.
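A sketch of that interpolation, given a dict of benchmark sizes; the sample numbers are invented just to show the shape of the call:

```python
def interpolated_size(level, sizes):
    """Linearly interpolate the size at `level` from benchmarked levels."""
    points = sorted(sizes)
    lo = max(p for p in points if p <= level)
    hi = min(p for p in points if p >= level)
    if lo == hi:
        return float(sizes[lo])
    t = (level - lo) / (hi - lo)
    return sizes[lo] + t * (sizes[hi] - sizes[lo])

# Invented benchmark sizes in bytes; 0% = least compression, so largest.
sizes = {0: 600_000, 25: 180_000, 50: 120_000, 75: 90_000, 100: 60_000}
print(interpolated_size(12.5, sizes))  # 390000.0, the average of 0% and 25%
```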
alternatively, you could run the compression on some small portion of the image and then extrapolate an estimation based on that.
This would be way off if, for example, you used the top 100x100 corner and the image was blue sky there but busy in the rest of the picture.
The algorithm isn't that intensive, especially if you're not doing thousands of these at a time. Doing a few quick compressions to send back some choices to an end user isn't going to cause much delay.
Maybe it's a cloudy day!
But yeah, jpeg compression isn't terribly intense.