I don't know much about photography and much less about image formats. The little I know comes mostly from web development. There I use images as semi-static assets, and I know how to work around the potential problems.
That experience was of limited use when I started handling user-created images in a React Native app I'm working on. I needed a crash course in JPEGs.
We use react-native-camera to take photos. Snapping one takes an elusive option called quality. It is a number between zero and one, and in the example code it was set to 0.5. Zero point five. What does that mean? Is that good? 0.5 does not feel that good. Surely a modern iPhone deserves more than 0.5? The default value is 1, though. Is that better? I wanted to find out what turning that mystery knob entails.
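For context, a minimal sketch of the call site. The option names follow the react-native-camera documentation; the camera ref and the surrounding component are assumed to exist elsewhere:

```javascript
// Sketch, not our actual code: takePictureAsync accepts a quality
// option between 0 and 1. The camera ref would come from an
// <RNCamera> component (assumed here, hence the call is commented out).
const options = {
  quality: 0.5, // the mystery knob: the default is 1, the example code uses 0.5
  base64: false,
};
// const photo = await this.camera.takePictureAsync(options);
// console.log(photo.uri);
```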
Spoiler alert: the JPEG quality setting is clearly explained in the JPEG Wikipedia article, but obviously, I needed to jump through some hoops to get there. I want to go through some generic thoughts on how to approach these kinds of voodoo configuration parameters. Perhaps 0.5 is a suitable default, but who knows, right? Taking photos is a core feature, so I want to set the quality properly.
Initially, without knowing anything about JPEG compression and quantization tables, numerous things were puzzling:
- As said, the default is 1, but the example code sets it to 0.5
- Quality 0.5 intuitively seems low to me. Was it added to the example years ago and is now outdated? Is it a good-for-all or a not-worst-case-for-all type of default?
- What happens if I change it? Is the effect platform-specific? Will it affect performance? It will certainly affect file sizes
- If I set it too low, the photos look terrible, but we save on our cloud costs. If I set it too high, the photos look great but are slow to load, and we need to pay more to the cloud provider.
- Do I need to do some A/B testing with the mystery knob?
- How can I explain to the next developer that the mystery knob should be set to a specific value?
Figuring out that 0.5 is indeed a perfectly acceptable value for the app was awkward. It involved watching a YouTube video about JPEG, intense googling about quantization tables, glancing through a whitepaper, reading some Android documentation, extracting the quantization tables from a couple of test images, and finally reading the Wikipedia article, which would have been the obvious place to start.
Although the example code happens to work for us, I think 0.5 is an unfortunate choice for example code: it suits only applications that do not require decent image quality. I'll settle for the conclusion that my lumpish exercise was not in vain. Once again, a lesson in the dangers of copy-pasting code from GitHub (although it's fine 90% of the time 😉).
PS: If you want to produce a "good enough" JPEG, use a quality value between 50 and 100 (on the libjpeg scale of 1–100). Q=50, which uses the standard's quantization tables as-is, is nowadays considered low.
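To make that Q number concrete: libjpeg derives the actual quantization table from the standard's base table with a simple scaling formula. Here is a sketch of that formula, using the base luminance table from Annex K of the JPEG standard (the function name is mine):

```javascript
// Standard luminance quantization table (JPEG Annex K), row-major order.
const BASE_LUMINANCE = [
  16, 11, 10, 16, 24, 40, 51, 61,
  12, 12, 14, 19, 26, 58, 60, 55,
  14, 13, 16, 24, 40, 57, 69, 56,
  14, 17, 22, 29, 51, 87, 80, 62,
  18, 22, 37, 56, 68, 109, 103, 77,
  24, 35, 55, 64, 81, 104, 113, 92,
  49, 64, 78, 87, 103, 121, 120, 101,
  72, 92, 95, 98, 112, 100, 103, 99,
];

// libjpeg's quality scaling: Q=50 keeps the base table as-is,
// lower Q scales the divisors up (stronger compression, worse quality),
// higher Q scales them down (milder compression).
function scaledTable(quality) {
  const scale = quality < 50 ? 5000 / quality : 200 - 2 * quality;
  return BASE_LUMINANCE.map((v) =>
    Math.min(255, Math.max(1, Math.floor((v * scale + 50) / 100)))
  );
}

// Q=50 reproduces the base table; Q=100 collapses every divisor to 1,
// i.e. no quantization loss beyond rounding.
console.log(scaledTable(50)[0]);  // 16
console.log(scaledTable(100)[0]); // 1
```

This is also why you can estimate the quality an existing JPEG was saved with: extract its quantization tables and compare them against scaled versions of the Annex K tables.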
djpeg can be used to extract the quantization tables from a JPEG; with enough verbosity it prints one table for luminance and one for chrominance:

```
djpeg -verbose -verbose -verbose image.jpg > /dev/null
```

[Image: quantization tables extracted from a test image, matching the "standard" compression (Q=50)]
I pulled a random image from Instagram, and it looks like it had less compression, which is what you would expect from an application where images are a core feature.