A year ago, I defined an estimated efficiency for AI photo culling. Since then, the company I lead, Camera Futura, has made further improvements to its flagship product, Futura Photo, and it is now possible to refine that estimate.
The rates of “false positives” (FP: images wrongly flagged for discarding, which is painful) and “false negatives” (FN: images that should have been flagged but were missed) have not changed dramatically. Dramatic improvements, however, have come from new rules and an improved workflow.
First, when you shoot in burst mode, it is important to remember that only one or two images from a burst should be post-processed; there are few reasons to keep the others. So a new rule has been implemented to detect bursts, group the images of each burst together, and select the best candidates: for instance the sharpest ones, those with the best exposure, or those with the sharpest faces.
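The details of Futura Photo's implementation are not public, but as a rough illustration, a burst-grouping pass could look like the sketch below. The class names, the timestamp-gap heuristic, and the single sharpness score (e.g. a variance-of-Laplacian value computed elsewhere) are all assumptions for illustration, not the actual product logic:

```python
from dataclasses import dataclass

@dataclass
class Photo:
    name: str
    timestamp: float   # seconds since the start of the shoot
    sharpness: float   # precomputed score, e.g. variance of Laplacian; higher is sharper

def group_bursts(photos, max_gap=1.0):
    """Group photos whose timestamp is within max_gap seconds of the previous shot."""
    photos = sorted(photos, key=lambda p: p.timestamp)
    bursts, current = [], [photos[0]]
    for p in photos[1:]:
        if p.timestamp - current[-1].timestamp <= max_gap:
            current.append(p)
        else:
            bursts.append(current)
            current = [p]
    bursts.append(current)
    return bursts

def pick_keepers(burst, keep=1):
    """Keep the `keep` sharpest images of a burst; the rest become culling candidates."""
    ranked = sorted(burst, key=lambda p: p.sharpness, reverse=True)
    return ranked[:keep], ranked[keep:]
```

A real implementation would combine several signals (exposure, face sharpness) instead of a single score, but the overall shape — group by time proximity, then rank within each group — is the same.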
Second, for photo shoots with hundreds of images or more, it can be really challenging to cull everything you should in a single round: there are simply too many images, and you don’t want to discard the wrong ones. But you might, and that is something you only notice during post-processing. So the workflow of Futura Photo has been changed and a “2nd Analysis” rule has been implemented. After the first culling, another round of analysis focuses on similar images not yet culled. These similar images are grouped together, as during the 1st analysis, which improves the culling rate, typically discarding 20% more images.
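Again purely as an illustration of the idea, a second-pass similarity grouping over the images that survived the first round could be sketched as a greedy clustering on perceptual hashes. The function name, the hash representation (an integer bit string), and the Hamming-distance threshold are assumptions, not Futura Photo's actual method:

```python
def hamming(h1: int, h2: int) -> int:
    """Number of differing bits between two perceptual hashes."""
    return bin(h1 ^ h2).count("1")

def second_analysis(remaining, threshold=6):
    """Greedily group not-yet-culled images whose perceptual hashes are close.

    `remaining` is a list of (name, phash) pairs; each image joins the first
    group whose representative hash is within `threshold` bits, else it
    starts a new group. Groups with more than one member are candidates
    for further culling.
    """
    groups = []
    for name, phash in remaining:
        for g in groups:
            if hamming(phash, g[0][1]) <= threshold:
                g.append((name, phash))
                break
        else:
            groups.append([(name, phash)])
    return groups
```

Within each multi-image group, the same keep-the-best ranking used for bursts could then be applied, which is how a second pass can recover the roughly 20% of extra discards mentioned above.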
A year ago, it was typically possible to cull 50% to 60% of the images automatically.
Thanks to the “2nd Analysis” rule (another round of analysis after the first culling), this ratio has risen to typically 60% to 70%.
For action shoots, the ratio can now reach 75% to 85% thanks to the new “Burst” rule.
You can of course try Futura Photo for free and see what AI photo culling can do for you.