Why Futura Photo?

November 14th, 2019 marked the official launch of Futura Photo 1.0. The company behind the software is Camera Futura Sàrl, headquartered in Geneva, Switzerland.

The software aims to streamline the pre-processing steps needed after a photo session. Over the last few years, I have had the feeling that (1) I was wasting my time culling images after each photo session and (2) I was not always very efficient at it.

Of course, professional photographers who always shoot the same kind of images have learned how to cull their thousands of weekly images efficiently. But for amateurs shooting 10’000-20’000 images per year, or for people who like to try and experiment, I concluded a tool was needed not only to speed up the culling itself and make it more efficient, but also to provide a “quality gate”. Many images should not even pass this gate, as they simply do not meet basic requirements for acutance or exposure. Furthermore, duplicates have always been a struggle for photographers. Similarly, managing hundreds or even thousands of time-lapse members can be time consuming. And panorama members can easily be missed when you start building panoramas of 10-plus images.

I could add many more examples where we photographers need software even before the post-processing work starts: aligning time-lapse members when the camera has been shaken by wind, choosing between JPG and RAW when both were shot together, and much more.

I have noticed that over the last few years I have done a good job of keeping only the best images, 5% at most of what I shoot, but only because I invest time after each photo session. And these tasks are not fun. They are boring, time consuming… and, from my perspective, very important. I don’t keep the useless images that would “pollute” my image library or require several terabytes of storage. Over these years, the need to automate these tasks has become more and more obvious. Last but not least, this is important for our future in a sustainable world!

That is Futura Photo’s goal: to complement the many existing software tools that help photographers, letting them spend more time doing what they like and less time on what is necessary but not exactly exciting.

The need to streamline the pre-processing of a photo session

This is not the most glamorous title one could expect, as image processing after a photo session is a pain at worst and a necessity at best. It is something photographers don’t like to talk about much. They prefer to discuss how to enhance their images. Fair enough. But execution is key, and just because something is boring does not mean it is not important!

So, even before the processing itself, meaning classifying images into categories (best, to be archived, to be deleted, …) and enhancing the best with dedicated software (Photoshop, Lightroom, Capture One, whatever, …), there are several steps that are all uninteresting and time consuming. In particular:

  • If you shoot RAW + JPG, you need to decide what to do with the RAW or the JPG,
  • If you shoot time lapses and compose them manually (with software like LR Time Lapse),
  • If you shoot panoramas that you also want to compose manually,
  • If, like me, you don’t like storing dozens of similar images, you must first delete the duplicates,
  • If you reject some images because of their exposure, grain, or other technical issues,
  • If you shoot both videos and still images (each usually requires a dedicated workflow),
  • … and much more.
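To make the first bullet concrete: pairing RAW and JPG files mostly comes down to grouping shots by base filename. The sketch below is a minimal illustration in Python; the extension lists and the `find_raw_jpg_pairs` helper are my own hypothetical names, not part of Futura Photo or any existing tool.

```python
from collections import defaultdict
from pathlib import PurePath

# Illustrative extension sets; real cameras use many more RAW formats.
RAW_EXTS = {".cr2", ".cr3", ".nef", ".arw", ".dng", ".raf"}
JPG_EXTS = {".jpg", ".jpeg"}

def find_raw_jpg_pairs(filenames):
    """Group shots by base name and return the stems shot as RAW + JPG."""
    by_stem = defaultdict(set)
    for name in filenames:
        p = PurePath(name)
        ext = p.suffix.lower()
        if ext in RAW_EXTS:
            by_stem[p.stem].add("raw")
        elif ext in JPG_EXTS:
            by_stem[p.stem].add("jpg")
    return sorted(stem for stem, kinds in by_stem.items()
                  if kinds == {"raw", "jpg"})

print(find_raw_jpg_pairs(["IMG_001.CR2", "IMG_001.JPG", "IMG_002.JPG"]))
# ['IMG_001']  -- IMG_001 was shot RAW + JPG; IMG_002 is JPG only.
```

Once the pairs are known, a tool could then apply whatever policy the photographer prefers: keep both, archive the JPG, or delete it.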

I am still surprised these steps have not been automated, or only very partially, and certainly not in an integrated way that lets photographers with different needs improve their productivity. Modern technology could support them: maybe not in choosing the best image, but at least in automating file moves and deletions, or simply in speeding up these steps while making them more efficient. For example, I am not aware of a tool that helps detect which images are part of a panorama. Many software packages exist to assemble a panorama, but when you shoot thousands of images, it is not so trivial to pick out the panorama members from a mass of below-average images.

Another example is time lapses. In a set of hundreds or thousands of images, some belonging to a time lapse and others not, the members can be tricky to detect, and at the very least it takes time to sort out the whole set.
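To sketch how such detection could work (this is my own illustration, not Futura Photo’s actual algorithm): a time lapse shows up in the EXIF capture times as a long run of shots at a nearly constant interval, so scanning a sorted list of timestamps for such runs is a reasonable first pass.

```python
def find_timelapse_runs(timestamps, tol=1.0, min_len=20):
    """Return (start, end) index pairs of runs shot at a near-constant
    interval. timestamps: capture times in seconds, sorted ascending.
    tol: how much consecutive gaps may differ (seconds).
    min_len: minimum number of frames to count as a time lapse."""
    runs, start = [], 0
    for i in range(2, len(timestamps)):
        prev_gap = timestamps[i - 1] - timestamps[i - 2]
        gap = timestamps[i] - timestamps[i - 1]
        if abs(gap - prev_gap) > tol:      # interval changed: run breaks here
            if i - start >= min_len:
                runs.append((start, i - 1))
            start = i - 1
    if len(timestamps) - start >= min_len:  # close the final run
        runs.append((start, len(timestamps) - 1))
    return runs

# 30 frames every 5 s, then two unrelated shots much later:
times = [i * 5.0 for i in range(30)] + [500.0, 900.0]
print(find_timelapse_runs(times))  # [(0, 29)]
```

A real implementation would read `DateTimeOriginal` from EXIF and tolerate sub-second jitter, but the grouping logic would look much like this.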

Last but not least, I understand the “one-stop-shop” approach. That is the holy grail in software, and it is what Lightroom (and its direct competitors) tries to achieve for most photographers. But I am not convinced, as needs can be antagonistic, and a one-stop shop means “compromise”. I would rather, maybe naively, believe in a long-term trend of software tools working together rather than one tool doing everything. My point? There is still room in 2019 for new software in the image workflow.

Why it is important to keep only the best shots after each photo session

I see several reasons to keep only the best images after each photo session and to archive or delete all the other shots. We should probably not keep more than 5% of the photos we take, and that ratio tends to decrease the older a photo gets. Not many photographers have the discipline to take the time needed to go through every image and remove duplicates, poorly exposed shots, and badly focused photos. But doing so has several key advantages:

First, we are no longer “polluted” by average or poor images when we search our image catalogues or look back at our work, whatever the reason. It also drastically reduces the storage needed. One could argue storage is now so cheap that this is a pretty weak reason, but as I wrote already, it is good practice for our planet.

Since cleaning the backlog is painful, it makes sense at least to apply the principle to every new photo session, and to apply it occasionally when we browse older archived photo sessions.

This is a classic quality-gate methodology that also makes sense for photography. As 80% to 95% of images tend to be useless and, let’s be honest, not so great, the impact is significant. Believe me or not, it is a real pleasure to browse only images you really like. But again, this is a question of both discipline and technology, as there is so far little software to help you focus on your best images.

There are no rules for good photographs, but there are rules for poor photographs

A “good” image for some, but no rules apply, and some will not even like this image

As DPReview’s Nigel Danson reminds us, quoting Ansel Adams: “There are no rules for good photographs. There are just good photographs.”

There are no rules for good photographs, fair enough, but I am convinced there are rules to define and detect the poor ones, whatever “poor” may mean for each photographer. In a digital world, we can take a great many pictures. I shoot 10’000-20’000 photos per year (a pro can shoot over 100’000 per year). I use no more than 1’000 of them. I like to believe it is important to delete most of the rest, just to make my life simpler when I start the post-processing steps and when I look back at my images, whether to search or for other reasons.

Less is more?

Taking a lot of pictures is not always a bad habit, but at the end of the day, we all must cope with a huge number of useless, poor pictures. Therefore, it seems important to define tangible rules that can be applied manually or through software to eliminate the bad ones as early as possible in the workflow. Ideally, this would even happen at “run time” during the shoot itself, which is certainly possible if images are uploaded to the cloud in real time and analyzed right away.

But to be more concrete, let’s say there is a need to detect and delete (non-exhaustively):

  • Poorly exposed images,
  • Motion blur and focus blur (when neither is intentional),
  • Useless duplicates (whatever “duplicate” may mean).
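As a minimal sketch of the first bullet, assuming 8-bit luminance values: one simple test counts how much of the image piles up at the extremes of the histogram, i.e. crushed blacks or blown highlights. The thresholds below are arbitrary assumptions of mine; real culling software would expose them as user settings.

```python
def is_badly_exposed(luminances, clip_fraction=0.25):
    """Flag an image whose 8-bit luminance values pile up at the
    extremes: large areas of crushed blacks or blown highlights."""
    if not luminances:
        return False
    dark = sum(1 for v in luminances if v <= 5)      # near-black pixels
    bright = sum(1 for v in luminances if v >= 250)  # near-white pixels
    return max(dark, bright) / len(luminances) > clip_fraction

# A frame that is mostly blown highlights is flagged; a mid-grey one is not.
print(is_badly_exposed([255] * 80 + [128] * 20))  # True
print(is_badly_exposed([128] * 100))              # False
```

Blur and duplicate checks would follow the same pattern: a cheap per-image score, compared against a per-photographer threshold.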

Many photographers may claim there is no way to detect poor images programmatically, given the non-deterministic nature of art. For instance, a histogram might not be enough to detect a poorly exposed image. At the same time, it will be difficult to convince me that an image a photographer failed to take as intended is worth keeping, as soon as one believes there is a quality standard to comply with in art. It is also about being disciplined and mastering what we do. So it may not be acceptable to keep working on images for which we wrongly set too high an ISO or too slow a shutter speed, or where the main subject is not in focus as we wanted.
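Indeed, a plain histogram says nothing about focus. A common heuristic for the blur case, though I cannot claim it is what Futura Photo implements, is the variance of a Laplacian filter: a frame with few edges scores low. A pure-Python sketch on a 2D grayscale grid:

```python
def sharpness_score(gray):
    """Variance of a 3x3 Laplacian over a 2D grayscale image
    (a list of rows). Low variance suggests few edges, i.e. a
    likely blurred or out-of-focus frame."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

flat = [[128] * 5 for _ in range(5)]                       # no detail at all
checker = [[255 if (x + y) % 2 == 0 else 0 for x in range(5)]
           for y in range(5)]                              # maximal edges
print(sharpness_score(flat))     # 0.0
print(sharpness_score(checker) > sharpness_score(flat))    # True
```

In practice one would compare the score against a threshold calibrated per camera and subject type, which is exactly the kind of per-photographer setting argued for above.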

It is simple, but not easy

As a conclusion, I tend to disagree that it is impossible for software to detect poor images. It is certainly possible to detect them automatically and get rid of them. The result will not be the same for every photographer: each one may have to set the acceptable quality level in terms of exposure, acutance, and duplication.

It may be very difficult to delete all the poor images, but fine-tuning the parameters and the algorithms so that we get rid of most of the uninteresting ones would be more than good practice. It would save time and let photographers focus on what really matters: the good photographs, for which there are indeed no rules.