Why Futura Photo?

Futura Photo 1.0 was officially launched on 14 November 2019. The company behind the software is Camera Futura Sàrl, headquartered in Geneva, Switzerland.

The software aims to streamline the pre-processing steps needed after a photo session. Over the last few years, I have had the feeling that (1) I was wasting my time culling images after a photo session and (2) I was not always very efficient at it.

Of course, professional photographers who always shoot the same kind of images have learned how to cull their thousands of images efficiently every week. But for amateurs shooting 10’000–20’000 images per year, or for people who like to try and experiment, I concluded that a tool was needed not only to speed up the culling itself and make it more efficient, but also to provide a “quality gate”. Many images should not even pass this gate, as they simply do not meet basic requirements for acutance or exposure. Furthermore, duplicates have always been a struggle for photographers. Similarly, managing hundreds or even thousands of time-lapse frames can be time consuming, and panorama members can easily be missed once you start building panoramas of 10-plus images.

I could add many more examples where we photographers need software to help us even before the post-processing work starts: aligning time-lapse frames when the camera has been shaken by wind, choosing between JPG and RAW when both were shot together, and much more.

I have noticed that over the last years I have done a good job of keeping only the best images – 5% of what I shoot at most – but only because I invest time after each photo session. And these tasks are not fun. They are boring, time consuming… and, from my perspective, very important. I do not keep useless images that would “pollute” my image library or require several terabytes of storage. Over these years, the need to automate these tasks has become more and more obvious. Last and maybe not least, this is also important for our future in a sustainable world!

That’s Futura Photo’s goal: to complement the many existing software tools that help photographers, so that they spend more time doing what they like and less time on what is needed but not exactly exciting.

There are no rules for good photographs, but there are rules for poor photographs

A "good" image for some, but no rules can apply and some will not even like this image

As DPReview’s Nigel Danson reminds us, and to quote Ansel Adams: “There are no rules for good photographs. There are just good photographs”.

There are no rules for good photographs, fair enough, but I am convinced there are rules to define and detect the poor ones, whatever “poor” may mean for each photographer. In a digital world, we can take a very large number of pictures. I shoot 10’000–20’000 photos per year (a pro can shoot over 100’000). I use no more than 1’000 of them. I like to believe it is important to delete most of them, simply to make my life easier when I start post-processing and when I look back at my images, whether to search for something or for any other reason.

Less is more?

Taking a lot of pictures is not always a bad habit, but at the end of the day we all must cope with a huge number of useless, poor pictures. Therefore, it seems important to define some tangible rules that can be applied manually or through software to eliminate the bad ones as early as possible in the workflow. Ideally, this would happen at “run time”, during the shoot itself, which is certainly possible if images are uploaded to the cloud in real time and analysed right away.

But to be more concrete, let’s say there is a need to detect and delete (non-exhaustively):

  • Poorly exposed images,
  • Motion blur and focus blur (when not intentional),
  • Useless duplicates (whatever that may mean).

Many photographers may claim there is no way to detect poor images programmatically because of the non-deterministic nature of art. A histogram alone, for instance, may not be enough to detect a poorly exposed image. At the same time, it will be difficult to convince me that an image a photographer failed to take as intended is worth keeping, as soon as one believes there is a quality standard to comply with in art. It is also about being disciplined and mastering what we are doing. So it may not be acceptable to keep working on images for which we wrongly set the ISO too high, the shutter speed too slow, or the main subject out of focus.
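
To make this concrete, here is a minimal sketch of three such rules: a sharpness score, a crude exposure check and a near-duplicate fingerprint. It assumes OpenCV and NumPy are installed; the thresholds, sizes and file names are my own illustrative choices, not what Futura Photo actually implements.

```python
# Illustrative quality-gate checks (not Futura Photo's actual algorithms).
# Assumes: pip install opencv-python numpy
import cv2
import numpy as np

def sharpness_score(path):
    """Variance of the Laplacian: low values suggest focus or motion blur."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def exposure_flags(path, clip_fraction=0.05):
    """Flag images whose histogram piles up at either end of the range."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    total = hist.sum()
    return {"underexposed": hist[:8].sum() / total > clip_fraction,
            "overexposed": hist[-8:].sum() / total > clip_fraction}

def dhash(path, size=8):
    """Difference hash: near-identical images share most of these 64 bits."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    small = cv2.resize(gray, (size + 1, size))
    return (small[:, 1:] > small[:, :-1]).flatten()

def hamming(h1, h2):
    """Number of differing bits; a small value means likely duplicates."""
    return int(np.count_nonzero(h1 != h2))

if __name__ == "__main__":
    a, b = "IMG_0001.jpg", "IMG_0002.jpg"   # hypothetical file names
    print("sharpness:", sharpness_score(a))
    print("exposure:", exposure_flags(a))
    print("duplicate distance:", hamming(dhash(a), dhash(b)))
```

In practice, each photographer would tune the sharpness threshold, the clip fraction and the Hamming distance below which two images count as duplicates, which is exactly the per-photographer quality level discussed below.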

It is simple, but not easy

As a conclusion, I tend to disagree with the claim that software cannot detect poor images: it is certainly possible to detect them automatically and get rid of them. The settings will not be the same for every photographer; everyone may have to define the acceptable quality level in terms of exposure, acutance and duplication.

It may be very difficult to delete all the poor images, but fine-tuning the parameters and the algorithms so that we get rid of most of the uninteresting ones would be more than good practice. It would save time and let the photographer focus on what really matters: the good photographs, for which there are indeed no rules.

High ISO: how far should a photographer go?

Shooting at high ISO is a much-debated topic. Some believe its importance is overstated, and indeed it is far from the most important thing in photography. At the same time, we should know the limits: how dark can it be? How far can we push ISO when shutter speed is critical? Not from a purely technical perspective, but to stay consistent with our overall artistic approach. Some photographers shoot only in very low light, but that is unusual. Most of us shoot at high ISO and at lower ISO values as well. So high-ISO noise is simply a constraint we need to deal with.

The problem is knowing, for each camera we own, the ISO limit above which we should not shoot. Too much noise or too much underexposure leads to unacceptable image quality. The usual approach is purely empirical: when you believe the noise level has become unacceptable, you simply don’t shoot at that value or above.

The problem with this approach is twofold: it can be biased, since there is no tangible comparison until you use a scientific measurement of noise, and it does not take into account the fact that you may shoot with different kinds of cameras (from smartphones to drones, DSLRs, full-frame or small sensors). Regardless of the sensor, a photographer should keep the signal-to-noise ratio (a proxy for noise level) consistent between cameras. Nobody cares which camera was used for a photo shoot, but all images should be delivered with a similar, if not equal, quality level irrespective of the sensor.

I have started to measure the SNR (signal-to-noise ratio, a proxy for noise level) of a given camera at different ISO levels. The process is simple (a code sketch follows the list):

  1. Take photos of the same object or landscape at different ISO values, with the same histogram (no over- or under-exposure between images) and with different cameras. The images must be as similar as possible.
  2. Define a limit above which you believe noise is too high for your best sensor (a Nikon D750 in my case).
  3. Define the ISO limit for each camera at the same SNR value, to ensure a consistent quality level.
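
As a rough illustration of how such a measurement can be scripted, here is a minimal sketch. It assumes OpenCV is installed and that every test shot contains a flat, evenly lit area; the patch coordinates and file names are hypothetical.

```python
# Estimate SNR as mean / standard deviation on a uniform patch of each test
# shot. Patch coordinates and file names are hypothetical; the patch must
# cover a flat, evenly lit area of the test scene.
import cv2

def patch_snr(path, y0=100, y1=200, x0=100, x1=200):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(float)
    patch = gray[y0:y1, x0:x1]
    return patch.mean() / patch.std()        # higher means cleaner

for iso in (100, 400, 1600, 6400, 12800):
    print(iso, round(patch_snr(f"d750_iso{iso}.jpg"), 1))
```

Plotting these values for each camera gives curves like the ones discussed below, and the ISO limit for each body is simply the point where its SNR falls under the threshold chosen for the best sensor.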

Results on the graph below:

Based on this method, I concluded that I can shoot up to 12’800 ISO maximum with my D750 and, if possible, not above 6’400. It is not that the image is unacceptable above that; it is just to be sure that shooting at high ISO has no significant impact on image quality according to my own standard. The SNR is indeed stable up to 6’400 ISO.

But with the Nikon D7000, an older APS-C camera, it is no more than 1’600 ISO. With my APS-C mirrorless Fuji X100s, it is 3’200 ISO (due to a more recent sensor). And with my compact Panasonic LX100, no more than… 400 ISO.

This came as a surprise. I used to shoot well above 400 ISO with my compact, but on closer inspection it is not without consequences for image quality.

It also shows how some sensors are simply much better because their SNR stays stable (D750 or LX100) before dropping at very high ISO, while others decrease steadily (like the X100s or D7000). With the former, you can shoot at whatever ISO you want below a given limit, whereas with the latter you try to keep the ISO as low as possible every time.

Please contact me if you want to know more about this approach and how to shoot at high ISO without losing image quality.

RAW images are finally supported by Windows Explorer

(About the new Microsoft Raw Image Extension on Windows 10)

Until recently, the Windows Explorer strategy with regard to RAW files has been “not very consistent”, to say the least. It has been possible for a while to display thumbnails of Canon and Nikon raw files (.CR2 and .NEF) in the explorer, but most other proprietary formats could not be displayed. You had to use a third-party viewer. There are quite a few, free, and they work well. But using another tool when all you need is to quickly browse thumbnails or copy/paste files is clearly overkill.

In early January 2019, Microsoft released a new application for Windows 10 to fix this well-known issue. Basically, it adds native viewing support for RAW images: the Microsoft Raw Image Extension.

RAW and JPG, different formats, same experience?

It is now possible to view thumbnails of quite a few RAW formats just like any JPG image. If you are using Microsoft Photos (still not a very mature application but, let’s be honest, improving year after year), you can similarly get a great full-screen preview of your RAW image. Again, you can get this with a third-party viewer, but having it integrated as it is for JPG files makes our lives easier. Similarly, you have direct access to the basic EXIF metadata of a RAW file when hovering the mouse over its thumbnail, just like for any JPG.

Which file formats are supported?

This Microsoft application is based on LibRaw, a quite well-known open-source library for handling RAW files. So, in theory, it should support most RAW formats from most cameras.

I have tried with RAW files from Sony (.ARW), Nikon (.NEF), Canon (.CR2), Panasonic (.RW2) and Fuji (.RAF), and everything is fine at first glance.
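
For developers who want a quick way to check whether LibRaw opens their own files, a small script like the one below can help. To be clear about the assumptions: rawpy is a Python binding to LibRaw used here purely for illustration; it is not part of the Microsoft extension.

```python
# Quick LibRaw support check through the rawpy binding (the Microsoft
# extension itself is not scriptable, so this is only an approximation of
# what it will accept).
import glob
import rawpy

for path in glob.glob("samples/*"):          # hypothetical folder of RAW files
    try:
        with rawpy.imread(path) as raw:
            print(path, "OK,", raw.sizes.width, "x", raw.sizes.height)
    except Exception as exc:                 # rawpy raises LibRaw-specific errors
        print(path, "not supported:", exc)
```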

How to use it?

First, you need to know that this application is not yet available in the official Windows 10 release. You need to join the Windows 10 Insider Preview, then download the latest build available from the Windows 10 settings menu (search for Windows Update). Finally, you can download and install the application itself (Raw Image Extension) from the Microsoft Store.

If you don’t want to do this, you will need to wait a few weeks or months, but the application should be available in 2019 anyway. Either be patient or be bold.

For the software developers

So far, developers have had to use a dedicated library to read and convert RAW files (for example, ImageMagick). Thanks to the new application, it is possible to reuse the Windows Shell thumbnails if you need, for instance, to display a gallery of RAW images, just as for any JPG file. This is nice, as it will improve the performance of viewers displaying RAW files: there is no longer any need to convert the RAW to JPG, since Windows 10 has already done it.

Limitations

Be aware that the limitations below might change, as I am not using the final version of Windows 10 that will ship with the Raw Image Extension.


(Tests done on 25/03/2019 on W10 build 18362.1)

  1. “Extra Large Icons” in the explorer are far from extra-large on a 4K screen. They are the size they used to be… which means they are ridiculously small for 2019 monitors.
“Extra Large icons” on a 4K screen
  2. LibRaw does not yet support Canon’s latest RAW format, .CR3, so neither does the application. It probably will in the future, but as far as I can tell it is not even under development at the moment.
  3. One more thing for developers: there is no change in the extra-large thumbnail size returned by the shell. Thumbnails are still no wider than 1024 pixels – not exactly extra-large by 2019 standards.

Conclusion

From the tests done on the latest Windows 10 build, Microsoft is still not at the level one can expect for photographers’ core features, but it is improving, maybe not quickly but at least steadily, with the 2018 improvements to the Microsoft Photos application and this new Raw Image Extension to be released in 2019.

About Clipping blacks and blowing highlights: an attempt to bring together art, science and discipline

Introduction

Clipping in photography is well known: while it is sometimes done on purpose, it mostly appears as an undesirable effect, caused by poor exposure (worst case) or by reaching the limits of the sensor’s range (best case). By clipping, I mean both blowing highlights and clipping blacks. The topic has been debated countless times in forums and blogs.

To summarise, some people believe it does not really matter as long as the photo is great, while others advocate why and how to avoid it. Some rightfully point out that it is sometimes better not to fix it, while others explain in detail how to do it the right way.

This is a classic divide in photography, between those who will not consider anything but the purely artistic result and the scientists obsessed with staying consistent with physical principles. As usual, both are right and wrong at the same time. What matters in photography is the result, the emotions a photograph can carry, and whether you like it. Period. Clipping or no clipping, who cares. At the same time, it is fair to say that blowing your sensor so that it can no longer deliver any information except “I am blown” (burned white) or “I am blind” (clipped black) is not what anyone would call good practice, to say the least.

In this post, I am trying to find a way to bring all these opinions somewhat into line, in a very Swiss, consensus-seeking way.

How to detect it and how to fix it

There is also plenty of information on the topic. I would recommend reading:

[1] How to Avoid Burned-Out Highlights
[2] Stop Doing This to Your Photo’s Highlights
[3] What is Clipping in Photography and How to Fix It!
[4] Restore Those Clipped Channels
[5] 6 Ways to Reduce Blown Out Highlights in Your Outdoor Photography
[6] Highlight Clipping in Adobe Photoshop Camera Raw (and Why You Should Care)
[7] What Is Clipping and How To Fix It
[8] Blowing Highlights And Clipping Blacks: The Rule Behind Lost Details

“Physical” and “visual” clipping

Most people know the “physical” clipping well: it is when the sensor is blown. Technically, it means that the pixels of a given channel (R, G, B), or their luminance (based on the square root of R, G and B weighted according to the characteristics of the human eye), are at the maximum value (typically 255 for an 8-bit JPG) or at the minimum (0).

But it is also important to remember that what matters is the “visual” clipping: pixels that are almost blown or clipped also matter because (at least for JPG images) there is no way to really fix them properly and recover information from those regions of the image.
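
To illustrate both notions, here is a minimal sketch that counts physically clipped and “visually” clipped pixels in an 8-bit JPG and paints an overlay similar to the figures below. It assumes OpenCV is installed; the near-clipping thresholds (250 and 5) and the overlay colours are my own illustrative choices, not a standard.

```python
# Count physically clipped vs "visually" clipped pixels in an 8-bit JPG and
# paint an overlay. Thresholds (250, 5) and colours are illustrative choices.
import cv2

def clipping_report(path, near_hi=250, near_lo=5):
    img = cv2.imread(path)                            # uint8, BGR
    luma = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    blown      = luma == 255                          # physically blown
    near_blown = (luma >= near_hi) & ~blown           # visually blown
    black      = luma == 0                            # physically clipped blacks
    near_black = (luma <= near_lo) & ~black           # visually clipped blacks

    overlay = img.copy()
    overlay[blown]      = (0, 0, 255)                 # red
    overlay[near_blown] = (203, 192, 255)             # pink
    overlay[black]      = (255, 0, 0)                 # blue
    overlay[near_black] = (255, 216, 160)             # light blue

    total = luma.size
    stats = {name: round(100.0 * mask.sum() / total, 2)
             for name, mask in [("blown %", blown), ("near blown %", near_blown),
                                ("black %", black), ("near black %", near_black)]}
    return stats, overlay

stats, overlay = clipping_report("high_contrast.jpg")  # hypothetical file name
print(stats)
cv2.imwrite("clipping_overlay.jpg", overlay)
```

The same idea applies to a RAW file, except that the analysis has to run on the demosaiced, higher-bit-depth data rather than on the 8-bit JPG rendition.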


Example: a JPG image of a very high contrast scene.

Let’s have a look at the clipped pixels, highlighted in blue for the blacks and red for the highlights, in the image below. First, one could argue that using JPG in such conditions is not the best idea – RAW would have been a far better choice – but without starting yet another RAW vs. JPG debate: the image has been poorly exposed, as there are no clipped black pixels (they would be coloured blue in the image below) whereas there are quite a few blown ones (in red below). So, basically, it tells us the image should have been exposed significantly less.


Same image with blown highlights in red and clipped blacks in blue (none in this case)

But whereas the number of actually blown pixels (in red) is not that significant, the number of visually clipped pixels is at an unacceptable level. It makes the image ugly, whereas it was an interesting one. Those pixels are almost blown from a physical perspective, but to our eye they are simply blown… you can try to reduce the highlights or the exposure, but there is basically no information recorded in the sunlit mountain part of the photo. The image will stay poor. So what matters is not the truly clipped pixels but those that look clipped. Using Lightroom or another software tool is not enough, even if, again, you can’t do much to fix it when you shoot JPG. That is a good transition to the next point.

Clipping is not the same animal when you shoot RAW or JPG

I believe it makes sense to differentiate JPG from RAW images when it comes to clipping. For RAW images, with modern sensors, clipping is rare: either you really do it on purpose, or you have no idea how to use your camera’s exposure systems! The example below shows how tolerant sensors have become to clipping:

A very high-contrast image of my son in a hotel room, completely in the shade. The skyline behind him is of course much brighter. This is really an extreme case, and with a good but truly mainstream full-frame sensor (a Nikon D750 in this case), there is almost no clipping when shooting RAW.

I know it is not that simple, and you can clip some parts of a photo despite your goodwill and expertise while shooting RAW. My point, however, is that it is rarely a problem, and it is easy to identify and anticipate, as it only concerns extremely high-contrast images.

When it comes to JPG, it is a totally different story. It is easy to clip parts of an image, and, as we have seen above, it can be difficult to fix. What matters is, first, to know how to detect that the image will have some clipping. Second, you need to know whether or not it is a problem for your image. There is no universal answer to this (from my perspective, though, it will very often be a problem). One approach is of course to shoot RAW any time there is a risk of clipping, just to have more latitude in post-processing, but that is not always possible or desirable. At least you know what to do. So it seems important to understand the causes and consequences of clipping, and how RAW can fix it while bringing the usual inconveniences of shooting RAW (processing time, file size, buffer limits, …). If you don’t shoot RAW, you normally have reasons for that choice. This is a good transition to the next point: this is where good and bad clipping also matters in your decision.

The Good clipping and the bad clipping

The bad clipping is the one you should not get. Just expose your image better: underexpose when you have bright parts at risk of blowing out, or overexpose when you potentially have too many clipped black pixels.

The good clipping is simply inevitable. Below is an example:

When we analyse the image below, we can see that we have both clipped blacks and burnt highlights. In red, the burned pixels; in pink, the “visually” clipped ones. In dark green, the clipped blacks; in light green, the “visually” clipped ones.

Having both significant red and green zones simply means you are going beyond the capabilities of your sensor. Either buy a better one with a higher dynamic range… or use artificial means (flash, umbrella, filters, …) to decrease the contrast, which of course is not always possible or desirable depending on the kind of pictures you are shooting.

Conversely, if you aim at high-key (or low-key) images, the result will be clipped, fair enough, but the pre-processed images – the RAW files or the JPGs out of the camera, before you start working on them – should not be clipped. To illustrate this, a cute gallery of high-key-on-purpose images that I like:

Knot – a gallery of white and high-key images on Flickr

I like these images, but I would not bet they were clipped straight out of the camera.

Good principle: clipping is bad

Long story short, it will be difficult to convince me that clipping is not bad. If you are looking to shoot high- or low-key images, or if you want to stylise your images, that is more of a post-processing matter. If you know what you are doing, you can argue “I clip on purpose”, but most of the time clipping is just bad: your sensor no longer provides information, only a very black-and-white version of reality. What you do in post-processing is a different discussion; when you shoot and you anticipate clipping, unless you know exactly why you want it, you should do whatever it takes to limit it (by under/over-exposing or bracketing) or avoid it (same actions + RAW + stacking/HDR).

Conclusion and summary

Let’s start with another example. From my perspective, the image below is poorly exposed, over-clipped into a white-grey, ugly sky:

The city of Mopti, Mali

The light was terrible, due to haze caused by hot air. This image looks ugly to me, whereas Mopti is such a dramatic city, and although I tried to post-process it, there was no way to fix it (I was travelling, short on time, and I did not see a way to avoid clipping). The light was bad; it is what it is.

My point: clipping is (very often) bad even if you can’t avoid it. There may be some counter-examples (try to shoot an image of a polar bear in the Arctic without clipping the snow…), but they require having at least understood how to produce a pleasing image and taking counter-measures to reduce the visual impact (shooting RAW, shooting only when there are some shadows to produce darker zones, …).

Tokyo, Japan: RAW image underexposed by 1.5 EV, with no final clipping, although the original image before post-processing looked challenging, with both under-exposed (the sphere) and over-exposed parts (backlit windows).

When you always shoot the same kind of picture, you know what you are doing. You don’t really need the following conclusion, as you have no problem delivering images you are familiar with. But it is also good in life to try new things, and when you shoot new subjects, in a new way, in new places, you will have many reasons to fail at delivering great images. That is when it is good to remember some basic principles. Beyond all the discussions and remarks, I like to keep in mind something easy to remember:

Shoot whatever you like, but clipping is bad.

The more you know how to detect it, avoid it or at least manage it, the better. It is not a fight between art and science; it is about discipline.

The three pillars of photography

I have written several times that technical innovation can either be a way to foster your creativity or, most of the time, a useless distraction. I am not opposed to innovation, quite the opposite of course, but I like to believe one should always remember the basics:

1. Choice of subject

Whatever the technology and the gear, and even if you know how to post-process images well, you need to be creative and to have artistic skills if you want to create “great” images. That is not the bottom line, in my humble opinion, but I like to believe it starts here: learn to be creative, be yourself and express yourself.

2. Shooting skills

Some photographers have “the eye”; most have not. You can hardly learn that. Some just know how to compose and when to shoot.

3. Post process and technology

Yes, never be overwhelmed by them – they are nothing but tools in the service of the artist – but it seems more important than ever to know everything about photography’s technology and how to post-process images.

At the end of the day, photographers who excel at the three pillars of photography are usually admired or, at least, can produce amazing pictures. Know where you stand, and in which areas you need to improve!

Digital photography in 2013: what can come out of the end of a revolution

The digital revolution may have begun around 1999 or 2000 with the first real DSLRs from Nikon and Canon. Almost 15 years later, the evolution continues – every quarter, great cameras, software or new web services are released – but I increasingly believe this is the end of the digital revolution. And that is good news for photography, because we may be able to focus again on what really matters: the picture, not the technology.

Ubiquity

Cheap point-and-shoot cameras and smartphones are making everyone a photographer. Modern sensors and skilled engineers allow everyone to take very decent shots, even with no knowledge of photography. Digital filters and photo sharing make the pictures look even better and make them available right away to whoever matters to each of us. An anonymous shooter can become very famous thanks to Instagram, much more so than many legendary photographers. So what? That’s fine; these are just the consequences of the digital revolution. It is time to learn to live with it.

Technology

We have learned HDR, digital filters, advanced post-processing and much more over the last years. We can now buy a small camera with a 40x zoom for a fraction of the price of the whole set of lenses we used to need ten years ago. Or a mirrorless, or a tiny compact taking better pictures than DSLRs did a few years ago. We can store and share online so easily nowadays. Much more will come, of course, and we will have to adapt. But I wonder whether most of the breakthroughs might not be behind us. And that is also good news. Revolutions are exciting, but they distract us when they don’t exhaust us. A necessary evil, but still an evil.

Above is an example of how my pictures have evolved over 20 years of mountaineering! Is it better or worse? It does not matter; things have changed, and dramatically, to say the least.

No revolution lasts forever

Mirrorless cameras did not change anything in this revolution, even though they are great cameras and improved the revenues of the major vendors. I like to say they rang the bell: this is the end, we are entering a new era. Despite being a major innovation, they do not change the game that much. And I doubt that Lytro will bring anything significant either, by the way.

The same goes for Google+ and Facebook’s recent photo-sharing improvements. Photo sharing is becoming a commodity nowadays. It may be good for everyone, but it won’t change the game.

The bottom line

We are getting bored with the revolution. We can now focus again on what really matters: taking pictures. We don’t have to spend weeks testing the new stuff; we have to spend weeks focusing on creativity, photography and what we want to show to others. It’s no longer about software and hardware; it’s about life and creativity.

Many photographers have never stopped working this way, fair enough, but I like to believe they were really lost in the turmoil of this revolution. The dust is settling down, so in 2013 I want to see the rise of a new wave of great photographers – not those showing HDR on Flickr, their meal on Instagram or a selfie on Facebook, but those who have something to say.

Every revolution evaporates and leaves behind only the slime of a new bureaucracy – Franz Kafka

Digital photography needs a clear backup strategy

Some people can lose all their digital work in a few minutes; fortunately, that is still very rare. More frequently, your hard drive can crash at any time, without notice. At the end of the day, our digital assets have become so important that we cannot live without a clear backup strategy. For a photographer, nothing could be more important.

The risks

You can’t keep everything just in the cloud; that is too dangerous. Without being paranoid: services can shut down, someone can steal your password and delete your files, and, maybe more importantly, it is good to keep control of your own assets.

That said, the main risk today is still a hard drive crash. So it is very dangerous not to back up your work on at least one other medium.

A burglary could make you lose all your hard drives – unless you keep one in another location.

The constraints

Backing up is boring, sometimes even painful, always time consuming, and it can cost a significant amount. It is like going to the dentist: no one likes it, but we have to.

Defining the minimum back-up strategy

My “minimum” backup strategy is to store my digital assets on:

  1. a desktop (or laptop) hard drive,
  2. the cloud, thanks to Google Drive (or Dropbox, or SkyDrive),
  3. another hard drive.

Actually, I also use a second hard drive that is not located in my apartment and that I update only once a year. The setup is not fully redundant, as I don’t have any password on my desktop (I don’t need one), so I can imagine a scenario, unlikely of course, where I could lose everything during a burglary. Thanks to this second hard drive kept elsewhere, that looks impossible.
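
For the “another hard drive” part, even a tiny script run regularly is enough. Below is a minimal sketch; the source and destination paths are hypothetical, and it only copies new or changed files, never deleting anything on the backup drive.

```python
# Mirror the photo library to a second hard drive. Paths are hypothetical;
# only new or changed files are copied, nothing is ever deleted on the backup.
import shutil
from pathlib import Path

SOURCE = Path("C:/Photos")          # hypothetical library location
BACKUP = Path("E:/PhotosBackup")    # hypothetical second drive

def mirror(src: Path, dst: Path) -> None:
    for item in src.rglob("*"):
        target = dst / item.relative_to(src)
        if item.is_dir():
            target.mkdir(parents=True, exist_ok=True)
        elif (not target.exists()
              or item.stat().st_mtime > target.stat().st_mtime):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)   # copy2 preserves timestamps

if __name__ == "__main__":
    mirror(SOURCE, BACKUP)
```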

Pricing

For less than 100 € you can buy a 1 TB hard drive. That is a very cheap way to be sure your photography won’t disappear.

Cloud services are not that cheap. Google Drive will cost you about 100 € per year to store 200 GB, so that is way more expensive than buying a hard drive! But it is very flexible, and you can access your images from any device, anywhere on Earth. That said, it is not a very cheap way to back up your images, but then it is not only a backup service. Dropbox is about twice as expensive as Google and, for photography backup, does not bring anything extra imho. Box can be expensive if you have just a couple of hundred gigabytes, but it becomes very competitive for storage above 500 GB.

The bottom line

Keep it simple. But do it.

It has become very affordable to secure all your digital assets, so don’t miss this chance… and be sure your data are resilient to most threats.

Some thoughts about the future of photostreaming

Photo streams are now a commodity for any social service, and they have improved dramatically over the last months. Facebook’s and Flickr’s recent improvements are two obvious examples. Let’s first list some trends of 2012:

Bigger is better
That’s obvious: the experience improves with picture size, and photo streams can now display much bigger pictures.

Adjust to your screen size
Services like Flickr let you see a given picture of a stream, by clicking on it, at the maximum possible size. Cool.

Endless stream
That’s also obvious: we hate clicking on “next page please”, and, freaking consumers that we are, we all like to see as many pictures as possible in the shortest possible time.

Mosaic and no blank space please
Similarly, photo streams are now more and more often mosaics without any blank space between pictures. That sounds obvious nowadays, but it is actually quite recent. However, how – and whether – pictures must be cropped to display the stream is not so clear yet:

Square crop
The square crop has become very popular, except for some, like Flickr, who still prefer a justified stream. Unlike some, I am not a fan of the square crop, even if I do understand its usefulness, particularly for mobile devices.

Present limitations
I understand my pictures must be cropped, but right now it is far from ideal. Browsing my Google Picasa albums with badly square-cropped cover images is just depressing for a photographer. Intelligent cropping tools are now starting to become available, so hopefully this will improve soon.

Another problem with streaming is search. We mainly browse by either “time” or “tags”. This experience can be really frustrating, as pictures are now counted in billions and, for many reasons, you don’t want to see most of them. It is becoming a real challenge to find the kind of pictures you are looking for. The trend – more and more pictures taken every year – will make the search experience more and more frustrating without some dramatic innovations.

That’s why I am sure, or at least I hope, that many new intelligent tools will soon be available. There are many ways to find what you are looking for, and that is how photo streaming may evolve: you can already give a picture an aesthetic score. That is very arguable, though. But I would far rather miss some good pictures because the tool miscalculated their aesthetic value, and browse only “nice” pictures, than browse tons of pictures that are of no interest to me. Interestingness is really something photo streams have handled only in a very basic way so far. Other tools now provide context to pictures, not only to take over the tagging task but to attach some semantics to a picture and let people browse it the way they want.

On top of this, there are now many ways to push your content to different channels, and one person can post a picture to many streams. So services like Pictarine look to me like a great way to display photo streams by person. This dimension matters too. Search and display by people is really something still in its infancy for photo streaming. But let’s go a little further: I would love to be able to search for a certain kind of photographer. Right now, finding new photographers you like is a challenge, as we all have very different tastes. And photography has become so popular that there is no easy way to discover new skilled photographers lost among armies of photographers you couldn’t care less about. Anyway, that is something I will discuss in another post.

Conclusion: how to handle an exponential volume of pictures?
So, to make a long story short, the main challenge of photo streaming in the future will be to deal with “volume”. I believe several businesses will be extremely successful if they are able to improve users’ photo-streaming experience. Welcome to a digital world of billions and billions of pictures… and counting.

I don’t mind the performance, the controls suck

New DSLRs, new mirrorless, new high-end compacts, new point-and-shoots, new smartphones: every week starts with some good news for photographers. Sensors’ capabilities are now outstanding in low light, in high-contrast landscapes, and in colour depth for portraits. Other camera performances also keep improving: autofocus, burst speed, and much more.

This does not matter so much to me

Camera manufacturers are following the herd; that’s a marketing law. We, users, are supposed to be mostly early adopters and geeks. But we are not that, or not that much; we may even be misled by this characterisation. We are just photographers. Performance is really impressive right now, and I will always want better, but that’s not the point. Manufacturers have forgotten the basics.

Three dials or nothing

It looks so trivial to me that I don’t even know why I am writing this: photography is first and foremost about aperture (f/), shutter speed and ISO. That’s it. Shoot RAW if you don’t want to bother with anything else, or shoot JPG and also take care of WB (white balance), DR (dynamic range) and so on. But photography is mostly about these three parameters. Why can’t we change them easily? Why these damned menus? (I know the answer…). When you use programs, Av or Tv (aperture or shutter priority), you still need the third dial for exposure compensation. When you have shot a few times – let’s say a few tens of thousands of frames, more or less, sometimes much less – you know the bias of your metering, and you still need the ISO choice and the variable parameter. Three dials or nothing. Period. How many cameras comply with these basics? Not so many.

Much more complex?

But it is not that simple. There is also the AF mode, the WB, and much more. You will hardly find two photographers shooting the same way. However, most cameras are still mostly products, not what I call photo platforms with heavy customisation capabilities. That is a real pain, because we are not all the same and we need to customise our controls. We need customised labels on buttons to remember what they are used for, we need many more “custom modes” (U1, U2, C or whatever the name), we need to get control. Some manufacturers, like Fuji, are bringing back the past in a rather sensible way: what was done before was not that stupid, even if it is already obsolete. It was indeed stupid to remove the aperture control from the lens. Some say it was a way to build cheaper lenses, but photography is not a cheap hobby, so that is the wrong answer. In any case, I am very rarely impressed by the control efficiency of new cameras.

Few innovations at the end of the day, or so many that the basics are forgotten?

There is obviously some consensus: new high-end cameras are acclaimed, mostly because of old-fashioned controls and incredible performance. But that is not what matters most. I want a bigger viewfinder, I want my three dials back, I want great lenses, I want my customised controls. Nothing else really matters – at least nothing else would be missed.

Which camera manufacturer will stop listening to the usual suspects and focus on what photographers really need? This post might sound arrogant, fair enough: give me my three dials back (like on the Sony NEX-7, but for a DSLR please), give me my great lenses back (not like the NEX-7!). Don’t forget my AF controls, my custom modes, my customised buttons – not just one or two, but all of them somewhere. Don’t charge me 40% more for a bigger sensor I don’t need or for a new lens line that brings nothing but a higher price, and I will again accept that I am just an arrogant blogger.