MemoryMiner and iPhoto 09

I promised a post on this topic a while ago, so here goes.

Barely a few minutes into the iPhoto 09 section of the Macworld keynote, people started contacting me to see if I was mad/upset/etc.

Truth is, I was kind of wondering what took Apple so long. The quote “good artists copy, great artists steal” definitely comes to mind: more on that later.

When I went to Macworld the next day, the first thing I did was go to the Apple booth and start poking about (making the poor guy at the stand a bit nervous). I was curious to see how well the face recognition worked, and of course what they did with it. The short answers seemed to be “quite well” and “surprisingly little.” After having worked with my own copy of iPhoto 09 since it was released, my initial assessment holds true.

Let’s look at face recognition. Put simply, like speech to text, it’s hard to do well, and Apple’s implementation is the best I’ve seen: not just because of the initial accuracy, but because of the ease with which it can be trained. What’s surprising is how cumbersome the process of creating manual selection markers is. It’s also odd that you can’t resize the markers that are automatically created.

Now, let’s look at what’s done with the data: when I say “surprisingly little,” it’s because the person info becomes just another tag. Great, so I can create a smart album for a given person. The slideshows ignore the markers, as does the Flickr export. Finally, the data about faces (along with the titles, captions, and keywords) lives only in the iPhoto database. If you take a photo from iPhoto and send it to someone else, all that work is lost. I’m sure one day Apple will become a little more hip to the concept of embedding metadata within digital files.
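To make the embedding point concrete: the open XMP standard already gives photo software a way to carry person and keyword data alongside (or inside) an image file, so the information survives when a photo leaves the library. Here's a minimal, illustrative sketch in Python that builds an XMP packet listing people as `dc:subject` keywords using only the standard library. The function name and the sidecar convention are my own illustration, not anything iPhoto actually does; a full implementation would embed the packet in the JPEG itself.

```python
import xml.etree.ElementTree as ET

# Namespace URIs from the XMP and Dublin Core specifications.
NS = {
    "x": "adobe:ns:meta/",
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "dc": "http://purl.org/dc/elements/1.1/",
}
for prefix, uri in NS.items():
    ET.register_namespace(prefix, uri)

def build_xmp_sidecar(people):
    """Return an XMP packet (as a string) listing people as dc:subject keywords.

    Illustrative helper -- not an iPhoto API. The packet could be saved as
    photo.xmp next to photo.jpg, or embedded in the JPEG's APP1 segment.
    """
    xmpmeta = ET.Element(f"{{{NS['x']}}}xmpmeta")
    rdf = ET.SubElement(xmpmeta, f"{{{NS['rdf']}}}RDF")
    desc = ET.SubElement(rdf, f"{{{NS['rdf']}}}Description")
    subject = ET.SubElement(desc, f"{{{NS['dc']}}}subject")
    bag = ET.SubElement(subject, f"{{{NS['rdf']}}}Bag")
    for name in people:
        li = ET.SubElement(bag, f"{{{NS['rdf']}}}li")
        li.text = name
    return ET.tostring(xmpmeta, encoding="unicode")

xmp = build_xmp_sidecar(["Ada Lovelace", "Grace Hopper"])
```

Any tool that reads XMP (Lightroom, ExifTool, and plenty of others) would pick up those keywords, which is exactly the portability the iPhoto database denies you.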

For all these things, I should say “thanks Apple for leaving something for third parties” (at least until the next update of iPhoto). When I first created MemoryMiner, I didn’t really want to have to create a bunch of core photo/media management functionality (even though I’d done that for many many years in my prior life). My core interest is in using photos to trace the threads that connect people across place and time. For me, photos are but frames in a storyboard that capture moments in peoples’ lives. I’m interested in the stories that they tell, as well as the questions they raise.

Many people have asked why MemoryMiner’s functionality couldn’t be created as an iPhoto plugin. The short answer is that there’s no API that would allow me to do so (which is also the case, by the way, with Aperture, Lightroom, Picasa, and every other photo/media manager I’m aware of). You can create export plugins, and you can read iPhoto data, but that’s it. I would love for Apple to give third parties a way to create editing tools, along with a safe, programmatic way of updating the iPhoto data store. Given that iPhoto is practically a part of OS X, there’s an argument to be made for such an API, but I’m not holding my breath.
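The read-only access mentioned above works because iPhoto of this era keeps its catalog in AlbumData.xml, an XML property list inside the library folder. Here's a small sketch of reading it with Python's `plistlib`; the sample data below mimics the top-level "Master Image List" structure, but real libraries carry many more keys and the exact key names can vary between iPhoto versions, so treat this as an assumption-laden illustration.

```python
import plistlib

# Stand-in for an AlbumData.xml file: a tiny plist shaped like iPhoto's
# "Master Image List" catalog. Paths and captions here are made up.
sample = plistlib.dumps({
    "Master Image List": {
        "1": {
            "Caption": "Beach day",
            "Keywords": ["family"],
            "ImagePath": "/Pictures/iPhoto Library/Originals/IMG_0001.JPG",
        },
    }
})

def list_captions(album_data_bytes):
    """Return (caption, path) pairs from an AlbumData.xml-style plist.

    Read-only by design: there is no supported way to write changes back.
    """
    data = plistlib.loads(album_data_bytes)
    images = data.get("Master Image List", {})
    return [(info.get("Caption"), info.get("ImagePath"))
            for info in images.values()]

pairs = list_captions(sample)
```

Reading the catalog this way is exactly the one-way street described above: you can pull titles, keywords, and paths out, but pushing face markers or edits back in would mean rewriting Apple's private data store, which no sane third party should do.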

iPhoto is optimized for ease of use, and as with other Apple software, is marked as much by what it leaves out as what it leaves in. MemoryMiner, just like tons of other software that uses the most excellent iMedia Browser framework, can use iPhoto-managed photos as a starting point. The fact that iPhoto-managed photos may now have Faces and Places data available is great, in that it provides an even better starting point for creating a MemoryMiner library.

I can’t think of any third-party software that doesn’t take its cues from Apple’s work, and the opposite is absolutely true as well. This is a known risk in writing for any platform where the makers of the platform also provide a set of core software. The trick for MemoryMiner, along with all other third-party developers, is to keep innovating.

This is good for everyone, so with that, I’ll get back to MemoryMiner 2.0, which is coming along beautifully.
