If All Else Fails, Try Brute Force…

March 26th, 2009

A while ago, I posted about how happy I was with the new People Picker for MemoryMiner 2.0: I spent a lot of time refining it, since it's at the very core of using the application. About a month ago, I took the pan & zoom engine that I had rewritten using Core Animation and married it to the annotation view. This is great because you get immediate feedback as to the best place for creating a selection marker.

Things were great… Until.

Turns out, mixing layer-hosting views (i.e. the pan & zoom) with layer-backed views (i.e. the People Picker view) can lead to some nasty side effects if the layer-backed view happens to contain any NSScrollViews. That, of course, is the case with the People Picker, since it contains a table view with potentially a lot of items. To show what I mean, have a look at the screen movie below, and notice how the table items fade in and out as the scrolling takes place:

Horrid, no?

By comparison, here’s another screen movie showing the original behavior, before I was mixing layer hosting and layer backed views:

The fundamental problem is that you have no control over the animation that is automatically created to fade in the sections of the scroll view. Bummer.

Over the course of a few weeks, I spent more time than I care to think about asking the smartest people I know how to get around this problem. I came up empty. Finally, a request to Apple DTS returned the suggestion of rewriting the table view using CALayers: thanks, but no thanks!

In the end, I punted and let the Window Server deal with the problem. That's right: I created a transparent window to host the view with the scrolling NSTableView, and added it as a child window of my application window. It took some fiddling to get the positioning right, and to deal with a number of other issues related to mouse tracking, but it all works now.

A real pain in the ass, but dare I say worth it. Sometimes a little brute force does the trick.

On Entrepreneur People Real Stories Podcast

March 17th, 2009

By now, I’ve done quite a number of podcasts talking about MemoryMiner. Most of them have been with people I’ve never met in person. Here’s one with someone I hope to meet some day: Sherry Borzo, who hosts a show called Entrepreneur People Real Stories. The show delves into the back stories of entrepreneurs as they go about building their businesses.

Sadly, the audio quality is not great, but it was a fine chat that I very much enjoyed having. Give a listen here:


How soon until we get “Celebrity” VoiceOver for iPod Shuffle?

March 11th, 2009

I just watched the intro movie for the new iPod Shuffle with its VoiceOver feature, and it made me curious. Depending on what OS is running on the computer to which it’s synced, you get a different quality of artificial voice. On OS X Leopard you get “Alex” which, while quite good, still sounds pretty awful to me. On OS X Tiger or Windows, you get something of even lower quality. Maybe it’s a generational thing, but I can’t stand listening to the current generation of text-to-speech (though I certainly understand how incredibly useful it is).

At any rate, this must certainly mean that the text to speech processing is taking place in iTunes and the resulting audio files are being synced to the iPod.

So, I wonder how long it will take before the artists themselves will be able to record their own “announcements” of a song’s title and have them embedded in the audio file itself (using the ID3 standard) for use with the VoiceOver system.

Of course this wouldn’t handle the voicing of playlists, but at least it would add a fun new twist to the use of VoiceOver.

MemoryMiner and iPhoto 09

February 28th, 2009

I promised a post on this topic a while ago, so here goes.

Hardly a few minutes into the iPhoto 09 section of the Macworld keynote, people started contacting me to see if I was mad/upset/etc.

Truth is, I was kind of wondering what took Apple so long. The quote “good artists copy, great artists steal” definitely comes to mind: more on that later.

When I went to Macworld the next day, the first thing I did was go to the Apple booth and start poking about (making the poor guy at the stand a bit nervous). I was curious to see how well the face recognition worked, and of course what they did with it. The short answers seemed to be “quite well” and “surprisingly little.” After having worked with my own copy of iPhoto 09 since it was released, my initial assessment holds true.

Let’s look at face recognition. Put simply, like speech to text, it’s hard to do well, and Apple’s implementation is the best I’ve seen: not just because of the initial accuracy, but because of the ease with which it can be trained. What’s surprising is how cumbersome the process of creating manual selection markers is. It’s also odd that you can’t resize the markers that are automatically created.

Now, let’s look at what’s done with the data: when I say “surprisingly little,” it’s because the person info becomes just another tag. Great, so I can create a smart album for a given person. The slideshows ignore the markers, as does the Flickr export. Finally, the data about faces (along with the titles, captions, and keywords) lives only in the iPhoto database. If you take a photo from iPhoto and send it to someone else, all that work is lost. I’m sure one day Apple will become a little more hip to the concept of embedding metadata within digital files.

For all these things, I should say “thanks Apple for leaving something for third parties” (at least until the next update of iPhoto). When I first created MemoryMiner, I didn’t really want to have to create a bunch of core photo/media management functionality (even though I’d done that for many many years in my prior life). My core interest is in using photos to trace the threads that connect people across place and time. For me, photos are but frames in a storyboard that capture moments in peoples’ lives. I’m interested in the stories that they tell, as well as the questions they raise.

Many people have asked why MemoryMiner’s functionality couldn’t be created as an iPhoto plugin. The short answer is that there’s no API that would allow me to do so (which is also the case, by the way, with Aperture, Lightroom, Picasa, and every other photo/media manager I’m aware of). You can create export plugins, and you can read iPhoto data, but that’s it. I would love for Apple to give third parties a way to create editing tools, along with a safe, programmatic way of updating the iPhoto data store. Given that iPhoto is practically a part of OS X, there’s an argument to be made for such an API, but I’m not holding my breath.

iPhoto is optimized for ease of use, and as with other Apple software, is marked as much by what it leaves out as by what it leaves in. MemoryMiner, just like tons of other software that uses the most excellent iMedia Browser framework, can use iPhoto-managed photos as a starting point. The fact that iPhoto-managed photos may now have Faces and Places data available is great, in that it provides an even better starting point for creating a MemoryMiner library.

I can’t think of any third-party software that doesn’t take its cues from Apple’s work, and the opposite is absolutely true as well. This is a known risk in writing for any platform whose makers also provide a set of core software. The trick for MemoryMiner, along with all other third-party developers, is to keep innovating.

This is good for everyone, so with that, I’ll get back to MemoryMiner 2.0, which is coming along beautifully.

Warning: Geeky Post

February 16th, 2009

I spent a few hours this morning tracking down a problem that was so annoying and stupid that I wanted to post something on the off chance it will be helpful to someone in the future…

In MemoryMiner, when a new Photo object is created, its “creationDateString” ivar is set to “????-??-??”. When the details of the file being cataloged become known, the value is typically reset using a precise date/time value read from the photo file itself (e.g. from the EXIF data put there by a digital camera, or from IPTC data put there by a media manager application). In the case where no date is known (e.g. with a scanned photo), the user can supply whatever components of the date they know.

The code for dealing with creation dates has remained unchanged for a long time. This past Sunday, I released a new test build of MemoryMiner 2.0, and much to my horror, this morning, while doing some testing with scanned photos, the creationDateString ended up in the database as “??~~??” which is decidedly not a good thing. WTF?!!

Turns out, the problem was due to a change I’d recently made in one of my Xcode projects, namely to use the C99 “dialect”. I’d made this change after integrating an open-source class (Stig Brautaset’s most excellent Obj-C JSON parser, to be precise) that uses C99-style “for” loops.

As a consequence of using C99, my default creation date string was being preprocessed for trigraphs, which means each “??-” was being converted to a tilde character: “~”. The solution was to switch to using the GNU99 C dialect.

Now, I’ve been writing Obj-C code since the late ’90s (having made the jump from my first foray into writing software using WebScript, an interpreted version of Obj-C used with WebObjects), and until today, I’ve never known, much less cared, what a C trigraph is. It makes me recall a comment in this blog post on Theocacao from two years ago:

“From an implementation (Apple) perspective, Objective-C rests on top of raw C, but from the developer-user’s perspective, they’re worlds apart. You can write plenty of Cocoa code without knowing what asprintf or wchar_t is.”

You can say that again!

Many many thanks to Rainer Brockerhoff for helping me get to the bottom of this.

Greatest thing since sliced bread

February 13th, 2009

This past Wednesday, I was at the Magnes Museum in Berkeley shooting a video about MemoryMiner and the Memory Lab installation there.

The production company, eMotion Studios, is made up of friends and colleagues of mine who specialize in corporate and institutional storytelling. Here’s a link to a video they did about the story behind sliced bread:


Hope the MemoryMiner video turns out as well.

New People Picker

February 8th, 2009

I have another 11 days without wife and child (during which time I can stay up working all hours of the night). This past week, I completely re-did the People Picker for version 2.0, and it rocks. The People Picker in 1.86 is a modal panel with two search fields: one for existing MemoryMiner people, another for entries in your system Address Book. The new People Picker has just one search field that searches both sources, and presents the aggregate results in a single table (along with their birth years, which helps distinguish people with the same name).

The search algorithm is also pretty clever: it treats the search text as “words” (using blank space as a separator). A single word is used to search against people’s first and last names. If you type two “words” (e.g. “john g”), it uses the first word to search against people’s first names and the second word to search against people’s last names.

The first search result is automatically selected, so hitting return chooses that person. If more than one person matches the search text, you can use the page up and page down keys to select other items in the list.

If no matches are found, hitting the return key brings up the New Person form, copying the text from the search field into the name field. Hit the tab key to enter the new person’s birth date (if you know it), or hit return again, and the new person is created.

In addition to making the annotation process much faster (once you’ve created the marker, you never have to take your hands off the keyboard), the general approach of “one search field to rule them all” is very powerful. For example, many people have asked that we add Facebook friends as a person source. Doing so would be much easier now.

I’m nearly done doing the same thing for a new “Place Picker”, so gotta get back to work on that…

Cranking Along

January 31st, 2009

I realize that it’s been a month since I posted. I suck, I know. My wife and son have been in Italy for the last 10 days, which has allowed me to really crank on getting MM 2.0 ready for public beta. Every time I think it’s going to be ready in another week or so, I realize that there’s still plenty of polish that’s needed.

This past week, I got full-screen annotation working. It makes a world of difference, but there are many devilish details. The great thing about working full screen is that the photo becomes the hero, and all the “clutter” can be hidden and brought on screen as needed. The trick, of course, is making sure that everything works smoothly as the views and windows are brought on- and off-screen.

Then, of course, there’s the whole issue of iPhoto 09 and its new Faces/Places functionality. That’s a subject for another blog post, one that won’t take another month for me to get to.


Big Love for MM 2 Early Testers

December 30th, 2008

As I’ve written in a prior post, an early version of MemoryMiner 2.0 has been installed at the Magnes Museum Memory Lab for a number of months now. In the last few weeks, testing has been opened up to a small group of early adopters. There’s nothing like getting high-quality feedback from avid testers. This is particularly true for those who are coming to MemoryMiner for the first time.

For example, Marek from the Czech Republic pointed out a problem with the Person Icon editor; this was easy to overlook, since it’s something that I don’t even “see” anymore. Similarly, Christian from Germany made an excellent suggestion for adding a secondary sort of Photos by their title. This came in an email in which he said that he appreciated that MM imported 27K photos from his Aperture library without a hitch.

Slowly but surely, we’ll get MemoryMiner 2.0 to a feature-complete stage so that we can open up testing to a larger pool.

Null, Texas?

December 19th, 2008

While testing a mapping feature in an iPhone application I was working on (namely, using the built-in Maps application to show directions between two points), I discovered that – according to Google Maps, at least – there are not one but two places in Texas called “Null”.


How did I discover this little factoid? My colleague Blain taught me that the iPhone’s Maps application intercepts URLs whose host name is maps.google.com. When developing this functionality, there were cases where there was a nil value for the user’s current location, and so the starting point in the URL I generated had the string “null” in it.

Testing is not only good for your software, it’s good for your personal trivia cache.