
How one app got three different responses from Apple

Imagine you have a simple, straightforward, yes/no question, and you decide to ask a friend of yours. The first time you ask, your friend says no, but maybe yes later. Perplexed, you ask again, and your friend says yes, definitely. You ask a third time, and they tell you that you’re not asking a question at all, and are in fact making a statement, and thus they won’t answer you at all.

Well, that’s almost exactly what happened to me when I submitted a set of nearly identical apps to the Apple App Store.

I’ve recently been working on diversifying the markets in which I personally publish apps. In an effort to do so in an agile manner, I’ve been releasing dead-simple MVPs in various markets, seeing how consumers respond, and adding features once demand becomes clear.

On Android, this has been painless. My release cycles have been short, market responses clear, and dev planning easy.

I have, until now, avoided taking this approach on iOS. Why? Apple’s app review process is well known to be cumbersome, arbitrary, and slow. Why even take an agile, data-driven approach to app development when you can’t respond quickly to market changes? When features have a two-and-a-half-week turnaround due to the anal-retentive opinions of a former CEO six years departed (praise be unto his name), why bother doing things the right way?

Recently, though, I decided to give it a try, and conduct an experiment in the process. I decided to port four nearly identical apps that I had built for Android to iOS, and submit them simultaneously for review. They were very simple Bible apps (each one a separate translation) that shared almost 100% of their code: the only things that differed were the colors and the database containing the Bible verses.
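
For the curious, here’s a minimal sketch of what “shared code, different colors and database” looks like in practice. The names below (AppFlavor, accentColor, databaseFileName) are made up for illustration, not taken from the real projects, but each build target really only supplies values like these:

```swift
import SwiftUI

// Hypothetical per-app configuration: every target shares the same code
// and just plugs in its own flavor.
struct AppFlavor {
    let accentColor: Color        // the only visual difference between the apps
    let databaseFileName: String  // the bundled verse database for this translation
}

// One flavor per app target; everything else is identical.
let redApp  = AppFlavor(accentColor: .red,  databaseFileName: "translation_a.sqlite")
let blueApp = AppFlavor(accentColor: .blue, databaseFileName: "translation_b.sqlite")
```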

The results could not have been more emblematic of how the App Store operates: one was accepted, two had their metadata rejected, and one supposedly wasn’t an app at all. Let’s go through these one by one.

Only one interesting thing can be said about the app that was accepted: it was red. That is to say, the accent color of its icon and screenshots was red, while the other apps used shades of blue. Read just about any psych book and it’ll tell you that people respond strongly to red. Could that be all there was to it? Did the reviewer who looked at my red app just instinctively like it better because of the color?

The metadata rejections were a bit strange. The reviewers claimed that the support link “does not properly navigate to the intended destination.” It’s the same link used in the app that was accepted, so normally I’d give them the benefit of the doubt and say maybe there was a momentary glitch and the site didn’t load. Two things make me doubt this: a) the link is on Blogspot, which is owned by Google and thus unlikely to go down even momentarily, and b) it wasn’t just one app that got its metadata rejected, it was two, which suggests the rejections weren’t a one-off fluke. Given that every other app I’ve published uses the same link, and two of them were approved with it in the last week, I can only conclude that this was simply the arbitrary nature of the review process.

The final reviewer claimed that the app wasn’t really an app at all. Instead, they said, it was a book, and should be submitted to iBooks. This complaint is the closest to what I was worried about with Apple: ironically, the company that prides itself on simplicity really hates simple, single-function apps. When I last tried this start-simple approach with Apple, on a series of transit map apps that had had runaway success on Android, the reviewer whined that the app was too simple. So even though adding features actually decreased user satisfaction (I have several examples on Android where downloads dropped after I added features), I added them anyway, and the app was approved. In this case, because I’ve already validated the idea on Android, I already have a roadmap of additional features. So I’ll just add them and resubmit the (needlessly) improved version after my next sprint.

What’s most disappointing, I think, is that the process is so inconsistent. Knowing that a perfectly good app will be approved by one reviewer but rejected by another makes preparing to publish incredibly frustrating. If expectations were clear, I’d be happy to conform to them. But with unclear expectations, everyone gets let down.
