@curious, thanks for the comments. We're still very open to changes to the guidelines, but the current guidelines were developed by a team that did a lot of research. There are definitely thoughtful reasons behind the current document.
Calculating hours worked at Federal minimum wage to pay list price for a smartphone isn't really a useful exercise. Most states have minimum wages higher than the Federal minimum, and it's incredibly easy to buy a used phone for a lot less than list price. My daughter just sold a 32GB iPhone 5s in very good condition for $335 ($599 list).
Obviously there are Android phones available cheaper than iPhones. I just bought a really impressive BLU dual-SIM phone for use in Africa for $99 at Best Buy in the US. But it's a mistake to speculate on the distribution of devices among low-literacy populations based on cost alone. We have been trying to get actual data and will continue to do so. But there's no linear relationship from literacy -> income -> device. Mobile phones are an important status symbol and reflection of personal identity. People may spend relatively more money on a phone than their income might suggest, or they might buy a used iPhone instead of a new Android phone.
We will provide whatever data we can collect about device ownership to registered teams. But we know that it is a mixed population, including Windows phones and even BlackBerrys. Android and iOS devices together make up the substantial majority (we estimate over 90%), and in this demographic, as in the US as a whole, there are more Android users than iOS users. Since the US overall is split about 50% Android / 40% iOS, we believe iOS is still used by a significant fraction of our target demographic.
But the exact details don't really matter. What we're saying in the guidelines is that both Android and iOS have non-trivial numbers of users, and all other operating systems have trivial numbers of users. The plan is to select participants who have either iOS or Android smartphones, without imposing any further device selection criteria.
You might find the excellent interactive map at https://www.mapbox.com/labs/twitter-gnip/brands interesting. It's a map of smartphone device usage in the US based on tweets.
An earlier version of these guidelines required submissions to be web apps, usable on any smartphone. We changed that because we were concerned that such a requirement unnecessarily constrained the tools available to developers. It is, however, entirely OK to submit an OS-independent web app for this competition rather than two separate native apps (and, of course, hybrid apps are fine, too). The selection of two operating systems is an effort to bridge the gap by providing a range that is neither extremely narrow (one specific device) nor too broad (all phones).
I agree that this is emphasized too much in the current guidelines. It's mainly an artifact of earlier drafts, which included an intermediate judging step: teams were first judged on design documentation, and some were eliminated before any software was submitted. That made the judging job easier, but the development timeframe is too short for such a process. Thanks for the feedback - we'll be updating this and using better language to describe what we expect.
Open Source / Cost
This prize was not designed as an open source prize, and pricing of solutions is not a factor in this competition: teams are not required to sell their entries, and they probably won't make pricing decisions until it's over. This is a target demographic that can afford to pay for software - perhaps not very much, but they've already paid for a smartphone and a data plan.
The fact that iPhone users spend more time using their phones than Android users isn't necessarily relevant here. And the statistic you quote measures time spent browsing the web on their phones, not other app usage, so it's even less pertinent. We don't have any data one way or the other, and we don't know of any reason why volunteers participating in a learn-to-read program would use their literacy apps more or less based on the device they're using. But as @jonobacon says, that's a problem for the app to solve. The children in the Global Learning XPRIZE field test will generally have a lot of discretionary time to spend with their tablets. The adults in the Adult Literacy XPRIZE field test may be working two jobs with three kids at home, squeezing literacy lessons in on the train ride home. We expect volunteer participants to be motivated and engaged, but it's the app's job to keep them interested and active.
As with the Global Learning XPRIZE, the Adult Literacy XPRIZE field test was designed in concert with experimental design experts to help produce reliable results. We will assign apps to participants based on demographics so that each app gets a comparable population (stratified randomization). If one set of participants then uses your app more than another group uses theirs, we believe we will have enough data to show that the difference is likely due to the apps, not the populations.
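For anyone unfamiliar with the term, here's a minimal sketch of what stratified randomization looks like in practice. This is purely illustrative (the function, the demographic keys, and the app names are all made up for the example, not our actual assignment procedure): participants are grouped by demographic stratum, then dealt out to apps within each stratum so every app gets a comparable mix.

```python
import random
from collections import defaultdict

def stratified_assign(participants, apps, seed=0):
    """Illustrative stratified randomization.

    `participants` is a list of (participant_id, stratum) pairs, where the
    stratum is any hashable demographic key (e.g. an age band).  Returns a
    dict mapping participant_id -> app, balanced within each stratum.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible

    # Group participants by demographic stratum.
    strata = defaultdict(list)
    for pid, stratum in participants:
        strata[stratum].append(pid)

    # Within each stratum, shuffle (the randomization) and deal out
    # round-robin (the stratification).  Rotating the starting app between
    # strata keeps leftover participants from always favoring the same app.
    assignment = {}
    offset = 0
    for stratum in sorted(strata):
        members = strata[stratum]
        rng.shuffle(members)
        for i, pid in enumerate(members):
            assignment[pid] = apps[(i + offset) % len(apps)]
        offset = (offset + len(members)) % len(apps)
    return assignment
```

The point of the within-stratum shuffle is that which individual lands on which app is still random; only the demographic proportions per app are controlled, which is what lets later usage differences be attributed to the apps rather than the populations.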
That's probably too much for now - thanks!