I apologize for any confusion our testing notes have caused. We asked our testing partner to install each entry and then spend a few minutes working with it in an unstructured way. Our intent is to ensure the software is minimally functional before asking our judges to spend any time on it.
We received testing notes with more information and feedback than we expected. At that point we felt it would be unfair not to share that feedback with each team, and to let each team decide whether to respond to it or ignore it.
None of the feedback from your initial testing constitutes new requirements for your software. If the notes indicate that your software is operating as you intended, there's nothing you need to do. Since we cannot know how your software is intended to operate, we can't tell whether, for example, launching (or not launching) automatically is intentional.
I think Annie's correspondence should have made it clear that none of this testing information will be shared with the judges. It's not judging - it's testing. You should use that testing feedback (minimal as it is) to confirm that the results our testing partner is seeing are what you expect. That's all - there is no obligation to respond to any of it.
If your software is at all usable and meets the submittal criteria (for example, it is submitted in both English and Swahili), then it will be sent to the judges for evaluation. Crashes, screen corruption, audio issues, and the like will not be grounds for elimination unless they make it completely impossible to use your entry for more than a few minutes.