User experience of the FUNetix® reading app


To understand the effectiveness and user experience of a new reading app that graphically represents phonetic sounds to teach English.


Solo UX Researcher


9 months


The American Youth Literacy Foundation is a 501(c)(3) non-profit in the USA that has been dedicated to helping children learn to read for over 15 years. Their patented curriculum graphically represents the phonetic sounds of English as a new alphabet system called “The Phonibet”. This phonetically based alphabet system is then used to teach English words and sentences. The complete system, from sounds to words and sentences, is referred to as FUNetix®.


For the past several years, the foundation has successfully taught this curriculum in a non-digital format. The challenge was to bring FUNetix® into the digital space as a mobile app designed to independently teach children to read.


Digital transformation of FUNetix® is important because, while the current curriculum can reach only so many children through one-on-one tutoring sessions, the new mobile app is intended to reach thousands of children across the nation.


I was one of three Lead UX Researchers on this project. My job was to design and carry out a research protocol that addressed the following research questions, which were created in collaboration with the foundation’s program director.


The main scope of the project was to test the entire 12-hour FUNetix® reading curriculum in its digitized format as a mobile reading app. It was important to test all 12 hours of material over a period of time because each reading screen/module was unique.


Another constraint we faced later in the project was COVID-19, which forced us to scrap our original plan and create a new, fully remote one.

Some of the constraints we faced from the onset of the project included:



FUNetix® was designed to bring children with known reading difficulties to an approximately 2nd-grade reading level. The ideal end users of this mobile app were children in 1st–3rd grade who were just learning their ABCs, some sight words, and perhaps some primers.

Parents/caregivers of these children were also important end users of this app, because parental acceptance of and buy-in to this new reading system would ultimately determine whether the app reached its final destination: the hands of children.



  1. Are participants with known reading difficulties able to get to a 2nd-grade reading level, using the mobile app reading module screens?

  2. Where are the opportunities for improvement on the reading curriculum or the reading module screens?

As a UX Research Lead, my first step was to learn from the stakeholder (the foundation’s program director) which outcomes were most important to him.


The director’s primary goals were to capture video footage of participants using the mobile app and to draw insights from their experiences that could be used to improve curriculum content or create additional resources for the app.


Budget played a big role in our study design decisions. App testing was intended to take place in Philadelphia. Since the foundation had no interest in a remote unmoderated study, we directed our limited budget toward renting a small test center.

We also decided to divert our leftover funds to cover travel expenses and small participant incentives such as toys or stickers. The assumption was that learning to read would be enough to keep participants committed to the process.


To gain deep insights into the reading situations of participants, the UX Leads decided to conduct on-camera interviews with the parents of the participants at natural time points along the reading journey (i.e. before starting, after the first 8 modules, at the end of the program).


Finally, the UX Research team needed to address another big constraint: not all of the module app screens had been built by the engineering team when testing began. There was a big push from the foundation to begin testing as soon as possible, so we needed a way to work around this problem.


As a workaround, the UX Research team had participants test whichever reading modules were already built. For the modules that were not yet built, we trained tutors to mimic what the app screens would narrate and to make notes on a form. This data would be used to build additional features into the app.


  1. Parents and children would arrive routinely at the center for 15–30 minute sessions.

  2. Tutors interviewed parents (the tutors were trained to conduct interviews by the UX research team, since the research team was remote and unable to conduct the interviews themselves).

  3. Child interacted with built reading modules.

  4. Tutors would mimic the narration of modules not yet built.

  5. The UX research team would draw insights from interview data and tutors’ notes.

  6. The UX research team would analyze this data remotely using Miro.


We were all set to begin testing in March of 2020, when COVID-19 hit and our entire process had to be reworked to be completely remote. New challenges and constraints were introduced.


With the onset of COVID-19, the foundation was more open to exploring remote unmoderated testing options, and a small amount of the budget was now available to fund any research tools we might need. The Engineering and Design teams also had time to design and build out all remaining modules.


The UX Research Leads completed a Competitive Analysis to determine which existing research tool would work best for our needs.

We chose Loop11 to host our reading app screens/modules because it was the only tool that fit our budget and because it enabled us to conduct either moderated or unmoderated studies.


The team chose an unmoderated solution because a moderated one would have required too much time commitment from volunteer researchers to test all 40 modules one-on-one with each participant.


Parental feedback was still crucial for gaining deep insights, so we decided to use Zoom to conduct these parent interviews at the same time points in the reading journey.




While the study is still in the data collection phase, there are a few outcomes and lessons/takeaways that we have already uncovered:

  1. Preliminary findings show that some users may find it difficult to understand and accept this new way of reading. This may mean that adults will have to work with the children initially to help them grasp the new concept. The organization could use this data to build in-app or online resources (such as a cheat sheet) to help parents teach their children the concept, at least initially, since the goal is for the app to teach reading independently.

  2. Parental feedback indicates that parents would love a preview of what to expect and which sounds will be taught at the start of a new module. It might be a good idea to include this as a starter screen at the beginning of each module.

  3. Screen recordings and feedback show that children seem to enjoy the games included at the end of the modules. The app could be a great aid to the regular reading curriculum taught in schools.


User Flow: A major problem we encountered was the participant drop-off rate. This was a large study with multiple modules over multiple sessions, along with interview check-ins. The team had to change the user flow multiple times to make it easier for participants to stay committed to the process. A mini-pilot study to understand participants’ willingness to commit to such a program, the incentives needed, and other factors should perhaps have been conducted first to determine the best user flow.

Incentives: Due to budget constraints and the need to spend on resources like a UX research tool or (initially) the test center, we assumed that learning to read would be incentive enough to commit to the program. In retrospect, and with a bigger budget, it would have been ideal to provide at least a modest monetary incentive to participants.

Like what you see?

Let's chat.


© 2020 by Swati Nikumb