At Guide2Fluency.com, in-depth product research and testing is of the utmost importance to us.
The practical, first-hand way we test language apps provides an objective, transparent baseline for the rating system on our website. The outcomes of our tests determine which language learning products we recommend to our readers.
Every language app and tool featured on our site has actually been tested by the team. Over the years, our testing methodology has evolved from simply using the apps ourselves to conducting more technical analysis and diving into the science behind them. Now, in addition to our practical, real-world use test, we also examine more technical components, such as speech recognition accuracy, literal vs. understood translations, and the use of spaced reviews.
Each language app is analyzed based on a set of factors to ensure our data and recommendations are transparent, objective and independent. The categories we test for language apps include: use of conversational and verbal practice, inclusion of visual elements, accuracy of speech recognition technology, program pace, pronunciation correction, spaced review framework, user interface, and more.
Our testing team brings years of experience and expertise in evaluating language learning systems. Additionally, the different languages our team members speak, and our varied strengths and weaknesses as learners, offer readers a diverse set of perspectives on the apps we review.
Continue reading for a more detailed overview of our testing methodology.
Our product testing team is composed of linguists, polyglots, and language teachers and tutors, all with years of experience. This gives our team, and the content they create, a diverse and well-informed viewpoint on language learning products.
For more detailed information about each team member, please see our About Us page here.
About Our Testing
To create the most practical and useful content possible, we conduct all of our reviews by actually using the apps we cover. We use the apps we discuss every day, learning new languages along the way, and in many cases, struggling right alongside our readers. Although this is a pain from a practical standpoint, it adds significant value to our testing process.
This approach gives us a distinct advantage over large media outlets that post reviews of various language learning companies and their products without actually using the apps themselves. The content we create is more authentic and provides a real-world viewpoint.
Beyond this practical, real-world perspective in product testing, however, our team also has the technical know-how. Our lead tester, Mathias, speaks four languages at a C1 or C2 level. He understands what goes into the design of language learning apps, as well as what exactly learners need from their language learning tools to be successful.
For every language program we test, we assign the app a score of 1 to 5 on a number of factors. The criteria we consider include:
- use of conversational and verbal practice
- inclusion of visual elements (images, videos)
- accuracy of speech recognition technology
- grammar instruction
- variety of drills and exercises
- program pace
- lesson length and detail
- pronunciation correction
- spaced review framework
- user interface
This wide range of factors highlights the importance of the diversity of opinions our team shares, as each factor can vary dramatically by tester. We always average the scores of our testers to reach our final results for each language learning program.
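As a simple illustration of the averaging step described above, the sketch below shows how per-tester scores might be combined into a final result. The criterion names and numbers are invented for the example, not actual review data:

```python
def average_scores(tester_scores: list[dict[str, int]]) -> dict[str, float]:
    """Average 1-to-5 scores for each criterion across all testers."""
    criteria = tester_scores[0].keys()
    return {
        c: round(sum(s[c] for s in tester_scores) / len(tester_scores), 2)
        for c in criteria
    }

# Two hypothetical testers scoring the same app:
final = average_scores([
    {"speech_recognition": 4, "user_interface": 5},
    {"speech_recognition": 3, "user_interface": 4},
])
print(final)  # {'speech_recognition': 3.5, 'user_interface': 4.5}
```

Because each tester can score the same factor quite differently, averaging smooths out individual bias while the per-tester scores preserve the range of opinions.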
Additionally, this is why we frequently include viewpoint callouts in our content, so readers have a rich cross-section of perspectives.
If you have any questions about our testing methodology or process, please contact us at info@Guide2Fluency.com.