FAQs
-
Since 2005, Girls in the Game has worked with a team from Loyola University Chicago’s Psychology Department, under the direction of Amy Bohnert, PhD, to evaluate the effectiveness of Girls in the Game’s programs. Thanks to their long-term commitment to the project, the Loyola team knows the organization well and draws on years of experience with our goals, curriculum and data to provide accurate, detailed evaluations. Their partnership has been immensely beneficial to Girls in the Game over the years.
-
We use a validated pre- and post-survey in our long-term programs like After School and Teen Squad. This means that we use “surveys and screening questionnaires that have been tested to ensure production of reliable, accurate results” (University of California San Francisco). In layman’s terms, we aren’t just making up questions that we think will measure confidence or leadership; instead, our Loyola team finds validated survey questions that have been tested and proven in previous studies to measure the outcomes we’re looking for in our programs.
-
Girls take the first survey at the start of their season and take the same survey again at the end of the season to gauge change. We customize the survey to evaluate outcomes that girls will cover in the curriculum that season. So, if participants will be covering our Healthy Relationships curriculum, we will tailor that survey to measure their progress and beliefs around healthy relationships. Additionally, there are three components that we always measure: teamwork, enjoyment of physical activity, and resilience. Once we’ve gathered the surveys for the school year, the Loyola team takes over, presenting the final results to Girls in the Game.
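For readers who like to see the arithmetic spelled out, here is a minimal sketch of how a pre/post comparison works: average each outcome's scores at the start of the season, average them again at the end, and look at the difference. The outcome names and numbers below are made up for illustration; they are not actual Girls in the Game results.

```python
# Illustrative pre/post comparison. Each outcome maps to a list of
# participants' scores; the change is post-season average minus
# pre-season average. All names and values are hypothetical.

def average(scores):
    return sum(scores) / len(scores)

pre  = {"teamwork": [3.2, 3.5, 3.1], "resilience": [3.0, 2.8, 3.4]}
post = {"teamwork": [3.8, 3.9, 3.6], "resilience": [3.3, 3.1, 3.5]}

# Positive numbers mean girls' average scores rose over the season.
change = {outcome: round(average(post[outcome]) - average(pre[outcome]), 2)
          for outcome in pre}
print(change)
```

A positive change suggests growth on that outcome over the season, which is exactly what the matched start- and end-of-season surveys are designed to reveal.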
-
In some of the completed data, you will see what’s called a composite score. This simply means that the survey asked multiple questions about one topic, and the answers were combined into a single score. For example, a composite Safety Awareness score combines girls’ agreement with the statements “I know when I am in an unsafe situation” and “I can make a plan to get out of an unsafe situation.”
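As a rough sketch, combining items into a composite score can be as simple as averaging a participant's responses to the items that measure one topic. The item names and the 1–5 agreement scale below are illustrative assumptions, not the actual survey instrument.

```python
# Hypothetical composite score: average the responses to the items
# grouped under one topic. Item names and the 1-5 scale are assumptions
# for illustration only.

def composite_score(responses, items):
    """Average this participant's responses across the given items."""
    return sum(responses[item] for item in items) / len(items)

# Two hypothetical Safety Awareness items, rated 1 (disagree) to 5 (agree).
safety_items = ["knows_unsafe_situation", "can_plan_exit"]
responses = {"knows_unsafe_situation": 4, "can_plan_exit": 3}

print(composite_score(responses, safety_items))  # 3.5
```

Averaging related items this way smooths out the noise in any single question, which is why composite scores are reported instead of item-by-item results.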
-
First, we always want to look closer at the data. Did girls score high on the initial survey? If so, we don’t expect to see much change over time. Or did one program site account for most of the negative results? Then it’s time to evaluate why that site was different from the rest. Next, we look at our curriculum. Is it sufficient and engaging for participants? Should we form a focus group of participants to get more information? Finally, we need to consider outside influences. During the pandemic, we saw a huge increase in the number of “I don’t know” responses to survey questions that asked girls to select “Yes,” “No” or “I don’t know.” The mass uncertainty that we all experienced that year was directly reflected in the survey results.
-
Surveying youth can be a big challenge. The younger the participant, the harder it is for them to look beyond what they’re feeling in the moment to how they’ve been feeling over the past months. A young participant who has a bad day at school right before filling out their survey may not show any progress. But when combined with research by larger institutions (like the Women’s Sports Foundation) into overall trends, surveys give us vital insight into what is and isn’t working. This is also why it’s so important for us to work closely with the Loyola team to find validated measures and then adjust the questions to each grade level. High school participants can fill out much more detailed surveys than elementary school participants. Additionally, we gather data through a variety of means: the Loyola evaluation is our biggest method of measuring programs, but we also use satisfaction surveys, focus groups and coaches’ reports.