ResLife Student Staff Training: Is It Working? (Part 1)

Across the country, a week or two before students return to campus, housing student leaders come back early to prepare and train for the upcoming year. This often involves team building, crisis management, facetime with campus resource leaders, and hopefully some time carved out for hall preparations. After the whirlwind of training season and opening is complete and we catch our breath, a question lingers over housing departments: did the training work?

Posts in this series:
Part 1 | Part 2 – Behind Closed Doors | Part 3

Kirkpatrick’s Four Levels of Training Evaluation

Let’s assume we are all on the same page here: we believe resident advisor training should be assessed. Not only because it is often a large budget expenditure, sometimes in the thousands of dollars, but also because we want to make sure our resident advisors are prepared to perform their jobs. A best practice for answering the question “did the training work?” is to have clearly identified metrics for what success looks like before the training happens. But where should you focus those metrics, and how can you make sure they are grounded in something? May I present Kirkpatrick’s Four Levels of Training Evaluation (1994):

A visualization of Kirkpatrick’s Four Levels of Training Evaluation, a four-level pyramid. Level 1 says reaction with the question “did the learners enjoy the training”, level 2 says learning with the question “did learning transfer occur”, level 3 says impact with the question “did the training change behavior”, and level 4 says results with the question “did the training influence performance”.

Source: KudoSurvey

This model was originally built for corporate training, but the translation over to training in student affairs is an easy shift! The model reads from the bottom to the top. Kirkpatrick’s model says that if learners are enjoying the training, then they can learn the content. If they learn the content, then they can change their behavior because of the training. If they change their behavior, they can then improve their performance. A positive outcome at one level does not guarantee the next level will be fulfilled, but it makes reaching it a lot easier. Each of these levels has different needs, which means you will likely need to complete a multi-phased, mixed-method assessment to gather information. Levels 1 and 2 are usually measured immediately within training, while levels 3 and 4 are explored over time. In this post, I am going to dig into assessing level 1 and level 2. In two future posts, I will explore level 3 and level 4 individually.

Level 1 – Reaction

From my experience, this is the aspect of resident advisor training that is assessed most often and probably something that will come easily to your team. I have seen this done as surveys and as focus groups; both allow a broad swath of participants to have their voices heard. I personally prefer a focus group and a survey together. My preference is for the focus group to be held in each resident advisor team’s meeting after training is fully completed, since it allows for follow-up questions that add clarity about how people felt. Many of the other methods I will share throughout this series are more quantitative, so I encourage you to do something qualitative here to let your resident advisors voice their experience. However, success metrics are often quantitative, so supplementing that focus group with a daily survey on the sessions will let you measure your metrics more effectively. Running the survey daily captures feelings more accurately, since asking people to reflect on a distant past tends to produce less reliable responses.

Examples for metrics at level 1 could be:

  • Resident advisors describe at least 60% of their sessions as fun.
  • Every session is described as at least moderately useful by 80% of resident advisors.
  • Resident advisors have more things that they enjoy about training than they would change about training in the future.
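Once the daily survey data is exported, metrics like these reduce to simple threshold checks. Here is a minimal sketch in Python; the data shapes, the 1–5 rating scale, and the function names are my own assumptions for illustration, not a real Qualtrics export format:

```python
# Sketch: checking the example level 1 metrics against daily survey data.
# All inputs below are hypothetical, not an actual survey tool's export.

def share_fun(sessions_marked_fun: int, total_sessions: int) -> float:
    """Fraction of the day's sessions one resident advisor described as fun."""
    return sessions_marked_fun / total_sessions

def session_meets_usefulness_bar(ratings, threshold=3, goal=0.80) -> bool:
    """True if at least `goal` of resident advisors rated a session at
    `threshold` ("moderately useful") or higher on an assumed 1-5 scale."""
    meeting = sum(1 for rating in ratings if rating >= threshold)
    return meeting / len(ratings) >= goal

# Hypothetical day: one RA marked 3 of 5 sessions as fun (meets the 60% metric);
# one session's usefulness ratings from 4 RAs: only 3 of 4 rate it >= 3 (75%),
# so it misses the 80% bar.
print(share_fun(3, 5))
print(session_meets_usefulness_bar([4, 3, 5, 2]))
```

The point of writing the checks down this way is that each metric becomes a yes/no answer you can report, rather than a pile of raw ratings.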

Here is what a daily survey could look like that measures these example metrics:

A four question Qualtrics survey. The first question is a select-all-that-apply question that asks which of the following sessions from today would you describe as fun, the second question is a Likert scale question block which asks to what extent were the following sessions from today useful, the third question is a short answer response that asks what about training today was enjoyable, and the fourth question is a short answer response that asks what about training today would you want to see changed in the future.

Beyond the daily survey, here are potential questions that could be used in a focus group after training is fully completed:

  • What are your thoughts on the timing of training?
  • What was something you enjoyed about training?
  • What was something you did not enjoy about training?
  • Were there any particular sessions that stood out as positive or negative? Which ones and why?
  • What is something we could do next year to help make training 10% better?

Level 2 – Learning

Once we understand how resident advisors are feeling about sessions, we can turn to what they are actually learning. This level may be new for you and your department. I often see schools think they are measuring this level because they are asking students how confident they are in applying their learning. Measuring learning is not the same as measuring confidence in learning. There is value in understanding confidence in learning for things like deciding on a continuing development schedule; however, I will not be focusing on that approach here.

I would recommend building both a test of learning and some kind of practical practice observation, like a Behind Closed Doors activity. I am going to spend a full blog explaining how to build up a practical practice experience with an observation protocol as Part 2 in this series, so for now I am only going to explore the testing of learning.

When considering what to test, the learning outcomes for each of your sessions will guide your test construction. A learning outcome is a statement of what participants will be able to do after a session. For example, if you are training resident advisors on your Residential Curriculum, a learning outcome for that session might be that resident advisors will be able to name the learning goals of your Residential Curriculum. Every hour-long session will likely have no more than 3 learning outcomes, which should answer the question “why are we holding this session?” These learning outcomes will let you build your metrics, because what resident advisors are expected to learn should be what you are measuring in your test. When building your metrics, I encourage you not to set 100% as your goal consistently, since it is just not a realistic goal. Instead, ask yourself what a realistic goal is, and if you are setting 100% as your goal, ask why it needs to be 100%.

Examples of metrics at level 2 are:

  • 60% of resident advisors can name all the learning goals of our Residential Curriculum.
  • 95% of resident advisors know who to contact in a duty crisis.
  • 90% of resident advisors remember the correct order of the steps to operate a fire extinguisher.

I recommend running a quiz daily to measure the learning that has happened. It is easy to add this level 2 quiz to the end of the daily level 1 survey. A daily quiz allows you to address any misconceptions that may have arisen during training immediately, instead of letting those misconceptions fester. Opening the next day of training with an overview of the previous day’s quiz gives you space to reinforce the learning and realign as necessary. Plus, having the quiz the same day lets you have higher confidence that the learning you are measuring came from that day’s sessions and not something different, like continued conversations within staff spaces.

Here are a few quiz items that could be used to measure the learning from training that day:

A three question Qualtrics survey. The first question is a short answer response that asks what are the four learning goals within our Residential Curriculum, the second question is a multiple choice question which asks who must be contacted for every duty crisis, and the third question is a drag and drop question which asks what is the order of the steps to operate a fire extinguisher.

If quiz construction isn’t something you have a lot of experience doing, I recommend checking out Chapter 8 in James McMillan’s Classroom Assessment: Principles and Practice that Enhance Student Learning and Motivation. It is a great resource to start thinking about quiz question construction and is full of best practices.

But how do I do it?

One of the biggest questions once you have your assessment plan is how to actually get it done. After all, a daily assessment sounds like a lot of work, and who knows whether the resident advisors will submit a daily survey and quiz.

I recommend adding assessment time to your daily schedule. Set aside a 15-minute block at the end of every day where your resident advisors complete their daily survey and quiz. I had each of my staff show me their completion screen to be excused for the day, so we knew we had 100% completion from my team.

Beyond gathering the assessment data, use it during training. Start every morning with a 15-minute quiz rundown. If you use Qualtrics or Google Forms, the scoring function will give you quick analytics for commonly missed questions. The drawback here is that written response questions do not get graded automatically and require a person to grade them manually. Using 15 minutes every morning to go over commonly misunderstood content allows you to realign on the correct information and showcases that your department cares about their training and wants them to succeed. Plus, resident advisors know folks are actually checking out their responses!
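If your survey tool’s scoring is limited, or you want to crunch the exported responses yourself, flagging commonly missed items is a small script. This is a hypothetical sketch: the column names, answer key, and 40% miss threshold are my own assumptions, not any real Qualtrics or Google Forms export format:

```python
from collections import Counter

def commonly_missed(responses, answer_key, miss_rate=0.4):
    """Return quiz items missed by at least `miss_rate` of respondents.

    `responses` is a list of {item: answer} dicts, one per resident advisor;
    `answer_key` maps each auto-gradable item to its correct answer. Written
    responses still need a human grader and should be excluded here.
    """
    misses = Counter()
    for response in responses:
        for item, correct in answer_key.items():
            if response.get(item) != correct:
                misses[item] += 1
    total = len(responses)
    return sorted(item for item, count in misses.items()
                  if count / total >= miss_rate)

# Hypothetical export from three resident advisors.
key = {"duty_contact": "professional staff on call",
       "extinguisher_step_1": "pull the pin"}
answers = [
    {"duty_contact": "professional staff on call", "extinguisher_step_1": "pull the pin"},
    {"duty_contact": "front desk", "extinguisher_step_1": "pull the pin"},
    {"duty_contact": "front desk", "extinguisher_step_1": "pull the pin"},
]
# Two of three RAs missed the duty contact question, so it gets flagged
# for the next morning's rundown.
print(commonly_missed(answers, key))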


Whew, give yourself a pat on the back, this was a lot of information! I hope that this framework and these examples can help you build a meaningful, strategic assessment plan for your resident advisor training. By practicing meaningful and strategic assessment, your department can more effectively tell your story and have actionable data.

I will be back soon with part 2 where we will dive into observation protocols for practical practice sessions, like Behind Closed Doors!
