We’re all familiar with the term “garbage in, garbage out.” While the expression is harsh, the underlying principle is sound: you can’t get meaningful results from evaluation or survey questions that don’t ask what you need to find out.

Student evaluation of teaching in higher education has come under fire lately for issues ranging from a correlation with grade inflation,[1] to bias against female instructors,[2] to outright uselessness.[3] While these studies have important points to make, let’s not forget the maxim above: is the problem with the concept of student evaluations or with the content of the evaluation survey?

Indeed, if your survey content is a simple “smile sheet” (like a customer satisfaction card at Denny’s, one of the examples cited in the articles above) or contains poor or superficial questions, you may find that the data you’re collecting isn’t worth the effort—or worse, impedes your instructors’ ability to provide quality classes. But if, instead, you work to create meaningful questions that must be answered thoughtfully, that focus on something more objectively measurable than “did you like this?”, you may find that you increase the number of students willing to respond and the quality of the responses you capture.[4]

Before we begin discussing tips and tricks, let’s define some terms:

  • Evaluation/Survey
    The instrument, also called a questionnaire, you use to gather information. You can distribute an evaluation or survey formatted for mobile devices, online, or on paper. Depending on your survey solution, you may be able to distribute your questionnaire using all three methods and collect the information into a single database.
  • Item
    A combination of question/stem and answer choices.
  • Question Stem
    The part of an item where you pose the question to be answered.
  • Answer or Response Choices
    The options you provide to answer the stem.
  • Open Question
    A question stem with no answer choices. Respondents type extended responses as free text.
  • Accessibility
    The design of products and environments to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design.[5]

Armed with these definitions, let’s examine four categories of tips and tricks that will help you create surveys that deliver real answers you can use to drive improvement and change. These tips are also presented as a checklist at the end of this document (if you’re reading the PDF) or made available for download (if you’re reading it online).

1: Plan Your Evaluation Survey

As is so often true when getting useful results matters, a strong evaluation starts with a plan. It’s important to take a holistic approach so that you’ve clearly identified what you want to measure and how measuring it fits your overall goals before beginning to write survey items.

Consider the following when developing your plan:

  • How does this evaluation survey provide additional value to grow relationships?
    For example, are you trying to engage students more in the educational process? Are you trying to inspire instructors to promote and follow continuous improvement in their courses? If so, you need to design the survey so that those stakeholders get answers they’re looking for, in addition to any institutional goals you might have.
  • Can you use the survey (the development process, the results, or both) to create collaboration opportunities?
    Whether you follow a centralized or decentralized approach to evaluations, it can be helpful to reach out to a variety of teams to collaboratively develop questions and brainstorm ways to use the results.
  • Can you expand surveys beyond the current user base?
    Maybe there are departments in your institution that do not evaluate instruction. Maybe there are study programs or campus services that could benefit from being looped into an evaluation-of-instruction process. You might even discover that there are strong cases for alternate uses for your survey tool, making it easier to justify the expense of the solution. For example, many of our clients also use our survey solution for 360° performance evaluations, for gathering opinions on various institutional initiatives, for alumni surveys, and more.[6]
  • Are you gaining insight into behaviors?
    Part of the purpose of a survey is to gain information about perceptions. Another very valid use is to gain information about behaviors: do students regularly attend lab? Do they use campus resources, such as a learning center, as a regular part of their coursework? Do instructors routinely perform certain actions in the classroom that you want measured? There are many questions you can ask during a course evaluation to identify and evaluate student and instructor behaviors.
    Tip: This is a great way to move beyond “how do you feel about…?” questions.
  • Are you using surveys to promote consistency?
    Consider whether you want a set of questions that all departments and programs must ask. There’s a common saying in management: you get what you measure. If students and instructors see they’re being measured on certain actions, they’re likely to pay more attention to those actions, which can increase consistency of instruction regardless of instructor or department.
  • Can you easily use data captured from surveys?
    You’re not doing a survey simply because it’s a box to be checked (at least, we hope you’re not); you have some goal in mind or some use for the data. What is that use? Are you asking questions that deliver answers you can use to further that goal? Are you distributing results so that all stakeholders can see them and participate in reaching that goal? Our clients have found that the more they promote survey results and the effects of those results, the more likely students and instructors are to participate. An increased response rate strengthens your case for surveys as a worthwhile activity and makes the results themselves more reliable. Further, your reports should help you understand your findings, identify problems, and suggest opportunities for improvement. Consider how you want to use results, and plan your questions accordingly (a short sketch after this list shows one simple way to turn raw responses into a usable report).
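As a deliberately simplified illustration of “easily using the data,” the sketch below aggregates raw responses into per-course response rates and mean item ratings, flagging items that may warrant follow-up. The file layout and column names (course_id, item_id, respondent_id, rating) are hypothetical assumptions; your survey solution’s export format will differ.

import csv
from collections import defaultdict

def summarize(responses_csv, enrollment, flag_below=3.5):
    """Compute response rate and mean rating per course, flagging weak items.

    enrollment is a dict mapping course_id -> number of enrolled students.
    Column names are illustrative, not a real export format.
    """
    ratings = defaultdict(list)       # (course_id, item_id) -> list of ratings
    respondents = defaultdict(set)    # course_id -> set of respondent ids

    with open(responses_csv, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["course_id"], row["item_id"])
            ratings[key].append(float(row["rating"]))
            respondents[row["course_id"]].add(row["respondent_id"])

    report = []
    for (course, item), values in sorted(ratings.items()):
        mean = sum(values) / len(values)
        rate = len(respondents[course]) / enrollment[course]
        report.append({
            "course": course,
            "item": item,
            "mean_rating": round(mean, 2),
            "response_rate": round(rate, 2),
            "flagged": mean < flag_below,   # candidate for follow-up discussion
        })
    return report

Called with the exported responses file and a dictionary of enrollment counts per course, this produces rows you could drop directly into a stakeholder report; the 3.5 flagging threshold is an arbitrary example, not a recommendation.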

A thoughtful plan, prepared before you begin writing items, streamlines the item-writing process and helps ensure that you can leverage the results you get to drive institutional quality improvement.

2: Write Items Carefully

Once you have a plan, you’re ready to start writing evaluation survey items. While it is beyond the scope of this article to tell you what questions to write,[7] strong and useful survey items share some basic characteristics:

Overall tips for writing items

  • Make sure all items in the survey align to the overall goals for the survey. Using your plan, evaluate each item.
  • Avoid writing overly complex items that are difficult to read. Those items are more likely to be skipped.
  • Write items clearly, without ambiguity. Another facet of this tip is to ensure that you are not doubling up on questions. “And” and “or” are your enemies in item writing because they can set up a conflict or a false dichotomy. If your question asks whether a course was entertaining and informative, how is the respondent supposed to answer if it was entertaining but not informative or vice versa?
  • Ensure items are consistent with each other both grammatically and conceptually, particularly if you’re reusing questions from previous years or matching different surveys across departments.
  • Ensure that the results obtained from the question create feedback that report recipients can use to take action and make improvements.
  • Verify that questions are simple (e.g., they do not use convoluted vocabulary or grammar), specific (e.g., they measure one thing), and short (e.g., they can be easily read and comprehended at a glance—fewer than 25 words is a good guideline).
  • Avoid jargon and acronyms. If you must use acronyms, be sure to spell them out at first use.
  • Develop questions that are objective and do not predict or signal a “correct” answer. You want honest feedback, and if you bias the question to signal a preferred answer, you’re not going to get valid feedback.
  • Ensure questions promote honest replies. Your survey solution and any anonymization features it provides can support some of this, but you can also make sure you’re phrasing the question to secure an honest response.

For example, in open-ended questions, you can include a prompt reminding students not to name names in their free-text responses. As another example, consider omitting demographic questions from student evaluation surveys. Typically that info can be linked on the back end for statistical purposes, without compromising anonymity.

  • Evaluate whether questions can be used year over year so that you get consistent data over time. Avoid questions that contain dates or refer to contemporaneous events that may not apply the next time you perform the evaluation.
  • Make sure questions apply to all respondents. For example, replace questions like, “If you participated in a lab, rate the lab exercises on a scale of 1-5” with questions like, “Did you participate in lab? Y/N.” Then you can use those questions to set up what’s called “branching” in online surveys or “skip next” in paper surveys.

Branching in online surveys opens or closes question sets based on the response to an earlier question. Skip next in paper surveys clearly communicates to the respondent that they can simply move on to the next main question. These tactics help keep your survey focused and short and prevent respondents from abandoning a survey that looks “too long.”
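To make the branching idea concrete, here is a minimal sketch of how an online survey might show a follow-up item only when a gate question was answered “Yes.” The question IDs and the show_if structure are illustrative assumptions, not a description of any particular survey product.

# Minimal sketch of branching: a follow-up item is shown only when an
# earlier "gate" question was answered Yes. IDs and fields are illustrative.
questions = [
    {"id": "lab_attended",
     "stem": "Did you participate in the lab?",
     "choices": ["Yes", "No"]},
    {"id": "lab_rating",
     "stem": "Rate the lab exercises.",
     "choices": ["1", "2", "3", "4", "5"],
     "show_if": ("lab_attended", "Yes")},     # branch condition
]

def visible_questions(questions, answers):
    """Return only the questions whose branch condition is satisfied."""
    shown = []
    for q in questions:
        condition = q.get("show_if")
        if condition is None or answers.get(condition[0]) == condition[1]:
            shown.append(q)
    return shown

# A respondent who answered "No" never sees the lab rating item:
print([q["id"] for q in visible_questions(questions, {"lab_attended": "No"})])
# ['lab_attended']

A paper survey achieves the same effect with an explicit “If No, skip to the next question” instruction next to the gate item.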

Tips for writing response choices

  • If it is a scaled question (e.g., “Rate X on a scale of Y–Z”), make sure the response choices use an appropriate rating scale and labels. Some tips to consider for this:
    • A scale with 4-5 options often gathers as much useful feedback as a scale with more than 5 options and is easier for respondents to answer accurately.
    • Scales should always be presented in the same order (either from positive to negative or vice versa) to prevent misinterpretation.
    • Consider adding a “does not apply” option to round out your scale choices.
  • Align response choices with the stem. For example, say your question is, “How many hours did you spend studying for this course?” A poor set of response choices would be “Very often, often, sometimes, rarely, never.” You can see how those answers are not how someone would answer the question. Either reword the question or rework the response choices to something like, “10 or more hours, 7–9 hours, 4–6 hours, 1–3 hours, less than an hour.”
  • Avoid response choices that overlap. Using the previous example, you would not provide response choices of “10 or more hours, 7–10 hours, 4–7 hours, 1–4 hours, an hour or less.” If someone studied 7 hours, which answer should they choose? They’re more likely to skip the question than to pick. (A short sketch after the open-ended-question pros and cons below shows one way to encode ranges so they cannot overlap.)
  • Make sure response options, as much as possible, do not influence answers. Similar to avoiding a leading question, response choices should also be objective and not lead respondents to the “correct” answer or influence the option chosen.

Continuing our example, the hour ranges given above may be far too long and might lead respondents to inadvertently inflate their answers toward how much they think they should have been studying. This is natural; students do not necessarily track the exact number of hours they study, but they usually have a very good idea of how much they study relative to other students in the class.

So, if a student knows that she has studied a medium amount compared to other students, she might pick the middle answer, regardless of how well it actually matches the time she spent studying. Make sure your answer choices are realistic, or even just provide an open box for respondents to type a response.

  • Consider including open-ended questions where it makes sense. Experts are divided over the true utility of open-ended questions; below are some pros and cons to help you decide what’s best for your survey:

Pros

  • No limit on responses
  • Provides rich, individual feedback
  • Responses may be more precise
  • Reduces surveyor influence/bias

Cons

  • Prone to short answers (less so in online surveys)
  • More likely to be skipped
  • Can take more time to analyze and group responses
  • Responses tend to be polarized (only those very happy or very unhappy are likely to take the time to respond)
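Returning to the study-hours example above, here is a minimal sketch that encodes each response choice as an explicit numeric range and checks that no two ranges overlap before the survey is finalized. The ranges simply mirror the example; the overlap check itself is the point.

# Minimal sketch: encode the hour-range choices as (label, low, high) bounds
# and verify that no two choices overlap. Ranges mirror the example above.
choices = [
    ("10 or more hours", 10, float("inf")),
    ("7-9 hours", 7, 9),
    ("4-6 hours", 4, 6),
    ("1-3 hours", 1, 3),
    ("less than an hour", 0, 0.99),
]

def has_overlap(choices):
    """Return True if any two response ranges share values (e.g., 7-10 and 4-7)."""
    spans = sorted((low, high) for _, low, high in choices)
    return any(prev_high >= low
               for (_, prev_high), (low, _) in zip(spans, spans[1:]))

print(has_overlap(choices))  # False: each possible answer maps to exactly one choice

Running the same check on the overlapping set from the earlier example (“7–10 hours, 4–7 hours,” and so on) returns True, which is your cue to rework the choices.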

3: Edit Items Before Finalizing

Everyone likes to believe that they write cleanly enough to not need to review their work after they’re done drafting. Unfortunately, we are almost always blind to our own mistakes. Never underestimate the value of at least one other set of eyes on your work. Beyond that, if you’re working in a collaborative environment, you need to review items developed as a group so that they speak with one voice. Everyone has slightly different writing styles, but those differences should not be apparent to survey respondents.

A robust and team-based editing process adds rigor and consistency to your surveys:

  1. Round 1 provides overall editing and a consistency check. Consider involving stakeholders in this round to ensure you have covered all the topics needed.
  2. Compile and implement received edits (the ones that make sense).
  3. Round 2 is a more specific edit that looks for some of the same things as Round 1, but also adds a bias check.
  4. Repeat step 2.
  5. Copyedit and proofread the final survey.
    NOTE: Consider having a professional copyedit and proofread your survey. Professional editors are trained to look for consistency, coherence, and grammar issues. They bring a fresh viewpoint to your work and are typically more rigorous than reviewers who’ve already seen the material, in some cases twice, before.

Don’t be too alarmed by this process. Although it may look complicated, it generally happens much faster than you think, especially if you set and enforce deadlines. Producing a clean survey, free of distracting typos and unclear information, makes it much easier for respondents to provide the answers you need.

Tips for ensuring a robust and effective review:

  • Allow time for the process above, but don’t allow your survey to become trapped in an endless review cycle. Typically, two rounds of review and edit are enough before you copyedit and proofread. Round 1 provides the initial feedback, and Round 2 verifies that the necessary edits were made.
  • Include stakeholders in your review. Consider seeking input from at least one instructor and at least one student, in addition to representatives from the different departments you work with. Those fresh viewpoints can identify potentially embarrassing problems so you can address them before the survey goes out.
  • Consider providing your plan, a checklist of the items described in this article, or both to your reviewers. That way, reviewers have a basis for evaluation; they can see what you’re trying to accomplish and help support it. Further, providing direction to your reviewers ensures you get more meaningful feedback on consistency, use of jargon or acronyms, possible bias, and other important aspects of your survey. Don’t just hand them the survey and say, “See what you think.”
  • Encourage reviewers to keep the goal in mind. Some stakeholders may want to add questions because they “want to know,” which can result in a survey that is unworkably long. Unless those added questions clearly speak to the overall goal, it’s okay to resist adding them.

4: Don’t Forget Item Layout and Survey Accessibility

Creating an effective survey doesn’t stop with writing and editing questions. Think about how those questions will be represented on screen or on paper. Consider the visual appearance of your survey—and don’t forget about accessibility.

Tips for layout:

  • Use clear and appropriate fonts. Choose a font that is easy to read, not one with elaborate detailing or a “fun” appearance. You want respondents to take your survey seriously and complete the questions. That means making the appearance as clean as possible with strong, readable fonts, easily consumed by a screen reader (more on accessibility later). Don’t set the fonts too large or too small; typically a font size of 12-14 points is effective.
  • Avoid unnecessary bolding and italicizing. Remember: if you emphasize everything, you have emphasized nothing. Use emphasis appropriately and sparingly:
    • Bolding grabs the eye and draws attention to a word, phrase, or sentence. Readers will usually skip non-bolded text to focus on bolded text.
    • Italicizing slows the reader down. Because the letter shapes are tilted, readers must read the material more slowly to ensure they understand its meaning.
    • Avoid underlining as an emphasis technique. While it does draw attention, in today’s world an underline usually signals a hyperlink to be clicked. That may not be too much of a distraction on a paper survey, but it will cause frustration for online survey takers as they try to click on an underlined word.
    • Also avoid using ALL CAPS. While this may seem emphatic, it comes across as shouting and can distract or even repulse respondents.
  • Provide good visual cues:
    • Include white space between questions to keep them visually distinguished from one another. However, avoid including too much white space between a question stem and the response choices. Doing so can cause respondents to disconnect response choices from the question stem.
    • It can be effective to set all questions in bold, although that seems contrary to the tip above. Remember, though, that the eye is drawn to bolded material, so setting questions in bold can help respondents navigate the survey and more easily locate the next question.
    • Avoid including too many response choices. Respondents will not necessarily read the entire set before selecting a choice.
  • Consider breaking related questions into sections, pages, or tabs. Doing so makes it easy to provide special instructions for certain areas.
  • Speaking of special instructions, provide instructions or tips for each question where appropriate. For example, if the question type supports multiple selection, be sure to identify that respondents may choose all that apply. Even when the online control makes this obvious, it’s still a good practice to provide cues for your respondents. Make sure special instructions are visually apparent.

Tips for accessibility:

U.S. public institutions are required under the Americans with Disabilities Act to ensure materials are accessible to those with disabilities. In fact, there is now legal precedent for this law to be applied even to privately owned organizations.[8] European institutions have similar requirements under the European Accessibility Act. Even if you are not required by law to provide accessibility, it is still a good practice to make your surveys accessible to the broadest number of respondents possible.

Build accessibility into your survey from the beginning. Some solutions, such as Class Climate by Scantron, include features that support accessibility. However, even if your solution already provides these tools, they may not prevent you from designing an inaccessible survey. Unfortunately, it’s easy to design a survey that’s less accessible than you think, but you can avoid most common errors by considering the following:

  • Choose a font that screen readers can easily consume and that those with visual impairments who do not require screen readers can also see clearly and easily. Avoid decorative fonts, type set at very small sizes, and other formatting options that can make your survey difficult to see.
  • Carefully consider colors. Do not use color or imagery simply for decoration, and think very hard before you set colored text on a colored background. Doing so can make your survey inaccessible to respondents with a wide variety of visual impairments, such as color blindness.
  • Regardless of the colors you choose, make sure there is strong contrast between the background of your survey and the text on it (a short sketch after this list shows one common way to check contrast).
  • Make sure the selection method for your response choices is accessible to assistive devices. Do not place response choices too close together or set them too small; otherwise, respondents with physical disabilities will have a hard time completing your survey.
  • Ensure that all images used have alternate text specified to support screen readers and that images alone do not convey critical information. Further, avoid using emoticons or non-standard symbols in your text—screen readers treat those as images and will be unable to parse the information.
  • Consider using redundant signals wherever possible. For example, traffic lights use both color and position as a cue to the action required. Look for places where you can use more than one signal in the same place to identify desired behavior.
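As a concrete illustration of the contrast tip above, the sketch below computes the WCAG 2.x contrast ratio between a text color and a background color; WCAG AA calls for a ratio of at least 4.5:1 for normal-size body text. The specific colors are arbitrary examples.

# Minimal sketch: WCAG 2.x contrast ratio between text and background colors.
def relative_luminance(rgb):
    """Relative luminance of an (R, G, B) color with channels 0-255 (WCAG 2.x)."""
    def linearize(channel):
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(ch) for ch in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio from 1:1 to 21:1; WCAG AA expects >= 4.5 for normal text."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Dark gray text on a white background comfortably passes AA:
print(round(contrast_ratio((51, 51, 51), (255, 255, 255)), 2))  # about 12.6

Many online tools perform the same check, but it can be handy to script it when you’re validating an institutional color palette against a survey template.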

Conclusion

The tips in this document should help you create more effective evaluations and surveys. While this may look like a lot to consider, you’re surveying so you can get the results you need to make decisions. Why not use a strong plan, effective questions, and useful design to ensure that those results are as valid and reliable as possible?

Footnotes

[1] Tobin, Michael. “UO Study Finds Correlation between Grade Inflation, Student Course Evaluations.” Emerald Media. Daily Emerald, 18 July 2017. Web. 20 July 2017.

[2] Boring, Anne, Kelli Ottoboni, and Philip B. Stark. “Student Evaluations of Teaching Are Not Only Unreliable, They Are Significantly Biased against Female Instructors.” Impact of Social Sciences. The London School of Economics and Political Science, 08 Mar. 2016. Web. 20 July 2017.

[3] Stark, Philip B., and Richard Freishtat. “An Evaluation of Course Evaluations.” ScienceOpen. ScienceOpen, 29 Sept. 2014. Web. 20 July 2017.

[4] For an example of how one of our clients has done just that, see our case study on St. John’s University [hyperlink].

[5] “About UD.” The Center for Universal Design. North Carolina State University, 2008. Web. 20 July 2017.

[6] See our companion article, Uncommon Uses: Expanding Your Options with Class Climate, at http://www.scantron.com/articles/cc-uncommon-uses for examples and ideas.

[7] Our evaluation solution, Class Climate, includes a bank of prewritten, valid and useful questions you can include in your evaluation surveys, supplemented by any questions you design using these guidelines.

[8] https://www.forbes.com/sites/legalnewsline/2017/06/13/first-of-its-kind-trial-goes-plaintiffs-way-winn-dixie-must-update-website-for-the-blind/#5b9a6921b38a


Ask about Class Climate for course evaluations - Start the conversation