
Fresno State Student Ratings of Instruction (FSSRI)

Frequently Asked Questions

First, make sure that opting out is permissible; each department has its own policy about this. Check Departmental Policies. When you opt out, your department chair will be notified in order to verify that the opt-out is permissible.

Please consider opting out of labs or activity classes that are attached directly to lectures taught by the same instructor, and of independent study or thesis units with very few students.

Navigate to the For Faculty/Instructions page to view a video walk-through of opting out of student ratings for a specific class.

Simply follow the same steps as you did to opt out:

1. From your myEvalCenter page, select the menu item Opt Out of Evaluations.
2. Select the class tile of the class you want to opt back into.
3. Enter your initials in the text field and click Submit Initials.
4. At the top of the screen, text highlighted in yellow will confirm that the class has been opted back in.

No. The dates are set each semester to run during the last two weeks of instruction.

Course evaluations in SmartEvals are open for a two-week period at the end of the semester. This evaluation window is set at the university level to ensure consistency across all courses. The two-week window cannot be shortened or modified at the individual course level. Instructors are encouraged to remind students to complete their evaluations during this time, but the start and end dates will remain the same for all courses.

No. SmartEvals does not offer this function.

The survey consists of 12 required areas (four related to instructional design, four related to instructional delivery, and four related to assessment). These areas are specified and required by APM 322, so you must have at least one item in each of the 12. Each area includes multiple options, and you may have up to 24 total questions: for example, one item in each of the 12 areas plus up to 12 more. You may also add written questions in addition to the standard items.

Navigate to the For Faculty/Instructions page to view a video walk-through of selecting/adding questions.

Yes! To set up your default questions in SmartEvals:

  1. Log in to your SmartEvals dashboard.
  2. Select “Select Questions” → “Modify Default Questions.”
  3. Choose all the questions you want to include, then click “Add Selected Questions.”
  4. If you’ve already chosen questions for a specific section and want to replace them with your default set, select “Override All Selection.”

Once saved, your default questions will automatically apply to all future courses.

If you do nothing, your course(s) will have a survey consisting of the 12 default items. Afterward, the report will be generated and made available to you and your Dean’s Office through SmartEvals, to be placed in your personnel file.

There is automated integration between PeopleSoft and SmartEvals. All courses listed in PeopleSoft are automatically included, and each survey is generated only for the individual designated as the primary instructor in PeopleSoft.

Why we’ve moved to SmartEvals:

A more streamlined and accessible interface: SmartEvals provides a modern, mobile-friendly experience for both instructors and students, simplifying setup and review.
Enhanced reporting capabilities: You’ll see clearer dashboards, flexible filters, and faster access to course- and department-level insights.
Significant cost savings: The new contract reduces overall expenses to the university while maintaining high-quality service.


What you need to know:

Instrument: The questions will remain the same.
Fully online course evaluations: Students access their evaluations using their computer, tablet, or mobile device, and results are collected and reported electronically. Because SmartEvals is designed as a fully digital platform, it does not support paper-based evaluations. This shift ensures:

  • Faster turnaround of reports
  • More reliable participation tracking
  • Enhanced security and confidentiality of student responses
  • Greater accessibility for students, who can complete evaluations on any device

Course evaluation window: Course evaluations are open for a two-week period at the end of the semester, as described above. This window is set at the university level and cannot be shortened or modified at the individual course level.


Historical data: Your historical SRI results from Explorance have been transferred to SmartEvals and made available within the new system.

Anonymity and confidentiality:

  • SmartEvals does not associate specific responses with students' identities.
  • SmartEvals is configured so that instructors do not receive results until after final grades are submitted.
  • SmartEvals is configured so that a report is not generated for a class with fewer than three students.
  • Instructors may read written (qualitative) comments, but without knowing who submitted them.

Potential Problems with Student Ratings of Instruction

There is very strong evidence from randomized controlled trials that there is gender bias in student ratings. Within the same course, if students think their instructor is female (a perception that can be randomly assigned in online classes), they rate the instructor more poorly than if they think the instructor is male. The discrepancy can be as large as half a point on a 5-point scale (e.g., https://thekeep.eiu.edu/cgi/viewcontent.cgi?article=1509&context=jcba).

Race has been more difficult to study. There are more racial/ethnic groups than gender groups, and race is confounded with issues such as immigration status and language skills. A small replication of the design described above, manipulating the perception of an instructor's race in an online class, found evidence of race bias with a much smaller effect size than gender bias. A very thoughtful discussion of this study, and of this entire body of literature, can be found here: https://www.cambridge.org/core/journals/ps-political-science-and-politics/article/exploring-bias-in-student-evaluations-gender-race-and-ethnicity/91670F6003965C5646680D314CF02FA4

That said, both gender bias and race bias are rampant within the written comments on student ratings. This is why written comments are not included in personnel files, but are instead seen only by the instructor and department chair. In addition, gender and race bias are much bigger problems when ratings are generated by low-quality instruments that have not been created by social scientists and vetted for reliability and validity. RateMyProfessor ratings are a classic example of ratings that are not reliable or valid, and therefore quite vulnerable to bias. The Fresno State SRI Questionnaire was created by scholars with expertise in survey construction and tested thoroughly for reliability and validity. We have not found evidence of gender or race bias on our own campus using our new instrument, but we continue to review institutional data for these problems.

There are some calls within academia to abolish the use of student ratings altogether because of this evidence of bias. In a recent survey of Fresno State faculty, we found that only 15% of respondents take this position. Furthermore, 95% of respondents report that they have used information from student ratings to improve their own teaching. And the faculty union has never taken a position against student ratings. Therefore, we think it is unlikely that student ratings are going away on the Fresno State campus. Abolishing them would also remove students' voices entirely from the evaluation of instruction and from the process of improving it. The Student Ratings Committee hopes to generate discussion on the Fresno State campus about how we want to address this issue while continuing to honor the voices of our students.

Some student ratings are surely just popularity contests. When the question is totally vague (e.g., "how satisfied are you with this instructor?"), the answer is bound to be a general impression, because that is what was asked for. Ratings based on items like this are not related to how much students actually learn (e.g., https://www.sciencedirect.com/science/article/abs/pii/S0191491X16300323).

Our Student Ratings Committee strove to create an instrument for student ratings that would be different from this. The Fresno State Student Ratings of Instruction instrument is based on the following principles: 

Students cannot accurately report:

  • how much they learned, because human beings are not good reporters of this, especially when the learning is new. To learn more about why, read about the Dunning–Kruger effect: https://www.nytimes.com/2020/05/07/learning/the-dunning-kruger-effect-why-incompetence-begets-confidence.html
  • invisible things, like how much their faculty care about students or how knowledgeable faculty are in their fields.

Therefore, the FSSRI does not include items such as these. 

But students can report:

  • Whether or not THEY understood things like the purpose of their assignments, how they would be graded, and whether their questions were welcome. Their understanding tells us whether our efforts at conveying these things were successful.
  • About our directly observable behaviors. If those behaviors are known to produce learning, as demonstrated in published empirical research, then it's worth asking students whether faculty did those things.

Therefore, the FSSRI includes only items such as these. 

While some student ratings instruments may be popularity contests, ours is not.