Common Assignments: A Year in Review

We have completed the pilots of the ICaP common assignments. In the Fall of 2017, ICaP administrators, in conjunction with the graduate student-run Pedagogical Initiatives Committee (PIC), began developing six different common assignments to pilot in English 106 and 106-I courses during the Spring 2018 semester. This decision was made largely in response to the Council of Writing Program Administrators' (CWPA) external review of our program a year prior, which suggested a common assignment could be a good way to introduce more consistency to our diverse syllabus approach system while preserving instructor freedom.

Since then, a team led by and consisting entirely of graduate students from across the English department has collectively developed and taught six different common assignments, and has rated and analyzed samples of the resulting student work. The assignments were piloted by 39 instructors and completed by more than 780 Purdue composition students. From the outset, this project strove to be a grassroots, bottom-up effort so that those with the most skin in the game had a seat at the table. At every stage of this project, we engaged with graduate students and lecturers in order to cultivate a localized assessment initiative attuned to the actual experiences of ICaP instructors.

We learned a great deal from these pilots, and we have used that knowledge to make evidence-based decisions about changes to the common assignment options going forward. Because participation in the pilots was voluntary, our findings should be interpreted with caution: the sample size and lack of randomization limit how far they generalize. Nonetheless, the data we gathered and analyzed helped us start assembling the larger picture, and we exercised appropriate caution when making final decisions. As the common assignment component of ICaP becomes mandatory this semester, the assignments have been updated to address the most pressing issues we encountered during these pilots.

The assessment report, linked below, contains a full write-up of the conditions leading to the advent of the common assignments, the process undertaken to develop them, the findings of our initial assessment efforts, and the changes we made and recommend making moving forward. We share this report to maintain the grassroots spirit and transparency of the truly bottom-up assessment project we set out to conduct. Any questions, comments, or concerns should be directed to Daniel Ernst at ernst9@purdue.edu. Thanks.

Assessment Report 2017-18

Common assignment: assessing the rhetorical analysis

Hi there. It’s your friendly Assessment Research Coordinator, Daniel Ernst, checking in with an update on what we’ve learned so far from the 2018 ICaP common assignment pilots. A group of ICaP instructors recently read and rated a very popular first-year writing assignment, the rhetorical analysis. Raters not only reached consensus in their ratings but also found strong evidence of student improvement.

Assessment methods and results

For this assessment, we randomly selected 23 pairs of essays from a pool of 60 submitted by instructors. Each pair included two rhetorical analysis essays: one written as a diagnostic pre-test and one written as a post-test to conclude a four-week class unit on rhetoric. In total, 46 essays (23 pre-tests and 23 post-tests) were read and rated at least twice by eight graduate student instructors and one professor. Essays were de-identified so raters could not know who wrote them, which class or instructor they were written for, or whether they were a pre- or post-test. Raters used a simple rubric built from ICaP outcomes one and three, which focus on rhetorical and analytical knowledge and critical thinking skills.
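For readers curious about the mechanics, here is a minimal sketch of how this kind of sampling and blinding can be set up. The pool size of 60 and sample of 23 pairs mirror the description above, but the ID scheme and everything else in the code are illustrative assumptions, not a record of our actual workflow.

```python
# Illustrative sketch: sample 23 pre/post pairs from a pool of 60 and assign
# anonymous codes so raters cannot tell who wrote an essay or whether it is a
# pre- or post-test. Naming conventions here are hypothetical.
import random

random.seed(2018)  # fixed seed only so the example is reproducible

pool = [f"pair_{i:02d}" for i in range(1, 61)]   # 60 submitted pre/post pairs
sampled_pairs = random.sample(pool, 23)          # 23 randomly selected pairs

# Split each pair into its pre- and post-test essay, then shuffle so the
# rating order carries no information about test type.
essays = [(pair, "pre") for pair in sampled_pairs] + \
         [(pair, "post") for pair in sampled_pairs]
random.shuffle(essays)

# De-identify: the key stays with the coordinator; raters see only the codes.
key = {f"essay_{n:03d}": essay for n, essay in enumerate(essays, start=1)}
for code in key:
    print(code)  # raters would see essay_001, essay_002, ... and nothing else
```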

Comparing the ratings from the nine raters showed substantial agreement and statistically reliable results (Pearson's product-moment correlation coefficient of .73). The highest-scoring essay earned an 11/12; the lowest earned a 3/12. Most pre-tests scored at or below 6/12; most post-tests scored at or above 7/12. Here's what we found:

Mean pre-test score: 5.78; mean post-test score: 7.63 [t(22) = 4.25, p < .001; Cohen's d = 1.04]
There is a highly significant (p < .001) improvement in mean scores between the pre- and post-test essays. We can confidently say the improvement in mean scores is not likely due to chance but is instead likely due to the effect of the treatment: the class and the concepts taught. Additionally, the improvement is not just statistically significant but meaningful: the Cohen's d value of 1.04 indicates the average of the distribution improved by one standard deviation from pre-test to post-test. This means that a pre-test essay scoring at the 84th percentile of all pre-tests would score at just the 50th percentile of all post-tests. Finally, the post-test mean score (7.63 ± .41) sits right at the midpoint (7.5) of our rating scale (3–12), indicating a distribution of student performance around the true mean of our scale.

[Figure: Overlaid distributions of pre- and post-test scores]
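To make the arithmetic above concrete, here is a minimal sketch of the pre/post comparison in Python. The two score lists are hypothetical stand-ins on the same 3–12 scale, not the pilot data, and the Cohen's d shown uses the pooled standard deviation, which is one common convention among several for paired designs.

```python
# Illustrative sketch of a paired pre/post comparison; scores are hypothetical.
import numpy as np
from scipy import stats

pre  = np.array([5, 6, 4, 7, 6, 5, 6, 7, 5, 4, 6, 6, 7, 5, 6, 5, 7, 6, 5, 6, 7, 5, 6])
post = np.array([8, 7, 6, 9, 8, 7, 8, 9, 7, 6, 8, 7, 9, 8, 7, 8, 9, 7, 8, 7, 9, 8, 7])

# Paired-samples t-test: is the mean change from pre to post reliably nonzero?
t_stat, p_value = stats.ttest_rel(post, pre)

# Cohen's d against the pooled standard deviation of the two distributions
pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
d = (post.mean() - pre.mean()) / pooled_sd

# A one-standard-deviation shift means a score at the pre-test 84th percentile
# (mean + 1 SD) lands at roughly the post-test mean, the 50th percentile.
print(f"t({len(pre) - 1}) = {t_stat:.2f}, p = {p_value:.4f}, d = {d:.2f}")
print(f"P(Z < 1) = {stats.norm.cdf(1.0):.3f}")  # ~0.841, i.e. the 84th percentile
```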

So far, so good… both methods and results

Although the sample is limited in certain ways (size, variety, degree of randomness), we are seeing evidence of significant and meaningful growth in writing quality on rhetorical analysis assignments over the course of a single unit in ICaP courses. The evidence is bolstered by the strength of the rating instrument, which almost perfectly sorted pre- and post-test writing and placed the mean of post-test essays at its exact midpoint, as well as by the high correlation coefficient (.73) obtained by the nine raters using the scaled rubric derived from the ICaP Outcomes. As we begin to develop assessment methods that will be applied program-wide, these results suggest the ICaP Outcomes and Detailed Learning Objectives can serve as source material for designing assignment rubrics, at least for the outcomes on rhetorical and analytical knowledge.
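As an aside on the reliability check itself, here is a minimal sketch of how a pairwise inter-rater correlation can be computed. The two score vectors are hypothetical double ratings of the same essays, not our raters' actual scores, and in practice the .73 figure reflects agreement across all double-rated essays rather than a single pair of raters.

```python
# Illustrative sketch: Pearson correlation between two raters' scores for the
# same set of essays (hypothetical values on the 3-12 rubric scale).
import numpy as np
from scipy import stats

rater_a = np.array([6, 8, 5, 10, 7, 9, 4, 11, 6, 8])
rater_b = np.array([7, 8, 6, 9, 7, 10, 5, 10, 6, 7])

r, p = stats.pearsonr(rater_a, rater_b)
print(f"Pearson r = {r:.2f} (p = {p:.4f})")  # higher r means closer agreement
```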

To be sure, we should expect such strong results given the design of this specific assessment: any writing done before concepts and skills are taught and then measured again afterward should show significant improvement. But by building our rubric and scale directly from the ICaP Outcomes, we also show that instructors are meeting the outcomes related to rhetorical and analytical knowledge as our program currently articulates them. Content knowledge about rhetorical concepts and the ability to critically analyze texts are fundamental to any writing class. We're pleased not only with the scores but also with the conversations our readers had as we rated sample essays and discussed the rubric ICaP staff and I developed.

We are currently preparing a full report on the common assignment pilots that will use results and data from the rating sessions to make evidence-based decisions about the future direction of program-wide common assignments and the culture of assessment within ICaP. Next steps include revising the rhetorical analysis and other assignment instructor guides for our second generation of common assignments, as well as refining rubrics and assignment sheets. We welcome feedback and questions from participating instructors and anyone else who is interested.

Thanks again to all instructors and raters involved in this assessment, including but not limited to Parva Panahi, Mac Boyle, Deena Varner, Libby Chernouski, Julia Smith, Joe Forte, Carrie Grant, April Pickens, and Bradley Dilger.