Common assignment: assessing the rhetorical analysis

Hi there. It’s your friendly Assessment Research Coordinator, Daniel Ernst, checking in with an update on what we’ve learned so far from the 2018 ICaP common assignment pilots. A group of ICaP instructors recently read and rated a very popular first-year writing assignment, the rhetorical analysis. Raters not only reached consensus in their ratings but also found strong evidence of student improvement.

Assessment methods and results

For this assessment, we randomly selected 23 pairs of essays from a pool of 60 submitted by instructors. Each pair included two rhetorical analysis essays: one written as a diagnostic pre-test and one written as a post-test concluding a four-week class unit on rhetoric. In total, 46 essays (23 pre-tests and 23 post-tests) were read and rated at least twice by eight graduate student instructors and one professor. Essays were de-identified so raters could not know who wrote them, which class or instructor they were written for, or whether they were a pre- or post-test. Raters used a simple rubric built from ICaP outcomes one and three, which focus on rhetorical and analytical knowledge and critical thinking skills.

Comparing the ratings from the nine raters showed substantial agreement and statistically reliable results (Pearson’s product-moment correlation coefficient of .73). The highest essay scored 11/12; the lowest scored 3/12. Most pre-tests scored at or below 6/12; most post-tests scored at or above 7/12. Here’s what we found:

[Figure: pre-test (5.78) and post-test (7.63) mean scores]
Mean pre-test score: 5.78; mean post-test score: 7.63 [t(22) = 4.25, p < .001; Cohen’s d = 1.04]
There is a highly significant (p < .001) improvement in mean scores between the pre- and post-test essays. We can confidently say the improvement is unlikely to be due to chance and is instead likely due to the effect of the treatment: the class and the concepts taught. The improvement is not just significant but meaningful: the Cohen’s d value of 1.04 indicates the average score improved by one standard deviation from pre- to post-test. This means that a pre-test essay scoring at the 84th percentile of all pre-tests would score at just the 50th percentile of all post-tests. Finally, the post-test mean score (7.63 ± .41) sits right at the midpoint (7.5) of our rating scale (3-12), indicating a distribution of student performance centered on the true mean of our scale.
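For readers who want to see the arithmetic, here is a minimal sketch of this analysis in Python. The score arrays are hypothetical stand-ins for the actual essay ratings (which are not published here), and the use of scipy and the pooled-standard-deviation convention for Cohen’s d are my assumptions, not a record of our exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical paired scores for 23 students on the 3-12 rubric scale,
# standing in for the actual pre- and post-test essay ratings.
pre = rng.normal(5.78, 1.8, size=23).clip(3, 12)
post = rng.normal(7.63, 1.8, size=23).clip(3, 12)

# Paired-samples t-test: is the difference in mean scores reliable?
t, p = stats.ttest_rel(post, pre)

# Cohen's d for the paired design, computed here against the pooled
# standard deviation of the two distributions (one common convention).
pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
d = (post.mean() - pre.mean()) / pooled_sd
print(f"t({len(pre) - 1}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")

# The percentile reading of d = 1.04: assuming roughly normal scores,
# an essay one standard deviation above the pre-test mean sits at the
# ~84th percentile of pre-tests but only around the median of post-tests.
print(f"Against pre-tests:  {stats.norm.cdf(1.0):.0%}")         # ~84%
print(f"Against post-tests: {stats.norm.cdf(1.0 - 1.04):.0%}")  # ~48%
```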

[Figure: overlaid pre- and post-test score distributions]

So far, so good: both methods and results

Although the sample is limited in certain ways (size, variety, degree of randomness), we are seeing evidence of significant and meaningful growth in writing quality on rhetorical analysis assignments over the course of a single unit in ICaP courses. The evidence is bolstered by the strength of the rating instrument, which almost perfectly sorted pre- and post-test writing and placed the mean of post-test essays at its exact midpoint, and by the high correlation coefficient (.73) obtained by the nine raters using the scaled rubric derived from the ICaP Outcomes. As we begin to develop assessment methods that will be applied program-wide, these results suggest the ICaP Outcomes and Detailed Learning Objectives can serve as source material for designing assignment rubrics, at least for the outcomes on rhetorical and analytical knowledge.
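As a companion to the sketch above, here is one way an inter-rater correlation like that .73 figure can be computed when each essay is read twice. The rating pairs below are hypothetical, and exactly how the nine raters’ scores were paired for the calculation is an assumption on my part.

```python
import numpy as np
from scipy import stats

# One hypothetical row per essay: [first rating, second rating] on the
# 3-12 rubric scale. Real essays were read and rated at least twice.
ratings = np.array([
    [7, 8], [5, 5], [9, 10], [4, 3], [6, 7], [8, 8],
    [10, 11], [5, 6], [7, 6], [3, 4], [8, 9], [6, 6],
])

# Pearson's product-moment correlation between the two readings of each
# essay; values near 1 mean raters ordered the essays very similarly.
r, p = stats.pearsonr(ratings[:, 0], ratings[:, 1])
print(f"inter-rater r = {r:.2f} (p = {p:.3f})")
```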

To be sure, we should expect such strong results given the design of this specific assessment. Any writing done before learning concepts and skills and then measured again afterward should demonstrate significant improvement. But by building our rubric and scale directly from the ICaP Outcomes, we also show that instructors are meeting the outcomes related to rhetorical and analytical knowledge as our program currently articulates them. Content knowledge about rhetorical concepts and the ability to critically analyze texts are fundamental to any writing class. We’re pleased not only with the scores but also with the conversations our readers had as we rated sample essays and discussed the rubric ICaP staff and I developed.

We are currently preparing a full report on the common assignment pilots that will use results and data from the rating sessions to make evidence-based decisions about the future direction of program-wide common assignments and the culture of assessment within ICaP. Next steps include revising the rhetorical analysis and other assignment instructor guides for our second generation of common assignments, as well as refining rubrics and assignment sheets. We welcome feedback and questions from any participating instructors or anyone else interested.

Thanks again to all instructors and raters involved in this assessment, including but not limited to Parva Panahi, Mac Boyle, Deena Varner, Libby Chernouski, Julia Smith, Joe Forte, Carrie Grant, April Pickens, and Bradley Dilger.