For the 2018-2019 academic year, ICaP has two extensive assessment projects underway. First, we are continuing our own programmatic assessment based on the external review completed by the Council of Writing Program Administrators’ Consultant-Evaluator Service in 2017. ICaP instructors will recognize this as our common assignment initiative. We’re currently scheduling two program-wide norm, read, and rate sessions this semester which will allow us to finalize (a) the selection of the common assignment to be implemented in all ICaP courses in AY19–20 and beyond, and (b) a comprehensive assessment plan which will include both short- and long-term measurements of ICaP’s effectiveness.
Second, we are completing an assessment of English 106 and 108 focusing on their role in Purdue’s Undergraduate Core Curriculum (UCC) Foundational Outcomes, specifically written communication and information literacy. This assessment draws on the direct measurement of student writing ICaP instructors are supporting through our norm, read, and rate sessions, and adds background to showcase how ICaP policies like syllabus review, as well as our assessment efforts, ensure we meet and exceed Purdue’s UCC Foundational Outcomes.
In a follow-up report, ICaP will provide the additional data requested for the foundational outcomes assessment.
My staff and I would like to take this opportunity to thank the student workers, instructors, and administrators who have helped us with these assessment efforts. Without our “grassroots curricular assessment model” (Conti, LaMance, and Miller-Cochran, 2017), our assessment efforts would not have been as rich or as rewarding as they are now. And I would like to highlight Derek Sherman’s work as Assessment Research Coordinator: he’s built on previous efforts extremely well, and I’m looking forward to our next steps.
Attachments: Foundational outcomes assessment preliminary report (as PDF, 284kb) and appendices (as a single PDF, 5.9mb, and also as separate PDFs).
Everything instructors need for spring syllabus preparation is now ready. We’ve made no changes to common assignment instructor guides, and we’ll be using the same syllabus review checklist from the fall. Remember, all ICaP courses must include one of these five common assignments in Spring 2019. (For those teaching online 106, English 106-Y, we’ve already shared materials by email.)
Please use the checklist in the construction of your syllabus to ensure you’ve included all the necessary details. If you get stuck, check Linda Haynes’ syllabus template, developed last fall. (Again—no changes.) This template, based on materials given to new instructors, offers everyone a second way to check their syllabi and provides robust examples of policies and possibilities. Not sure how to frame something? Check the template for examples.
There are two changes instructors should keep in mind, however:
First, instead of emailing syllabi to Dilger, we’ll ask you to fill out this Qualtrics form, which allows you to submit multiple syllabi at once (if necessary). We think this will make processing syllabi easier for us.
The second change is a required addition for your calendars. In Spring 2019, we will be asking all ICaP instructors to participate in two program-wide assessment norming, reading, and rating sessions. We will have sessions both before and after Spring Break, with multiple time slots to minimize schedule conflicts.
Mon Feb 25, 12:30p to 3:00p
Thu Feb 28, 9:00a to 11:30a
Fri Mar 01, 10:30a to 2:00p
Tue Apr 02, 9:00a to 11:30a
Wed Apr 03, 2:30p to 5:00p
Please plan to attend at least two of these five sessions. If necessary, you may cancel classes to free up the time needed to participate. We will share locations and specifics at Convocation (including some incentives for participation). For now, please make the appropriate plans in your course calendars.
Need something else? Let us know. We welcome instructor feedback and continued suggestions for resources which can support teaching.
We have completed the pilots of the ICaP common assignments. In Fall 2017, ICaP administrators, in conjunction with the grad student-run Pedagogical Initiatives Committee (PIC), began developing six different common assignments to pilot in English 106 and 106-I courses during the Spring 2018 semester. This decision was made largely in response to the Council of Writing Program Administrators' (CWPA) external review of our program a year prior, which suggested a common assignment could be a good way to introduce more consistency to our diverse syllabus approach system while preserving instructor freedom.
Since then, a team led by and consisting entirely of graduate students across the English department has collectively developed and taught, as well as rated and analyzed samples of, six different common assignments piloted by 39 instructors and completed by more than 780 Purdue composition students. From the outset, this project strived to be a grassroots, bottom-up effort so that those with the most skin in the game had a seat at the table. At every stage of this project, we tried to engage with grad students and lecturers in order to cultivate a localized assessment initiative attuned to the actual experiences of ICaP instructors.
We learned a great deal from these pilots, and we have taken that knowledge into account to make evidence-based decisions about changes to the common assignment options going forward. Because participation in these pilots was voluntary, our findings should be interpreted with caution; in particular, issues with sample size and randomization limit their generalizability. Nonetheless, the data we did gather and analyze helped us start assembling the larger picture, and we used appropriate caution when making final decisions. As the common assignment component of ICaP becomes mandatory this semester, the assignments have been updated to address the most pressing issues we encountered during the pilots.
The assessment report, linked below, contains a full write-up of the conditions leading to the advent of the common assignments, the process undertaken in developing them, the findings of our initial assessment efforts, and the changes we made and recommend making going forward. We share this report to maintain the grassroots spirit and transparency of the bottom-up assessment project we set out to build. Any questions, comments, or concerns should be directed to Daniel Ernst at firstname.lastname@example.org. Thanks.
The information below answers most of the questions from the assessment Group-Think-Share session. We transcribed these questions from the whiteboards and added questions received via email. On Tuesday, September 4, we will share a handout that outlines the protocol for turning in your completed common assignments.
We have included some general guidelines for you and students as you begin the common assignments in the General Questions section. If you have further questions, please let Derek Sherman know via email@example.com.
General Questions that apply to all four common assignments:
How do we structure assignment sheets and rubrics? What actually needs to be on the assignment sheets vs. rubrics?
To be transparent with students about the common assignment, we ask that you align your assignment sheet AND rubric to the outcomes listed in your common assignment’s instructor’s guide found here. Please use the common assignment outcomes-based rubric as a starting point in the creation of your assignment sheet and rubric. As long as your assignment sheet and rubric show students the outcomes that determine their grade, your assignment and rubric are aligned with ICaP assessment.
How should students submit assignments? In what format?
It’s easiest for everyone if students submit assignments via Blackboard in a .doc or .docx format. Even if you don’t use Blackboard extensively, please ask students to submit their assignments there so that you can easily download and share completed work with us. Electronic submissions on Blackboard and in a .doc or .docx format make anonymizing for assessment purposes easier.
Do we need to share our rubric and assignment sheet as part of assessment?
We would like you to share your rubric and assignment sheet with us so that we can understand how various instructors approached these common assignments. However, we will use the outcomes-based rubric for assessment purposes.
Would students recognize a logical progression of assignments?
This depends on how each instructor frames their course. Since we do not want to force a standardized curriculum on English 106/108 instructors, instructors must be transparent with their students about how this assignment is scaffolded into the curriculum. If students are made aware of this progression in the syllabus and in classroom instruction, they will recognize a logical progression of skills and assignments.
Can the research essay be argumentative?
Yes, the research-based essay can be argumentative. However, it is not required to be argumentative.
What is the relevance of the research-based essay beyond 106?
ICaP outcomes address research writing in several ways, so this essay is meant to accompany our diverse teaching approaches. ICaP has a strong foundation of teaching research-based skills that programs across campus look for in their students, and this assignment was built to appeal to faculty and programs who want students with those skills. It introduces students to academic research to better prepare them for future research-based assignments. Since Purdue is also striving for more undergraduate research, the essay's focus on research helps students join that movement, and it helps ICaP promote itself as a program that builds research-based skills.
Should instructors use solely secondary or primary sources — or will a mix of both source types be acceptable?
Instructors may choose to include all primary or secondary sources, or they may choose to have a mix of both. No matter what, students should be expected to summarize, analyze, and synthesize at least five sources in total, according to the research-based essay instructor guide, in whatever format you, the instructor, expect.
How will students find, use, and integrate credible sources?
Credibility varies depending on the student's or class's topic and the type of research the instructor chooses and values. For secondary research, students become acquainted with Purdue Libraries' databases as they search for peer-reviewed articles, immersing them in academic research. Primary research, however, varies depending on the research method and participant(s) chosen; instructors should therefore provide guidance on what counts as credible research. Often, only interviews or surveys are feasible given the limited time in 106/108.
How are analysis, synthesis, and summary evaluated differently?
These are the three skills that often accompany academic work; for ICaP's assessment purposes, therefore, we are looking at these skills holistically (see ICaP outcome three: "Critically think about writing and rhetoric through reading, analysis, and reflection"). By holistically, we mean that we want to see whether students are able to summarize, analyze, and synthesize academic sources cohesively in one academic paper. We are not seeking to isolate these skills or assess them individually.
Is there still a pretest and posttest for the rhetorical analysis essay?
Yes, please have students complete a pretest without any instruction. After teaching the rhetorical analysis unit, students should complete a second rhetorical analysis. Each essay should be approximately three pages in length.
How do we frame “rhetorical?” Is it important to focus on terms and strategies?
The big issue here is that we want to make sure students go beyond analyzing solely the text and the author's use of rhetorical/literary devices (e.g., ethos, pathos, logos, simile, metaphor). Students should instead analyze how the text and the author's use of those devices affect the context and audience. The text, consequently, should not be viewed in isolation but as working among and against the author, audience, and larger context.
What are the preferable methods for draft collection?
Since our assessment includes evaluating students' rough drafts and whether they have addressed instructors' comments and shown growth per ICaP Outcome Four (i.e., "Provide constructive feedback to others and incorporate feedback into their writing"), it is important that students save a copy of their rough drafts. Please ask students to save their draft as a Word document (e.g., ResearchEssay_Draft) and then create a new document for their final essay (e.g., ResearchEssay_Final).
What should be the portfolio’s medium?
We are looking into Purdue’s new portfolio tool and other options and will have an update as soon as we can. We’re hoping to offer several methods. We welcome your suggestions.
What is ICaP’s most important outcome for the professional email?
Rhetorical awareness of the situation is the most important outcome, with other outcomes such as purpose and persuasion built in.
How flexible is the timing? When during the semester can it be assigned?
Per our discussion at Convocation, the timing is up to the instructor. If you can fit the assignment into an already existing assignment, please go ahead and include it whenever you teach that assignment. As Alisha Karabinus noted, this assignment was originally a kickstarter for rhetorical awareness, but you may include it wherever you see fit.
What are the two audiences?
The instructor guide says the following regarding audiences: “In this assignment, you [students] write two distinct emails for two distinct situations:
In the first email, you will write to an instructor about a missed assignment or exam you would like to attempt to make up. You will create your own reasons for the make-up.
In the second email, you will write to a fellow student with whom you’re working on a group project that isn’t going so well. This email is meant to help establish better deadlines, clarity, communication, teamwork, or any of the above—you can invent a situation for this email or consider a problematic group project you’ve worked on in the past to create your reasons for this email.”
Is 200 words a lot for an email?
You are welcome to suggest a lower maximum as long as the focus remains on achieving the outcomes on the rubric (i.e., addressing the audiences effectively, etc.).
We would like to see student samples.
Currently, we don’t have any samples that fit our two new audiences. Because this assignment was updated this summer, our previous samples no longer coincide with this update. The professional emails that students produce in Fall 2018 may serve as samples for the future.
What are the results from the previous semester’s assessment?
How is the APE exercise relevant to the professional email assignment?
Linda Haynes’s APE exercise is designed to be a fun, low-stakes activity which teaches editing and efficiency. While it certainly isn’t an essential activity, many instructors have had success working it in during this unit as a way to make the editing process visible and accessible.
How is ICaP quantifying positive improvement for the professional email?
In terms of student improvement, growth cannot be quantified because the two audiences are different scenarios and we cannot have a measurement of growth without a pretest. Because raters from the professional email’s prior assessment varied in their definition of an effective email, we are hoping that this more focused audience approach will help us establish a more consistent definition of an “effective email.” If a more consistent definition is established, we may be able to move forward in quantifying growth with a pretest and posttest in the future. However, our goal right now is to measure achievement of the outcomes and not individual student growth.
How do we make a short assignment feel more substantial?
The email is a shorter assignment, but it is also effective in teaching rhetorical situations and tailoring one's message to one's audience. Instructors, therefore, may choose to teach this as an introduction to rhetorical awareness at the beginning of the semester, or they may embed it within a current unit. Ultimately, scaffolding this assignment in with others is the best way to make it feel more substantial.
Is it pretest and posttest?
No, this is two separate emails to two separate audiences.
How many drafts are required?
This is up to the instructor. Instructors may guide students through sample emails first and then accept only one draft. Or, instructors may conduct peer review sessions on multiple drafts. The choice is up to the instructor.
How many class sessions should be dedicated to the professional email?
This, too, is up to the instructor. The assignment is not meant to take up a considerable amount of time; scaffolding it into a current unit may help you establish how many class sessions to dedicate to it.
Our second generation of common assignment pilots are ready to go! Remember, all ICaP courses must include one of these five common assignments in Fall 2018 (excepting online 106, English 106-Y, which draws from a different set particular to those courses).
We’ve made a few updates based on instructor feedback and the reading and rating sessions completed in spring and summer:
We are dropping the reading annotations assignment from this semester’s pilot.
The literature review has been replaced by a research-based essay.
The email assignment has been adjusted to require two different emails to two entirely different recipients (and has been further adjusted for content).
The information literacy pre/post test assignment has not changed.
We’ve updated the common assignments page accordingly. All the new instructor guides have been adjusted to streamline the assignments for assessing program outcomes. The instructor guides also now include the rubrics used for assessment so instructors may better see how these common assignments are being used to assess the program. Each pilot guide still includes resources and ideas for how to incorporate the assignment into your course.
We hope the changes make both assignments and instructional guides more clear for instructors. At Convocation, we’ll say a bit about the assessment data which supported these changes, and dedicate time to breakout groups so instructors working on common assignments can get assistance, share materials, and make plans for the semester. As always, if you need assistance developing your syllabus or integrating a pilot assignment into your assignment sequence, reach out—the ICaP support staff is happy to help.
I want to thank Carrie Grant, Alisha Karabinus, Daniel Ernst, and Derek Sherman for their help finalizing this second generation of pilots. I’m also grateful to everyone who helped us develop the assignments or participated in the first pilot in Spring 2018: Bianca Batti, Mac Boyle, Elizabeth Geib, Patrick Hoburg, Mitchell Jacobs, Amanda Leary, Alex Long, Eugie Ruiz, Margaret Sheble, Phuong Tran, and Sharry Vahed.
Hi there. It’s your friendly Assessment Research Coordinator, Daniel Ernst, checking in with an update on things we’ve learned so far from some of the 2018 ICaP common assignment pilots. A group of ICaP instructors recently read and rated a very popular first-year writing assignment, the rhetorical analysis. Raters not only came to consensus about their ratings, but also found strong evidence of student improvement.
Assessment methods and results
For this assessment, we randomly selected 23 pairs of essays from a pool of 60 submitted by instructors. Each pair included two rhetorical analysis essays, one written as a diagnostic pre-test and one written as a post-test concluding a four-week class unit on rhetoric. In total, 46 essays (23 pre-tests and 23 post-tests) were read and rated at least twice by eight graduate student instructors and one professor. Essays were de-identified so raters could not know who wrote them, which class or instructor they were written for, or whether they were a pre-test or a post-test. Raters used a simple rubric built from ICaP outcomes one and three, which focus on rhetorical and analytical knowledge and critical thinking skills.
Comparing the ratings from the nine raters showed substantial agreement and statistically reliable results (Pearson’s product-moment correlation coefficient of .73). The highest essay scored 11/12; the lowest scored 3/12. Most pre-tests scored at or below 6/12; most post-tests scored at or above 7/12. Here’s what we found:
There is a highly significant (p < .001) improvement in mean scores between the pre- and post-test essays. We can confidently say the improvement in mean scores is not likely due to chance but rather to the effect of the treatment: the class and the concepts taught. Additionally, the improvement is not just significant but meaningful: the Cohen’s d value of 1.04 indicates the distribution average improved by one standard deviation from pre-test to post-test. This means that a pre-test essay scoring at the 84th percentile of all pre-tests would score at just the 50th percentile of all post-tests. Finally, the post-test mean score (7.63 +/- .41) sits right at the midpoint (7.5) of our rating scale (3-12), indicating a distribution of student performance around the true mean of our scale.
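For readers curious how the percentile claim follows from the effect size, here is a minimal Python sketch. The d value of 1.04 comes from our analysis above; the normal-distribution model is an assumption made for illustration.

```python
# Sketch, not ICaP analysis code: relates Cohen's d to the percentile
# claim above, assuming normally distributed scores.
from statistics import NormalDist

d = 1.04  # standardized mean improvement (Cohen's d) reported above

# An essay one SD above the pre-test mean sits at the ~84th percentile of
# pre-tests; the same score sits (1 - d) SDs above the post-test mean.
pre_pct = NormalDist().cdf(1.0)       # ~0.84 of pre-tests score below it
post_pct = NormalDist().cdf(1.0 - d)  # ~0.48, i.e. roughly the 50th percentile
print(f"{pre_pct:.2f} {post_pct:.2f}")
```

In other words, under a normal model an essay that outperforms 84% of pre-tests is merely average among post-tests, which is what a one-standard-deviation shift means in practice.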
So far, so good… both methods and results
Although the sample is limited in certain ways (size, variety, degree of randomness), we are seeing evidence of significant and meaningful growth in writing quality on rhetorical analysis assignments over the course of a single unit in ICaP courses. The evidence is bolstered by the strength of the rating instrument, which almost perfectly sorted pre- and post-test writing and scored the mean of post-test essays at its exact midpoint, as well as by the high correlation coefficient (.73) obtained by the nine raters using the ICaP Outcomes-derived scaled rubric. As we begin to develop assessment methods that will be applied program-wide, these results suggest the ICaP Outcomes and Detailed Learning Objectives can serve as source material for designing assignment rubrics, at least for the outcomes on rhetorical and analytical knowledge.
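For context, the agreement figure we cite is a plain Pearson correlation between paired ratings of the same essays. A minimal sketch follows; the scores below are invented for illustration and are not our assessment data.

```python
# Sketch with invented scores (3-12 scale), not ICaP data: Pearson's r
# between two raters' scores for the same set of essays.
from statistics import mean, stdev

rater_a = [5, 7, 9, 4, 8, 10, 6, 7]   # hypothetical first readings
rater_b = [6, 7, 10, 3, 8, 9, 6, 8]   # hypothetical second readings

def pearson_r(x, y):
    """Sample Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

print(round(pearson_r(rater_a, rater_b), 2))
```

A coefficient near 1 means the two raters rank and space the essays similarly; values around .7 and above are generally read as substantial agreement for holistic scoring.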
To be sure, we should expect such strong results given the design of this specific assessment: any writing measured before concepts and skills are taught and again after should demonstrate significant improvement. But by building our rubric and scale directly from the ICaP Outcomes, we also show that instructors are meeting the outcomes related to rhetorical and analytical knowledge as our program currently articulates them. Content knowledge about rhetorical concepts and the ability to critically analyze texts are fundamental to any writing class. We’re pleased not only with the scores, but also with the conversations our readers had as we rated sample essays and discussed the rubric ICaP staff and I developed.
We are currently preparing a full report on the common assignment pilots that will use results and data from the rating sessions to make evidence-based decisions on the future direction of program-wide common assignments and the culture of assessment within ICaP. The next steps going forward include revising the rhetorical analysis and other assignment instructor guides for our second generation of common assignments, as well as refining rubrics and assignment sheets. We welcome feedback and questions from any participating instructors or those interested.
Thanks again to all instructors and raters involved in this assessment, including but not limited to Parva Panahi, Mac Boyle, Deena Varner, Libby Chernouski, Julia Smith, Joe Forte, Carrie Grant, April Pickens, and Bradley Dilger.
We are continuing our initiative to use assessment data to refine our curriculum and learn more about the success of our approaches to teaching English 106 and 108. Last Thursday, we held our second reading and rating session for common assignment pilots. This was the second reading and rating session led by ICaP assessment research coordinator Daniel Ernst, following a similar event working with a professional emails assignment.
We began by discussing the rubrics used to rate the rhetorical analysis assignments under consideration and isolating potential issues for discussion. We then read and rated four assignments, discussing the results each time, which helped everyone think about the rubrics similarly and gave us ways to refine them over the long term. Readers then spent about an hour rating assignments, with each assignment receiving at least two readings.
After considering the results of these reading and rating sessions and instructor feedback on the pilots, we’ve decided on the following process for AY2018–19:
We will have a second generation of multiple pilots in Fall 2018. All ICaP instructors will participate by selecting common assignments, implementing them, submitting data for assessment, and participating in both reading/rating sessions and other measurements.
We will use the same six assignments as in Spring 2018, but assignment materials themselves will be updated to reflect what we’ve learned from our pilots. Requirements for each assignment will be released by July.
We will ask that instructors follow the templates as closely as possible. Assignment materials will describe permissible modifications (e.g., selecting texts or changing the timing of deliverables).
Assignments must be graded and assigned points. This helps ensure students do their best work. Individual instructors will determine how the common assignments are integrated into their course assignment sequences and grading structures.
Rubrics will be provided for each assignment. Instructors can customize the format of the rubric to fit with the rest of their course (i.e., point values, rating scales, etc.), and may add additional assessment criteria, but all common assignment rubric criteria must be represented in the customization.
The pilots have already been very successful in helping us understand how to balance the needs of syllabus approaches with the overall purpose of the common assignment: ensuring we have a data set from each semester which can be evaluated against ICaP outcomes and in comparison to other semesters. We have some more ideas about Fall processes which we’ll be sharing soon — not to mention the results from the assessments themselves.
We are grateful for the ICaP instructors who have been involved with this work since the beginning and we hope the strong participation will continue. Thank you to our readers: Parva Panahi, Allegra Smith, Amanda Leary, Ingrid Pierce, Mac Boyle, Deena Varner, Libby Chernouski, Julia Smith, and Joe Forte. Thanks as well to the many instructors who are participating in the pilots and focus groups, and to Daniel, Carrie Grant, and April Pickens for preparation, data processing, and continuing analysis.