
Eportfolios to Change Assessment

Eportfolios@Macaulay
Collect, Reflect, Present

Our motto for our eportfolios at Macaulay (one of our mottoes–I tend to proliferate mottoes) is “Collect, Reflect, Present.”  That spells out (not necessarily in order of priority) what I thought students would be doing with eportfolios, and those goals, from the beginning, have shaped the choices in building our platform. But there is another purpose to which eportfolios are frequently put, and it’s a purpose that I want to untangle a little in this post.

We don’t say “Collect, Reflect, Present, Assess.”  (Although it could be easily argued, and I’ll argue a bit below, that “Reflect” really can be a kind, an excellent kind, of assessment).  We didn’t implement a system or a platform that was primarily designed to assess our students’ or our program’s fulfillment of predetermined or externally structured criteria.

As I mentioned in my previous post, we have a system that works more organically, more flexibly, allowing students to determine for themselves what (if anything) will happen with their eportfolios, what they will collect, reflect, and present, and even what they will assess.  And as I described in that previous post, that choice of approach has had implications for what quality of eportfolios we get (in a very good way), and implications for what quantity of eportfolios we get (in a way we’re working on improving).

But we have also reached a point where assessment can be very productive–not the kind of assessment that checks whether standards are met or pre-designed structures filled in. I would call that kind of assessment “measuring up” (as in “does this eportfolio measure up?”). We’re at the point where we can do something more difficult, but (possibly) more substantive and more useful…“measuring” (as in “how can we describe what is happening with this eportfolio? What does it tell us about this student?”).

For a long time I had a sort of inferiority complex about the question of assessment with regard to our eportfolios, because our approach is really not well-suited to the kind of quantitative, universal, standardized approach that many people mean when they say “assessment.” Because what our students are doing with their eportfolios takes so many different forms, it isn’t easy to say “yes, this one measures up. No, this one falls short.”

And the more I thought about this, the more I began to think back to my own history with portfolio assessment (before the “e” even existed) as a writing teacher. I thought about how and why portfolio assessment entered writing instruction and where it came from. The whole point of portfolio assessment, originally, in writing instruction, was to provide alternate assessments–richer ones. More nuanced and complicated ones. To assess the things (like writing ability) that are NOT easily or accurately assessed by a single test, or a single score.

Portfolios in writing instruction were about growth, about process, about diversity. They were implemented specifically because the picture of a student as a writer cannot be reduced to just data or skills. Writing teachers sat with students’ portfolios for long periods of time. They looked at everything. They read drafts and students’ reflections on how drafts became final products. They read students’ thoughts about what each piece of writing said about the student as a writer. They thought about students’ choices, and their reasoning for those choices. They thought about what and why students were collecting. How and when they were reflecting on what they collected. Where and to whom and how they were presenting what they collected.

Portfolio assessment at its best can be qualitative assessment, formative assessment.  Not just summative assessment leading to a grade or a score or a single evaluation, but deep description leading to more process and more progress, feeding back into more recommendations and more learning for the student, for the eportfolio system, and for the program.

I’m not sure why I originally fell into a prejudice that this kind of deep assessment was somehow “soft” or “just anecdotal.” Somehow not “real” or “valid” assessment. I know very well from my own research and scholarship that true observation and deep description are no less effective or important than numbers and graphs or rubrics and scores. In fact, in many cases, in many disciplines and types of research, from participant observation fieldwork to the Scholarship of Teaching and Learning, I know very well that measuring is often superior to measuring up–more comprehensive, more insightful, more detailed, more useful for planning and changing and learning, even more accurate and “objective” and transferable.

Rather than scoring on a rubric, or checking off items on a list of competencies (and I do not want to imply that “measuring up” like that is never valuable), the kinds of questions we can be asking about eportfolios (and are starting to ask)–measuring, studying, deep assessment questions–are the kinds of questions that always get asked in the Scholarship of Teaching and Learning. Pat Hutchings (in the introduction to Opening Lines: Approaches to the Scholarship of Teaching and Learning) frames them like this:

  • “What works?”
  • “What is?”
  • “Visions of the possible”
  • “Theory building” questions

These are the kinds of questions that eportfolios like ours (and many others; I don’t mean to sound exceptionalist here) can answer so well, or at least push toward greater depth.

So the techniques we’re planning and will be implementing over the next months for assessing eportfolios will take that kind of path.  We will work by selecting a representative (random or intentionally selected–there is validity in both methods) sample and doing some deep description, including content analysis, tagging, and coding for comparison.  Not starting with the rubric, but starting with the eportfolios (“What is?”) and moving from them to the careful and nuanced judgments (never complete, never finished) about “what works?” and then to those “visions of the possible” and “theory building.” We can see what is happening at certain points, and then see how that changes over time, and get a very accurate and well-developed sense of what is happening system-wide, or with individual students. We can do some almost-ethnographic work with the eportfolios, and start to measure how they work, what they can do, and we can see pathways that students are taking that we might not even have imagined.
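To make the mechanics of that concrete (the mechanics only–the coding itself stays human judgment), here is a minimal sketch in Python of the sampling-and-tallying workflow. Everything in it is hypothetical: the Portfolio record, the descriptive codes, and the toy data stand in for whatever codebook a team of raters would actually develop.

import random
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical record: one student eportfolio after a human rater has coded it.
# The codes are descriptive keywords, not scores.
@dataclass
class Portfolio:
    student_id: str
    cohort: int                       # e.g., entering class year
    codes: list = field(default_factory=list)

def draw_sample(portfolios, n, seed=None):
    """Random sample for review; an intentionally selected sample would replace this."""
    rng = random.Random(seed)
    return rng.sample(portfolios, min(n, len(portfolios)))

def code_frequencies(sample):
    """Tally how often each descriptive code appears across a sample."""
    counts = Counter()
    for p in sample:
        counts.update(set(p.codes))   # count each code once per portfolio
    return counts

def compare_cohorts(portfolios, n, year_a, year_b, seed=0):
    """Compare code frequencies between two cohorts: how does 'what is?' change over time?"""
    a = [p for p in portfolios if p.cohort == year_a]
    b = [p for p in portfolios if p.cohort == year_b]
    return (code_frequencies(draw_sample(a, n, seed)),
            code_frequencies(draw_sample(b, n, seed)))

# Toy data with invented codes a rater might assign.
portfolios = [
    Portfolio("s1", 2010, ["reflection-on-process", "multimedia"]),
    Portfolio("s2", 2010, ["audience-awareness"]),
    Portfolio("s3", 2011, ["reflection-on-process", "audience-awareness"]),
    Portfolio("s4", 2011, ["multimedia", "reflection-on-process"]),
]

freq_2010, freq_2011 = compare_cohorts(portfolios, n=2, year_a=2010, year_b=2011)
print("2010:", freq_2010.most_common())
print("2011:", freq_2011.most_common())

The point of the sketch is only that human-assigned codes, once collected, can be tallied and compared over time or across the whole system; assigning the codes, and interpreting the counts, remains qualitative work.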

To assess eportfolios in this way will not be rapid or efficient or automatic. It will be time-consuming, and it will push us to question what we’re looking at, what we’re looking for, what assumptions we’re bringing, and what conclusions we’re reaching for. It will push us to think about teaching and learning in deeper ways than “value added” or “standards-driven” or even “general education.” And we will hew more closely to the origins of eportfolios in portfolio assessment, in authentic assessment. When you select a sample and discuss and think about how it’s being sampled, and when you ask a team of experienced, thoughtful raters to look at each eportfolio in the sample, and not just “score” it, but code it with keywords, describe it and analyze it, then you’re measuring. And you’re getting a rich picture of student learning, with real results that can be applied beyond a grade or a score (for a program or a student). Some of this (once the codes are developed) can be done by content analysis software, as in the sketch below. But the bulk of it is human judgment. And that’s what’s really good about it.
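To make that “content analysis software” step concrete: once the raters have developed a codebook, even a very simple keyword matcher can surface candidate codes for a human to confirm or reject. Another minimal sketch, with an entirely hypothetical codebook and sample entry:

import re

# Hypothetical codebook: descriptive code -> keywords/phrases that suggest it.
CODEBOOK = {
    "reflection-on-process": ["draft", "revise", "looking back", "process"],
    "audience-awareness": ["reader", "audience", "viewer"],
    "multimedia": ["video", "audio", "image", "podcast"],
}

def suggest_codes(text, codebook=CODEBOOK):
    """Return codes whose keywords appear in the text (case-insensitive).
    These are suggestions only; a human rater confirms, rejects, or refines them."""
    lowered = text.lower()
    return [code for code, words in codebook.items()
            if any(re.search(r"\b" + re.escape(w) + r"\b", lowered) for w in words)]

entry = "Looking back at my first draft, I tried to imagine what a reader would need."
print(suggest_codes(entry))  # ['reflection-on-process', 'audience-awareness']

The matcher only narrows the raters’ attention; the describing and analyzing called for above is exactly the part that can’t be automated.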

Maybe there’s an important fight to be had.  When administrators (and I speak as one) start to ask for assessment, maybe we shouldn’t (maybe I shouldn’t) be too quick to bend and say “yes, let’s see if they measure up.” Maybe we can find a way to use eportfolios to help with the process that portfolio assessment in writing instruction started…to promote alternate assessment where learners and learning are explored and understood, not just rated and scored.

Some early thoughts, anyway–lots more to be said on this.

(An excellent article on this subject, one that influenced me a lot, is Minnes Brandes and Boskic’s 2008 piece from UBC, “ePortfolios: From Description to Analysis”: http://www.irrodl.org/index.php/irrodl/article/download/502/1050)


7 Comments

  1. Thanks! If you’re interested (and able to travel to Boston in July), registration should be opening very soon for the annual AAEEBL conference. Lots of good panels and presentations there, and my Monday workshop http://www.aaeebl.org/2012sotl might interest you, too. I’ll be posting about that when registration opens in the next couple of days.

  2. Hi Joe,

    Michael pointed me to your blog in response to a post I recently wrote about an ePortfolio project at the University of Mary Washington that I’m working on. You can read my post here: http://wrapping.marthaburtis.net/2011/10/20/the-eportfolio-jungle/.

    I would love to talk to you more about this and hear more about how you approach assessment of your ePortfolios.

    We’re dealing with assessment in our project, but I fear that more mundane requirements for institutional assessment (in part for reaccreditation) are threatening to consume the kind of meaningful portfolio assessment you’re describing here.

  3. Joe, I’m excited about the prospect of seeing eportfolio assessment get the opportunity to use this form of “measuring” vs. “measuring up.” My background is in Fine Arts, which has a long history of portfolios and critique, giving artists ample opportunity to reflect on their work with professionals in a deep way.

    If you are looking for any kind of support, let me know, as this idea really interests me. We need clearly developed and research-supported alternatives to the Sloan-C, VALUE, and Quality Matters types of rubrics. All of these have very important uses and will likely be part of CUNY’s assessment goals for web-based instruction, given what I’m hearing through the CAT. But I worry that this kind of assessment will create a neat set of numbers that assessors lean on too heavily, missing the richness in learning that exists.

    Thanks for the post; I look forward to seeing the project move forward.
