13 Comments
Oct 18, 2022 · Liked by Joshua Doležal

As someone who has not spent their entire career in academia, my observation is that assessment in academia suffers from the same flaws as assessment in the corporate world. Everyone agrees that assessment is a reasonable thing to do. Most assessment procedures are highly flawed. It is possible to get a great assessment and be poor at one's job, and vice versa. The main motivation for having an assessment process is to be able to claim that the organization has one. And assessment data is used inappropriately to push bureaucratic priorities or to punish enemies.

Welcome to the corporatization of academia.

author

Well said! You describe what Jason calls “regulatory compliance” in another comment. I’m not familiar with assessment in industry, but perhaps that is indeed part of the origin story of academic assessment for compliance?

Oct 20, 2022 · Liked by Joshua Doležal

My favorite description of assessment came from someone presenting at a conference on higher education. It went something like this: "Faculty feel about assessment the way your cat feels about getting a bath." Why? Because both feel that you are imposing something on them that they are doing all the time! Faculty are constantly assessing students' performance. You judge their expressions when you ask a question, or when a student answers it. Do they seem interested or lost? You judge the kinds of ideas they contribute to a discussion. Do their comments indicate understanding, insight, or something else? Every teacher leaves the classroom with an assessment: "Yes, that went OK. They seem to be getting it," or "That was a disaster. I'd better try another way of working on that material next time." And then you judge what they write or what their exams look like. All of those things are "assessment tools." But they don't add up to nice columns of numbers, and ultimately, the modern university is supposed to produce knowledge that can be measured like annual rainfall. Unfortunately, some of the best results of teaching only appear years later. Too late for anybody's rubric!

author

Great points, Maria. For the record, I am not categorically opposed to less subjective forms of assessment. I'm a reader of Atul Gawande and a listener of the Freakonomics MD podcast, and both offer examples of how meaningful insights can be gleaned from measuring things (like the increased workload doctors are seeing after the rise of telehealth). I think the problem in higher education is that there is a great deal of pro forma assessment -- going through the motions for accreditation, or even doing it earnestly but without a well-designed instrument -- and not much high-quality research being done by real experts in those methods. It is not dissimilar to the frustration many of my colleagues in the social sciences felt when faculty were asked to respond to poorly designed surveys. If you have a bad instrument, you get bad data, and it's reasonable in that case to ask whether forgoing the assessment altogether might do less harm than following through with it.


Yes! And in my experience, we pushed our curriculum to be entirely developmentally based, which meant we lost majors and enrollment! Great observations!


I seem to recall that Maitland Jones was not only a chemistry professor but an organic chemistry professor. As a mechanical engineer, I find it appalling that students had that much power over the destiny of an organic chemistry professor, one who some say was the father of organic chemistry.

Organic chemistry is required to pass the MCAT, which is a prerequisite for medical school.

Everyone who graduates with a BS or beyond has wanted to have a professor fired, but years later recognizes that those professors pushed them to be their best.

If these students intended to go to medical school, veterinary school and into a myriad of other science professions, don’t we think they’d want the challenge?

We’re dealing with an expectation of immediate gratification from these students, and we’re starting to see the results. How easy has it been lately for you to get one of your pets in to see a veterinarian?

Chemistry at the collegiate level is hard. In my opinion, there are courses where simply demonstrating that you understand the material is the measure.

Chemistry is one of those courses that sifts out the mediocre student.

Your Vonnegut quote was an excellent choice. It is where we are headed.

As someone who homeschooled a kiddo who is now an aerospace engineer, we were our own assessment committee. My concern, and the concern my student had, was: could he accomplish his goals with the material we chose to study? He was well prepared for the collegiate experience because of that style of “self-assessment.” He went to a top-ten engineering school and was hired into that industry during his last semester.

I may have a very simplistic viewpoint, but in my opinion things have become far too complicated, especially in the field of education. There is not enough emphasis placed on STEM, true STEM, where students endure the rigor. Assessment refinement misses the mark. Blunt assessment - can the student do the work, especially in the sciences? - is, in my humble opinion, what’s required these days.

Can I offer four questions?

1. Does the student have professional goals? If “yes” proceed to 2, if “no” proceed to trade school.

2. Does the student believe he can accomplish his goals? If “yes” proceed to 3, if “no” proceed to trade school.

3. Does the student do the work that the people who understand the subject require? (This also requires that you have people who understand the subject presenting the material.)

4. Does the student spend time on his chosen topic over and above what is assigned? If he does, he’s truly interested in the topic and will succeed. Unschooler here…

And a last note: curmudgeons are some of the most interesting people, in my opinion.

author

Glad the Vonnegut resonated. He was parodying a communist dystopia, and top-down assessment has some of those politburo trappings, too.

Oct 18, 2022 · Liked by Joshua Doležal

My understanding comes from pedagogical scholarship, much of which was turbo-charged in response to the rise of standardized assessment in K12 and the technocratic turn in academia. The problem is that many academics don't distinguish authentic from inauthentic assessment, and tie the latter to accreditor requirements.

I can try to look up some past Chronicle articles; I have a few I've kept as points of reference just to fight the gaslighting that is common in education. That is, professors tend to take all the blame for the educational system's ills--like our K12 brethren--while most critics ignore the context and structural limitations in which we work. I mention another major resource below.

As for my evidence that educators are not doing their job, one can look at the national statistics and a wide review of pedagogical scholarship, where none of this is a secret. In my experience, most professors just don't dip into the study of pedagogy; instead, academia is largely a customary and practice-based enterprise, which is why it was ripe for criticism. Just start looking at the data on graduation rates, skills, etc. Look at customary pedagogical practices that continue despite all the evidence that they're not effective. It's hard to pin any individual or institution down without local knowledge, however.

I am, in fact, not leaning toward a regulatory framework. That's a poor solution to the problem, as it leans into the problems you note. Rather, teaching itself needs to be reformed, which is frequently noted in the Chronicle or Inside Higher Ed. So familiarity with the discussion in the academic trade journals is enough to apprise oneself of the situation. Also, for the community colleges, Redesigning America's Community Colleges is a good book and is making strides everywhere. The authors might be offended if I point out that their recommendations for faculty are just a good repackaging of the last 20 years of scholarship ... or not.

Assessment as a means to correct the prioritization of research is, of course, not a good way to go. We need an educational system that doesn't pit research and teaching against each other. Assessment just reveals that students don't reliably learn what instructors profess to teach, and reveals to what extent that is an issue with the instructor, the students, or both. It should aid instructors, and this is common knowledge as part of what the scholarship calls reverse-designed courses (or what I'd call just good teaching that existed long before there were catchwords), which are supposedly what all competency-based institutions do.

You're right. Assessment doesn't, by itself, improve anything. Done well, it gives the instructor feedback. Perhaps this is missing since you don't seem familiar with it, but good assessment is what any instructor does in their own classroom. Don't be thinking about the external or accreditor model. That's equivocating on the term.

Bottom line.

My goal in my comment was to show that you're "throwing the baby out with the bathwater," as it appeared you were not familiar with modern pedagogy. And that's much of my point. Most academics aren't. And all the folkways and internal structural incentives press to replicate that mistake. External, accreditor assessment is a poor solution to that problem.

I write in compassion and frustration. I hope that you realize that the compassion is directed at you, but the frustration is not.

author

Ah -- I think we share the same beef with the external model for assessment. One form that takes is the top-down mandate to do internal assessment of a particular kind. And I've seen a lot of very bad assessment of that stripe. "Regulatory compliance" is a helpful term and would have sharpened my original post.

So there are definitely two kinds of assessment that we're talking about. I won't get into an argument about modern pedagogy, but in my experience healthy departments define their standards clearly, talk about them as a group, and build them into their grading. The external model or top-down mandate often comes in addition to this and rarely yields useful information.

All points well taken. Thanks!

Oct 18, 2022 · edited Oct 18, 2022 · Liked by Joshua Doležal

Josh, I was so caught up in reading this that I forgot to start the coffee. I'm of two minds. Assessment in higher education has a role in ensuring some standards are met. But I agree that the arts and humanities often end up stifled by the effort to assess artistic voice--lyricism. Another question you might ask is the role assessment plays in grade inflation. There is less scrutiny of a "high-performing" class, in my experience.

author

Thanks, Max! As Jason noted in an earlier comment, I think my beef is with regulatory compliance. A healthy department will define its standards clearly and build assessment into formal grading. My experience is mainly with institution-wide assessment, which is often tone-deaf to the needs of particular departments.

I deleted a fifth question from this final draft in which I described a conversation with a consultant who was trying to sell the English department a certain kind of software that would track our assessments. But the software really only understood a linear curriculum like Engineering, with a lot of prerequisites. English was much more flexible: students could take some courses, like surveys, in their first year or in their final semester. The software couldn't track that, because the outcomes were supposed to be developmental, i.e., tied to particular courses.

Had we restructured our curriculum to fit the software, we'd have seen dramatic drops in enrollment and likely would have lost majors who depended on the flexibility of our curriculum while double majoring. At the same time, we were losing tenure lines because of enrollment in our classes. So it would have been suicidal from a programmatic standpoint to take the consultant's advice. We couldn't even say that it would be better for students, because we liked the diversity of seasoned perspectives alongside fresher voices in some of those courses. It's possible that a different consultant would have had more finesse and actually listened to what we were saying, but this anecdote captures the ham-handed approach that many assessments take when they are top-down mandates.


Your depiction of academic assessment is a straw person, and that deflates many of your claims. You're not entirely wrong--it is practiced that way in many places--but that's not what it's supposed to be. That's what one gets when one phones in regulatory compliance. Also, you're railing against the version of higher ed assessment gifted to us around 2010, when the technocrats, e.g., charitable giving by the Bill & Melinda Gates Foundation, came upon the scene.

Assessment is absolutely vital, but many academics think that "assessment" only comes in the form of accreditor bean counting. Likewise, robust assessment is not, as you name it, "student evaluations, failing grades, course withdrawals, etc." Instead, it is an evaluation of student skills and also of the effectiveness of instructors in teaching those skills. Let me be specific.

Like Dr. Jones, I have for years watched my students do poorly in my introductory courses, though mine could never be called weed-out courses. Similarly, I have moved mountains to accommodate and supplement instruction. But I also refined the assessment in the reverse-designed course and discovered that approximately 3/4 of the students at my community college cannot read anywhere near grade level. Hence, the quite accessible textbooks and even the quiz questions were incomprehensible to them, as they are functionally illiterate. Were you to check the history of academic assessment--as I did upon discovering this years ago--you'd find that functional illiteracy is a common problem at community colleges, and that many K12ers and community college profs implicitly or explicitly adjust by just teaching to the text, rendering students' low reading comprehension moot. But that also reduces learning to what one can get through memorization, and makes most fields impossible to teach properly. Around here, which would be Iowa, it is clear that the whole K12 education establishment systemically produces these outcomes: students who cannot read at grade level, students who do not know this fact about themselves, and students who do not expect to have to do so.

This is all something that wise use of assessment practices can reveal.

There's another and very good reason for the assessment movement. Your entire article implicitly supposes that professors, or educators in general, are doing their job. Too often, they're not. And your past articles have noted this problem. The perverse incentive system of academia does not encourage good teaching, and assessment arose initially as an external movement to hold colleges accountable for their obvious failures.

author

Quite a lot here, Jason. You may be right that I'm being selective in my representation of assessment. I came to assessment by way of accreditation, and so my grasp of it has more to do with regulatory compliance. I've seen quite a lot of it done very badly, and in those cases it would have been better if it had not been done at all. You seem to have a different understanding of it, and I'd like to hear more about that. Your reference to 2010 and the Gates Foundation is intriguing -- is there a resource I might consult to pursue that thread?

I don't disagree with you about the perverse incentive system in higher education, but it sounds like we have different baseline assumptions about the profession. What is the evidence for assuming that educators are too often not doing their job? It seems like you're leaning toward the regulatory framework there, thinking that assessment can act as a corrective against people who are prioritizing research or maybe indulging a cult of personality?

The literacy question is a big one that I don't think we can tackle in a comment thread. I'll say only that the fragmentary approach that standardized testing takes to reading, in particular, shows the diminishing returns of assessment in actually improving literacy. But there are many layers to the literacy problem.

Thanks for reading and for sharing your thoughts.
