
Today’s flowers go, admittedly about two weeks late, to UNC-Greensboro’s Dr. Christina O’Connor.
In addition to serving as UNCG’s Director of Professional Education Preparation, Policy & Accountability, Dr. O’Connor sits on the Preparation and Entry subcommittee of PEPSC, the body working on the Human Capital Roundtable’s North Carolina teacher merit pay proposal.
At the June 10 meeting of that subcommittee, Dr. O’Connor reported back to the full group on her breakout room’s thoughts about the proposal. She voiced the same concern teachers have been raising ever since the merit pay plan became public: the NC teacher evaluation instrument used by principals (NCEES) is too subjective to determine teachers’ salaries and career advancement opportunities.
“If we’re going to be making high-stakes decisions about people’s careers, we need to make sure we’re using instruments that have solid data quality behind them.”
NC Department of Public Instruction’s Dr. Tom Tomberlin, chief supporter of the merit pay plan’s current design, was none too pleased.
Audio and a transcript of this part of the meeting are below:
Dr. O’Connor: We don’t believe NCEES should be part of this.
I saw that in the feedback too. A lot of the feedback that I read had a lot of concerns about NCEES being used for this, and there was a lot of, you know, concern about the peer review process. So we tried to streamline it, simplify it, and, you know, still have multiple pathways, but keeping that bar of INTASC standards and validity and reliability.
If we’re going to be making high-stakes decisions about people’s careers, we need to make sure we’re using instruments that have solid data quality behind them.
Dr. Tomberlin: So, is there some evidence that NCEES doesn’t have validity and reliability?
Dr. O’Connor: I think there’s lots of anecdotal evidence that it’s not reliable, that the scores on it are highly subjective, and that there’s not a lot of consistency. The validity, you know, I think you could make an argument that it has some validity. There’s some validity evidence there as far as being, you know, crosswalked to the North Carolina standards and the INTASC standards, but as far as the training and the reliability of the data, I think there’s lots of concern. And I don’t think that there’s been, I have not seen any reliability evidence published on that.
Dr. Tomberlin: So, my concerns with the way it’s implemented, I’m with you on that. As far as an instrument, whether…
Dr. O’Connor: Instruments are not reliable. Data is reliable.
Dr. Tomberlin: I understand that Dr. O’Connor.
What I’m saying is that that tool passed those requirements for validity and reliability. What is our theory of action that any other instrument we choose, one that has similar levels of validity and reliability, is not going to be implemented in a way that’s equally problematic to what we’re seeing with NCEES? And my question is, is the evaluation process itself fundamentally problematic (laughs), or is it the instrument we’ve decided to use? And given that virtually every other state in the union has the same issues that we have with evaluation, it leads me to believe that it’s not instrument-specific. It’s some other quality of the process.
***
As a reliability-related side note, DPI’s Dr. Kim Evans reports directly to Dr. Tomberlin and is tasked with keeping minutes for PEPSC subcommittee meetings.
I’ll let you be the judge of whether the minutes from this meeting accurately capture this important exchange between Dr. O’Connor and Dr. Tomberlin.
