Posted: June 10, 2013 | Updated: August 10, 2013

Who Is Charlotte Danielson and Why Does She Decide How Teachers Are Evaluated?

Teachers are being told that while there is no official lesson plan design, they better follow the recommended one if they expect to pass the upcoming evaluations.

A New York Times editorial endorsed the state-imposed teacher evaluation system for New York City as "an important and necessary step toward carrying out the rigorous new Common Core education reforms." The system is based on the Danielson Framework for Teaching, developed by Charlotte Danielson and marketed by the Danielson Group of Princeton, New Jersey.

Michael Mulgrew, the president of the city's teachers union, and Mayor Michael Bloomberg also announced that they are generally pleased with the plan. According to the Mayor, "Good teachers will become better ones and ineffective teachers can be removed from the classroom." He applauded State Commissioner John King for "putting our students first and creating a system that will allow our schools to continue improving."

Unfortunately, nobody, not the Times, the New York State Education Department, the New York City Department of Education, nor the teachers' union, has demonstrated any positive correlation between teacher assessments based on the Danielson rubrics, good teaching, and the implementation of new, higher academic standards for students under Common Core.

A case demonstrating the relationship could have been made, if it actually exists. A format based on the Danielson rubrics is already being used to evaluate teachers in at least thirty-three struggling schools in New York City and by one of the supervising networks. Kentucky has been using an adapted version of Danielson's Framework for Teaching to evaluate teachers since 2011, and according to the New Jersey Department of Education, sixty percent of nearly 500 school districts in the state are using teacher evaluation models developed by the Danielson Group. The South Orange/Maplewood and Cherry Hill, New Jersey schools have used the Danielson model for several years.

According to the Times editorial, the "new evaluation system could make it easier to fire markedly poor performers" and help "the great majority of teachers become better at their jobs." But as far as I can tell, the new evaluation system is mostly a weapon to harass teachers and force them to follow dubious scripted lessons.

Ironically, in a pretty comprehensive search on the Internet, I have had difficulty discovering who Charlotte Danielson really is and what her qualifications are for developing a teacher evaluation system. According to the website of the Danielson Group, "the Group consists of consultants of the highest caliber, talent, and experience in educational practice, leadership, and research." It provides "a wide array of professional development and consulting services to clients across the United States and abroad" and is "the only organization approved by Charlotte Danielson to provide training and consultation around the Framework for Teaching."

The group's services come at a cost, which is not a surprise, although you have to apply for their services to get an actual price quote. Individuals who participated in a three-day workshop at the King of Prussia campus of Arcadia University in Pennsylvania paid $599 each. A companion four-week online class cost $1,809 per person. According to a comparison chart prepared by the Alaska Department of Education, the "Danielson Group uses 'bundled' pricing that is inclusive of the consultant's daily rate, hotel and airfare. The current fee structure is $4,000 per consultant/per day when three or more consecutive days of training are scheduled. One- and two-day rates are $4,500/per consultant/per day. We will also schedule keynote presentations for large groups when feasible. A keynote presentation is for informational/overview purposes and does not constitute training in the Framework for Teaching."

Charlotte Danielson is supposed to be "an internationally-recognized expert in the area of teacher effectiveness, specializing in the design of teacher evaluation systems that, while ensuring teacher quality, also promote professional learning" who "advises State Education Departments and National Ministries and Departments of Education, both in the United States and overseas." Her online biography claims that she has "taught at all levels, from kindergarten through college, and has worked as an administrator, a curriculum director, and a staff developer," and that she holds degrees from Cornell, Oxford, and Rutgers, but I can find no formal academic resume online. Her undergraduate degree seems to have been in history with a specialization in Chinese history; she studied philosophy, politics, and economics at Oxford and educational administration and supervision at Rutgers. While working as an economist in Washington, D.C., Danielson obtained her teaching credentials and began work in her neighborhood elementary school, but it is not clear in what capacity or for how long. She developed her ideas for teacher evaluation while working at the Educational Testing Service (ETS), and since 1996 she has published a series of books and articles with ASCD (the Association for Supervision and Curriculum Development). I have seen photographs and video broadcasts online, but I am still not convinced she really exists as more than a front for the Danielson Group, which is selling its teacher evaluation product.

The United Federation of Teachers and the online news journal Gotham Schools both asked a person purporting to be Charlotte Danielson to evaluate the initial Danielson rubrics being used in New York City schools. In a phone interview reported in Gotham Schools, conducted while Danielson was supposedly in Chile selling her frameworks to the Chilean government, "Danielson was hesitant to insert herself into a union-district battle, but did confirm that she disapproved of the checklist shown to her." The checklist "was inappropriate because of the way it was filled out. It indicated that the observer had already begun evaluating a teacher during the classroom observation. She said that's a fundamental no-no."

The bottom line is that 40 percent of a teacher's evaluation will be based on student test scores on standardized and local exams and 60 percent on in-class observations. In this post I am most concerned with the legitimacy of the proposed system of observations, which are based on snapshots: fifteen-minute visits to partial lessons, conducted by supervisors with potentially limited or no classroom experience in the subject being observed, followed by submission of a multiple-choice rubric that is evaluated online by an algorithm deciding whether the lesson was satisfactory or not.
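To make the weighting concrete, here is a minimal sketch, in Python, of how such a composite might be computed. It assumes, purely for illustration, that both components are already expressed on a common 0-100 scale; the function name, scale, and example numbers are hypothetical and not part of the official plan.

```python
# Hypothetical sketch only: the 40/60 weights come from the plan described above,
# but the 0-100 scale, function name, and example numbers are assumptions.

def composite_rating(test_score_component: float, observation_component: float) -> float:
    """Weight the two components 40% (test scores) and 60% (observations)."""
    return 0.4 * test_score_component + 0.6 * observation_component

# Example: a teacher scoring 55 on the test-score component and 90 on observations
print(composite_rating(55, 90))  # 76.0
```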

Imagine an experienced surgeon, in the middle of a delicate six-hour procedure during which she responds to a series of unexpected emergencies, being evaluated by a computer based on data gathered from a fifteen-minute snapshot visit by a general practitioner who has never performed an operation.

Imagine evaluating a baseball player who goes three for four with a couple of home runs and five or six runs batted in based on the one time during the game when he struck out badly.

Imagine a driver with a clean record for thirty years who has his or her license suspended because a car he or she owned was photographed going through a red light, when perhaps there was an emergency, perhaps he or she was not even driving the car, or perhaps there was a mechanical glitch with the light, camera, or computer.

Now imagine a teacher who adjusts instruction in response to important questions raised by students, and who is then told the lesson was unsatisfactory because it did not follow the prescribed scripted lesson plan, and because during the fifteen minutes the observer was in the room the observer failed to see what he or she was looking for, even though it might actually have happened before the observer arrived or after the observer left.

When I was a new high school teacher in the 1970s, I was observed six times a year by my department chair, an experienced teacher and supervisor with expertise in my content area. We met before each lesson to strengthen the lesson plan and in a post-observation conference to analyze what had happened and what could have been done better. Based on the conferences and observations we put together a plan to strengthen my teaching, changes the supervisor expected to see implemented in future lessons. The conferences, the lesson, and the plan were then written into a multi-page observation report that we both signed. These meetings and observations were especially important in my development as a teacher and I follow the same format when I observe student teachers today.

As I became more experienced, the number of formal observations decreased. I still remember a post-observation conference at a different school and with a different supervisor who had become both a mentor and a friend. After one lesson he virtually waxed poetic about what he had seen, but then suggested three alternative scenarios I could have pursued. Finally I said I appreciated his support and insight, but if I had done these other things, I would not have been able to do the things he really liked. He paused, said I was right, and told me to just forget his suggestions.

But under the new system, principals will drop in for a few minutes and punch in some numbers. Teachers then will be rated, mysteriously or miraculously, based upon a computer algorithm using twenty-two different dimensions of teaching. Astounding!

And this assumes principals know what they are doing, have the independence to actually give teachers a strong rating, and are not out to get the good teacher who is also a union representative or just a general pain in the ass like I was.

But that is a big assumption. Teachers in the field report to me that the New York City Department of Education is already trying to undermine the possibility of a fair and effective teacher evaluation system. I cannot use their names or mention their schools because they fear retaliation. I urge teachers to use Huffington Post to document what is going on with teacher evaluations in their schools.

Within hours after an arbitrator mandated use of the Danielson teacher evaluation system, New York City school administrators received a 240-page booklet explaining how to implement the rubrics next fall. Teachers will receive six hours of professional development so they know what to expect, not so they know how to be successful. Teachers are being told that while there is no official lesson plan design, they better follow the recommended one if they expect to pass the evaluations.

Administrators are instructed how to race in and out of rooms and punch codes into an iPad, with evaluations actually completed in cyberspace by an algorithm. Teachers will fail when supervisors do not see things that took place before or after they entered the room, when lesson plans do not touch on all twenty-two dimensions, or when teachers adjust their lessons to take into account student responses.

Teachers expect to be evaluated harshly. In December 2012, the New York Daily News reported that the Danielson rubric, while still unofficial, was already being used to rate teachers as unsatisfactory.

This year there also appears to be an informal quota system for the granting of tenure. Teachers recommended for tenure by building administrators are being denied by central administration, which suggests how little the opinions of building-based administrators are valued.

As I have written repeatedly in other posts, there are useful educational goals established by the Common Core standards. But unless the standards are separated from the high-stakes testing of students and from the evaluation of teachers and schools, they will become an albatross around the neck of education and a legitimate target for outrage from right-wing state governments, frustrated parents, and furious teachers, and they will never be achieved.
