Your mission, should you choose to accept it, is to write a blog post covering all that has transpired in Tennessee education this past week. Your post must be under 3000 words and it must contain as many positive elements as possible – ok, at least some. In the event of discovery, any acknowledgment of your actions will be disavowed. This message will self-destruct in 30 seconds.
Let’s see how I do.
STATE OF TESTING IN TENNESSEE
This week the tossing of bones by the Tennessee Department of Education (TNDOE) continued with the release of new testing data. A couple of weeks ago, it was the TVAAS scores. This week it was the state results for TNReady for grades 3-8. The district and individual scores will get tossed in the coming weeks. I love the headline of the story in Chalkbeat TN: The report card is in for Tennessee grade-schoolers. It’s not good, but it’s expected.
The article goes on to explain that two-thirds of kids in Tennessee in grades 3-8 scored below grade level in English Language Arts – note that I said “English Language Arts” and not “reading.” Math scores were only a little better. The explanation given was as follows:
The low scores for English and math for grades 3-8 were expected under the state’s transition to a new test aligned for the first time with more rigorous Common Core standards, which have been in classrooms since 2012. (Tennessee students had performed significantly better on its previous TCAP exams, which did not emphasize critical thinking skills and were based on outdated academic standards.)
In other words, we gave our kids an exam that we knew they wouldn’t do well on because we thought it was a better test. As a parent, when I get my kid’s results back, am I supposed to look at them and say, “Hey, I expected you to do badly. It’s ok.” What am I supposed to answer when they ask, “Then why did you make me take it? Aren’t tests supposed to be about what I know, not what I should hypothetically know?” I won’t have an answer other than to say, “In Tennessee we value assessment over acquisition. Acquisition is only valuable as a validation tool for the assessments.” I know, it should be the other way around, but…
Everybody at TNDOE is talking about how much better this new test is and how much more rigorous it is than our previous tests. But when I ask how we know that, I get no clear answer. The reasoning seems to be that this is a better, harder test because it’s more closely aligned with standards and more kids failed than before.
Let’s approach this in a different manner, though. Say I run a 5k race annually and each year my time improves a little bit, so I’m feeling like I want something different to challenge me. After year 5, I change to a 10k race. My pace for that race is substantially slower. What conclusions can I draw from that difference? Am I really not that good a 5k runner? Is the course really that much harder than the 5k I was running? Is my training off? Or am I just not that good a runner?
I’d say there are very few conclusions that can be drawn based on comparing the results between my 5k and my 10k time. It could be that the length of the course was a bigger adjustment than I anticipated. It could be that conditions were worse on the day I ran the 10k vs the 5k. It could be that one course was flatter and one was more hilly. A kid could be good at bubble-in questions, but not write-ins. How do we know that improvement isn’t contingent just on familiarity with the course? Or the test?
I know people will argue that we should all be training to run hills instead of flat races. But does running hills well really indicate that I am a better runner? Terrain is just another variable. My liberal arts education always taught me that in order to get the most accurate measurement possible, you need to remove as many of the variables as possible.
One year of data is not a real indication of anything other than this: kids are not very good at taking this test. In order to draw any meaningful conclusions, you would have to have a set of data that you could analyze for trends. Simply taking a 10k race and comparing its results to the results of a 5k race, just because both are races, is not a valid means to draw conclusions about a runner’s abilities. The same holds true for students and testing.
Yet, the TNDOE feels comfortable ignoring the plethora of problems they’ve encountered administering the test over the years and declaring, in the words of State Education Commissioner Candice McQueen, the following:
This is a key moment for our state, as we are now transitioning to the point where we have a true understanding of where students are from elementary through high school, and we can use that information to better support their growth.
How? Based on what? Say your doctor comes to you and says, “After putting you on the treadmill, I think you are going to need a knee replacement.”
Do you rush out and get that knee replacement? Or do you say, “Whoa, Doc. Let’s do a few more tests. Let’s get a little more data before we do anything extreme.”?
It’s in this light that I suggest we might want to slow the accolades and proclamations down a bit. The next couple weeks, as more results get released, should prove interesting.
WHO NEEDS TEACHERS?
Here’s another question for you: If I put together a council to address the need for affordable housing in the city and I don’t put any builders on the council, what would you think of my council?
What if I put together a council to come up with solutions to crime, and I don’t include any police officers?
That’s essentially what we’ve done in Nashville when it comes to literacy. The mayor’s office and Metro Nashville Public Schools (MNPS), along with the Nashville Public Education Foundation (NPEF), recently assembled a council, the “Nashville Literacy Collaborative,” to develop a plan to increase student literacy in Nashville. According to the Blueprint for Early Childhood Success they released this week, they challenged themselves to “think bigger, dig deeper, and come up with a plan to accelerate action.” But apparently not big enough or deep enough to notice that there were no classroom teachers sitting among them.
Right now, some of you may be saying, “Wait a minute, I saw some professional educators on the list.” Yes, there was the Chief Academic Officer, and a literacy coach, and a principal who doesn’t even have a doctorate – though we are currently paying for half her tuition in a doctoral program at Trevecca University to rectify that status. But there wasn’t a single person who is charged with improving the literacy level of our kids on a one-to-one basis daily, i.e., a classroom teacher. Man, if you don’t think that sends a message.
While there is a lot of good information, and a few solid recommendations, in the blueprint, I do have some issues with the report. But in keeping with my mission, I want to focus on three.
Look at the headline included with the blueprint:
2 in 3 Nashville third-graders are not reading on grade level.
Scary, isn’t it? Horrific! Crisis-level, right? But is it accurate?
Ask somebody on the panel where they are drawing this data from. Ask them to define what grade level means. I’m sure it’s referenced somewhere in the report, but for the sake of brevity, let’s go to Jason Gonzales’ article in the Tennessean:
About 34 percent of the district’s third-grade students left the 2014-15 school year without the appropriate level of literacy skills, as determined by the state. The number was based on the state’s older, less rigorous TCAP test.
Now refer back to earlier in this post where I pointed out that TCAP was not a reading test. It encompasses both grammar and spelling. Look at the year attached to the data set, 2014-2015. Due to testing problems, the most recent data we have is 2 years old. So we are making a proclamation based on a test that is not solely a measurement of reading and is arguably outdated.
I hear the chorus rising, “But TC, we have other tests…” True. Last year, the district implemented MAP assessments for the entire district. So let’s look at those results and see what they show. They may show the same number of kids not reading on grade level. But at least we’d be using current and more accurate numbers. The MAP testing numbers should be included in the report, but unfortunately they are not.
Now about that “grade level” thing. Ask anybody what it means and most will tell you it means kids are performing at a level comparable with most kids at that grade level. Sounds great so far, right? But how is that level determined?
Back when dinosaurs walked the earth, better known as “when I went to school,” standards were determined by a bell curve. All the kids in each grade would take the test and then teachers would review the answers. And if you fell in line with most of your peers, you were on grade level. If you scored on the high end, you were above grade level. If you were on the low end, below grade level. It wasn’t perfect, but at least it was rooted in an explainable formula.
We don’t do things like that anymore. What happens now, in the enlightened age, is we get a group of people involved in education together – notice I didn’t say educators – and they hash out what they think, based on their experiences and the demands of future employers, kids should know. It’s not quite as arbitrary as I make it sound, but it is pretty arbitrary, and in my opinion, should be taken with a grain of salt. Especially when people are using it as a tool to scare you.
I am not saying there are no issues and that improvement is not needed. What I am saying is that if we are going to design and prescribe policy, let’s do it with defined terms, accurate data, and include people who do the work daily. I don’t think that I am asking for anything crazy.
The next point I want to draw your attention to in this Blueprint is about our English Learners program. The blueprint lists “20 Truths that frame our thinking.” Here is Truth Number 16 (from page 27):
There is a need for more innovative supports for children learning English.
According to MNPS, 20 percent of elementary students are English-language learners. These numbers are rising at a particularly rapid rate. However, proficiency with ELL students has been fairly stagnant over the last several years.
Classroom instruction and interventions need to be tailored to build upon each child’s unique background. Research shows that instruction that works to build comprehension and context is more effective than an overemphasis on basic skills, and that ELL instruction that incorporates native or primary language leads to higher literacy achievement. Other communities, for example, Palm Springs Unified School District, have deployed tiered systems of supports for ELL students that include an out-of-school-time program to support literacy development.
Later in the blueprint, under recommendations, is this (from page 51):
Our district serves an increasing number of English-language learners. Unfortunately, the increase in these families has been so rapid that the district has struggled to keep up both with the volume of need and with cutting-edge thinking and programming. Reaching citywide literacy goals will require we take a more assertive approach to ELL programming. To that end, we recommend that the district:
- Encourage the district to engage additional expertise to assess current elementary-level ELL services and supports to determine what is needed to accelerate improvements with these students.
- Forge a partnership with the Nashville Newcomer Academy to share early literacy instructional best practices with schools serving a high number of ELL families.
- Invest in out-of-school programming specifically designed to help support language development (e.g., Saturday School for Newcomers, special programming during summer or fall/winter break, tutoring resources). An example of this kind of programming is the Palm Springs Year-Round EL Learning Model.
- Investigate specialized instructional models and teacher training for high-ELL schools. An example is the Sobrato Early Academic Model.
I don’t know who made this determination, since there is nobody on the council with any experience in EL instruction. I would have to strongly object to the assumptions made here. In my opinion, there seems to be a concerted effort as of late to discredit our EL department. I’m not sure what the reasoning behind the attacks is, but I find them disingenuous at best.
Over the last several years, our EL department in MNPS has far surpassed the TNDOE’s prescribed annual measurable outcomes (AMOs). Our program has been recognized for its innovation by numerous national EL-centered organizations, including the Council of Great City Schools, which recently asked our director to present on EL practices at its annual convention.
As a parent of children in one of the most diverse schools in Nashville, Tusculum Elementary School, I have a great deal of insight into our EL programs. What is written in this report is not an accurate representation of what is actually transpiring. In fact, I think it reeks of politics and a hidden agenda. It has no place in a document of this magnitude, and I would argue that it needs to be amended lest it taint the whole Blueprint.
I want to point out one last omission in this Blueprint. Please do not take my criticisms as a dismissal of the good people and their hard work on this Blueprint. I think it’s incredible that Nashville as a city recognizes the importance of and is willing to invest in a city-wide literacy initiative. I would only argue that there just weren’t enough of the “right” people included in the process. Their exclusion is probably a factor in the most glaring omission in this work.
Nowhere in these recommendations is the simple edict to give kids more time to read in schools. Teaching reading can be a very challenging task, but it also has a very simple component. If you want to improve kids’ reading, have them read more.
Going back to our running analogy: coaches tell runners that if you want to be a better runner, you need to run more. Don’t like running? Pick any other activity and the same will hold true. If you want to be better at something, you need to do it more. Nobody ever became a great guitar player by only playing at home or once a month. The same holds true for reading.
We have to make kids see reading as a relevant activity. Schools provide the perfect opportunity to make this happen. Increased time for kids to read in school has to be a prime ingredient in any plan to increase literacy. In the words of Maplewood HS teacher Jared Amato:
“My take is that if every kid in grades K-12 had the opportunity to read a brand-new book every single month of the year and talk to caring adults, they will see themselves as readers, and ultimately get better at reading. If we increase access, make reading relevant and make it fun, the results will come.”
We’ve only covered a fraction of what I wanted to get to today, but I’m about out of words and we have to get to the poll questions. I’ll have more on Monday, including the strange tale of Hume-Fogg, MLK, and the district girls soccer championship.
For my first question, since we just finished the first quarter of school and report cards are due, I’d like you to grade the district. We’ll even use their scale – exceeds expectations, meets expectations, making progress towards expectations, or failing to meet expectations.
Question two is about the recent Educators Voice sessions. If you went, what was your takeaway? If you didn’t, why not?
Last question is in regards to the recently created Community Superintendent positions. They’ve been on the job for several months now, so what kind of feedback do you have for them?
That’s it for today. You can contact me at firstname.lastname@example.org. Be sure to check out the Dad Gone Wild Facebook page.