“Your friend professes belief yet I’m not convinced. What about you? Are the gods real?”
“They are real,” says I, “And you’re a prick.”
― Glorious Exploits
I must admit, the world has given me a bit of a thumping of late.
Between an increased workload, the responsibilities that come with parenting, and the steady pressure of day-to-day living, I’ve been operating just below the surface. Not overwhelmed in any dramatic sense, but consistently stretched. Writing requires time, attention, and a certain amount of mental space. Lately, those have been in short supply.
There is also a second admission, more difficult than fatigue.
I’m experiencing a crisis of faith—not in public education itself, but in the ecosystem that governs it.
For more than a decade, I’ve covered education policy at the local, state, and national levels. I’ve followed board votes, legislative sessions, budget decisions, and leadership transitions. I’ve written through changing superintendents, shifting political majorities, and rotating reform agendas.
The purpose has always been straightforward: to examine decisions, test claims against evidence, and translate institutional language into something accessible to the people most affected by it.
Lately, I’ve found myself asking whether that work meaningfully alters outcomes.
There have been tangible successes. Some harmful initiatives were stopped or softened. Some individuals who should not have remained in positions of authority no longer hold them. Those outcomes matter.
At the same time, the broader system continues largely unchanged. Structures persist. Incentives remain intact. Narratives adapt more quickly than practices. In many cases, criticism is absorbed, rebranded, and redeployed without substantive adjustment.
That tension came into focus during a recent conversation with a school board member—someone I respect and generally regard as thoughtful.
During that exchange, they laid out a timeline for a policy decision, and I knew that timeline to be inaccurate. The discrepancy was not a matter of interpretation but of fact.
Because of the personal regard I have for this individual, I hesitated. Rather than immediately challenging the claim, I questioned my own recollection. The conversation concluded with a familiar refrain:
“We can have different opinions, but we are not entitled to different facts.”
The statement was correct.
Afterward, I reviewed the public record. Multiple sources confirmed my understanding. The timeline presented to me was wrong.
When I raised that point, the response was not correction but deflection. The discussion shifted. The factual disagreement was treated as secondary, its resolution unnecessary.
This pattern is not unique. It reflects a broader dynamic within education governance.
School board members are largely dependent on information provided by district leadership. That information often arrives curated, framed to justify prior decisions, and rarely challenged internally.
My reporting draws from a different set of sources. Over time, I’ve built relationships with educators working directly with students—teachers, counselors, principals—people whose daily experience often diverges from official narratives.
These perspectives are not inherently superior, but they are indispensable. Without them, policy discussions risk becoming self-referential, detached from classroom reality.
Too often, those closest to students are expected to implement decisions without being meaningfully consulted in their design. Their role is execution, not authorship.
Despite that, moments of connection continue to provide clarity.
Occasionally, an educator will approach me and say, “You’re TC Weber, aren’t you? I read you.”
They don’t always agree with what I write. Agreement is not the point. Engagement is.
Those exchanges serve as a reminder that there is value in sustained, critical attention—even when progress feels incremental.
So while time remains limited and skepticism has increased, the work continues.
— — —
Which brings us to this week, and yet another proposal to create yet another database.
State Senator Bill Powers (R–Clarksville) has announced plans to sponsor legislation requiring school districts and public charter schools to implement a computer system for documenting what the bill describes as “early warning signs” related to student health, safety, and behavior. According to public statements, these signs would include bullying, harassment, intimidation, mental health concerns, substance abuse, and self-harm.
At first glance, the intent appears straightforward: identify concerns earlier and intervene before harm occurs. The difficulty lies in the details.
The legislation, as described, hinges on the identification and documentation of “early warning signs,” a term that has not been clearly or consistently defined. Without a precise definition, implementation necessarily relies on individual interpretation.
That raises an immediate and reasonable question: how does the system distinguish between typical adolescent behavior and conduct that signals genuine risk?
Those categories often overlap. Adolescence is, by definition, a period marked by emotional volatility, boundary testing, and inconsistent judgment. Codifying observations without sufficient context risks transforming developmental behavior into permanent records.
Once entered into a database, information tends to persist. It is accessed by people removed from the original context, interpreted through institutional lenses, and rarely revisited with the nuance that accompanied the initial observation. Any proposal that formalizes student behavioral data should account for that reality.
There is also the question of who bears responsibility for populating the system.
In practice, data does not enter databases on its own. Teachers and school staff would be required to observe, interpret, and document student behavior—adding to workloads that are already widely acknowledged as unsustainable.
It is reasonable to ask how this requirement would be enforced and evaluated. If documentation is inconsistent or does not align with administrative expectations, teachers could find themselves subject to scrutiny unrelated to student safety.
That risk is not theoretical. Educators already operate in environments where compliance is closely monitored and deviations, however minor, can carry professional consequences.
From a family perspective, the stakes are equally high. Students do not reset each academic year. Behavioral records can follow them for years, shaping perceptions long after the original incident has passed. Any system that formalizes behavioral data must grapple with the possibility of long-term impact based on short-term judgment.
More fundamentally, this proposal reflects a recurring pattern in education policy: diagnosing relational problems as data deficits.
Schools do not struggle because they lack information about students. They struggle because time, staffing, and structural support for meaningful relationships have been systematically reduced.
This proposal does not address that reality. Instead, it risks exacerbating it by diverting attention and time away from the very relationships that allow educators to recognize context, nuance, and change.
If the goal is student safety, there are alternatives that merit serious consideration.
Reducing administrative burden so teachers have time to know students.
Ensuring principals and support staff have capacity to engage meaningfully with school communities.
Providing well-supported School Resource Officers who are trained and encouraged to build relationships rather than simply enforce compliance.
These approaches are more difficult to quantify. They do not produce clean dashboards or easily summarized metrics. But they align more closely with how students are actually known and supported.
If, instead, the objective is to demonstrate action—to produce a visible artifact of concern that can be cited in hearings or campaign materials—then a new database accomplishes that goal.
The distinction between those two aims matters.
— — —
Speaking of data, ACT scores and participation rates for the Class of 2025 were released last week.
For the fourth consecutive year, Tennessee reported a 99% participation rate, with an average composite score of 19.3. The ACT is scored on a scale from 1 to 36 and is widely used in college admissions. According to ACT guidance, many colleges set minimum composite score thresholds between 18 and 20.
Overall, statewide performance remained largely consistent with recent years.
There were notable areas of improvement. English Learners showed gains, with the percentage scoring 21 or higher increasing from 2.3% in 2024 to 6.3% in 2025. District-level data indicates that a significant portion of that growth occurred in Williamson County Schools, where the share of English Learners meeting or exceeding a 21 rose from 20% to 37.1%.
Average composite scores for selected Middle Tennessee public school districts were as follows:
Cheatham County School District: 19.3
Clarksville–Montgomery County School System: 19.3
Dickson County School District: 19.0
Maury County Public Schools: 18.0
Metro Nashville Public Schools: 17.5
Robertson County Schools: 18.3
Rutherford County Schools: 19.8
Sumner County Schools: 20.8
Williamson County Schools: 25.3
Wilson County Schools: 20.4
These figures are frequently contextualized by differences in student demographics, including poverty rates, mobility, and the proportion of English Learners. Those factors are relevant and should be acknowledged.
They do not, however, alter the practical reality that students across districts compete for the same postsecondary opportunities. Colleges and employers evaluate individual applicants, not district-level explanations.
From that perspective, persistent score disparities raise questions that extend beyond context alone—particularly around access to rigorous coursework, instructional consistency, and long-term academic preparation.
The data does not offer simple conclusions. It does, however, suggest that stability at the state level can mask significant variation at the local level, with implications for students depending on where they attend school.
— — —
The upcoming legislative session promises another round of debate over vouchers.
House Speaker Cameron Sexton has expressed interest in removing income caps and enrollment limits from Tennessee’s Education Savings Account program.
“Whether you’re making $30,000 or $140,000, you should have the opportunity,” Sexton said.
He’s not entirely wrong. Studies show that living comfortably in Davidson County requires well over $100,000 annually. What looks affluent on paper often isn’t in practice.
This comes as lawmakers also discuss expanding Education Freedom Scholarships, which launched last year with a cap of 20,000 students.
Notably, ESA enrollment remains well below its legal cap. Fewer than 5,000 students are currently enrolled.
Democrats remain opposed.
“We cannot afford a more expensive voucher program that drains resources directly from classrooms,” said Senate Minority Leader Raumesh Akbari.
Here’s where I struggle.
We’re always told classrooms are cash‑strapped—until leadership wants something. Then millions appear for consultants, programs like AVID, new assessments, scripted curricula, and central office salaries.
So forgive me if the rhetoric rings hollow.
— — —
Finally, MNPS was recently recognized by the National Alliance of Black School Educators as a 2025 Demonstration District.
The press release was glowing.
The data is more complicated.
Achievement gaps remain wide. Graduation rates show progress—but also regression for Hispanic students. Gains are real, but uneven.
Still, district leaders accepted the award, highlighting strategies like “guaranteed, personalized, and applied experiences.”
That language usually translates into scripted lessons and lockstep pacing.
Better data points, perhaps.
Better adults?
I’m not convinced.
— — —
Then there’s this.
MNPS has entered into a contract with Eduservice, Inc., doing business as CT3, to pilot a program in which teachers receive real-time instructional feedback via an earpiece while teaching. According to contract language approved in May, the program is framed as a “comprehensive professional development” initiative focused on instructional practice and classroom management.
The pilot is limited to five schools, with any future expansion left solely to the discretion of the district.
The original contract authorized expenditures not to exceed $1 million. In October, that ceiling was amended to $2 million.
Those are not interpretations. Those are public records.
Under the model being piloted, teachers deliver instruction while receiving live prompts from an administrator or instructional coach. The district has presented this as support. Whether classroom teachers experience it that way remains an open—and largely unasked—question.
Comparable arrangements do not exist in most licensed professions. Physicians are not coached in real time while diagnosing patients. Attorneys are not guided mid-argument during trial. In those fields, professional judgment is presumed.
In teaching, it increasingly is not.
MNPS has faced sustained challenges related to staff retention, morale, and workplace stress. It has become more common for teachers to leave midyear or with minimal notice, and anxiety is frequently cited as a contributing factor. Any intervention introduced into classrooms should be evaluated in light of those conditions.
It is reasonable to ask whether real-time monitoring and feedback during instruction alleviates or compounds that stress.
It is also reasonable to ask who determines best practice in these moments, and on what evidence that determination rests.
Finally, there is the question of student experience. Effective teaching depends on attention, presence, and trust. Introducing a second, unseen participant into live instruction inevitably alters that dynamic.
If the goal is to strengthen teaching and learning, the burden should be on the district to demonstrate that this model does so—without increasing anxiety, undermining professional autonomy, or weakening the teacher–student relationship.
— — —
This isn’t corporate media.
There’s no team. No budget. No handlers.
It’s just me—trying to keep up, trying to keep you informed, and trying to say what others won’t.
If you value that work:
Venmo: @Thomas‑Weber‑10
Cash App: $PeterAveryWeber
Tips / Story Ideas: Norinrad10@yahoo.com
Until next time:
Accountability begins with accuracy—and with listening to the people closest to the work.