Book Review – Are Education Leaders Mismeasuring Schools' Vital Signs?

Tue, 17 Jan 2023

Two years ago, students at a charter school in East Los Angeles were learning at 1.5 to two times the pace of their grade-level peers around the state, based on three years of standardized test scores. But the California Department of Education labeled the school a "low performer," which put it at risk of closure. Why? Because the state rated the school on its students' current test scores rather than on how fast they were improving.

I have written before in these pages about the importance of accurate and balanced methods of measuring school quality. In the same spirit, I recommend a new book by Steve Rees and Jill Wynns, Mismeasuring Schools' Vital Signs.

Wynns spent 24 years on the San Francisco school board, while Rees spent just as long running a company that helped school districts measure and report on the quality of their schools. Both have seen their share of mistakes, many of which lead to real pain: teachers reassigned and principals removed based on faulty data; English learners held back from entering the mainstream academic program even after they have become fluent; charter schools closed due to inadequate measurement of growth; even students denied graduation based on flawed interpretation of test results.




Rees and Wynns have now authored a highly readable guide that superintendents, principals, school board members, education reporters, teachers, and advocates can use to avoid these kinds of errors. They underline the four most common flaws:

Growth v. Proficiency

The first is using children's current test scores – rather than a measure of their academic growth – to judge the quality of schools and teachers. In high-poverty schools, students often arrive several years behind grade level. Few of them are "proficient" in math or reading. But too often, states and districts give the greatest weight to students' current test scores, not their rate of improvement.

Consider a middle school whose sixth grade students arrived three years behind grade level. If they are only one year behind grade level at the end of sixth grade, that would be spectacular progress. But in California, to use but one example, the school's academic score would be in one of the two lowest categories.
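The arithmetic behind this example is worth making concrete. Here is a minimal sketch in Python, using invented grade-level-equivalent scores for a hypothetical cohort, that contrasts what a proficiency lens and a growth lens report about the same classroom:

```python
# Hypothetical grade-level-equivalent scores for five sixth graders who
# arrive roughly three years behind and end the year about one year behind.
start_of_year = [3.1, 2.8, 3.0, 3.3, 2.9]
end_of_year   = [5.0, 4.7, 5.1, 5.2, 4.8]

GRADE_LEVEL = 6.0  # expected level at the end of sixth grade

# Proficiency lens: share of students at grade level. Every student is
# below 6.0, so the school looks like a failure.
proficiency_rate = sum(s >= GRADE_LEVEL for s in end_of_year) / len(end_of_year)

# Growth lens: average grade levels gained in a single school year.
avg_growth = sum(e - s for s, e in zip(start_of_year, end_of_year)) / len(end_of_year)

print(f"Proficiency rate: {proficiency_rate:.0%}")        # 0%
print(f"Average growth: {avg_growth:.1f} grade levels")   # about 1.9
```

The same students score zero percent proficient while gaining nearly two grade levels in one year – exactly the kind of progress a proficiency-only rating buries.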

Apples v. Oranges 

The second major flaw Rees and Wynns point out is related: when trying to measure academic growth, some states and districts fail to measure the same students over time. Instead, they measure a school or grade level's average over time. But in a middle school, a third of the students each year are new arrivals, and another third from last year have departed. In four-year high schools, a quarter leave each year, another quarter arrive. So annual school or grade level averages are measuring different kids.

The solution is obvious: Measure the same cohort of students over time, following them from one grade level to the next. Even better, remove from your measure students who have departed or recently arrived at the school.
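The cohort fix the authors describe is straightforward to express in code. A minimal sketch, using invented student IDs and scale scores, that matches the same students across two years and drops arrivals and departures:

```python
# Hypothetical scale scores keyed by student ID for two consecutive years.
# Student s01 left the school after year 1; s05 arrived in year 2.
year1 = {"s01": 420, "s02": 455, "s03": 430, "s04": 410}
year2 = {"s02": 470, "s03": 452, "s04": 428, "s05": 460}

# Matched-cohort growth: only students tested in BOTH years count.
matched = year1.keys() & year2.keys()
cohort_growth = sum(year2[s] - year1[s] for s in matched) / len(matched)

# The flawed alternative: compare whole-school averages, which silently
# compares different kids in different years.
average_shift = (sum(year2.values()) / len(year2)
                 - sum(year1.values()) / len(year1))

print(f"Matched-cohort growth: {cohort_growth:+.1f} points")
print(f"Shift in school average: {average_shift:+.1f} points")
```

Even with these invented numbers the two methods disagree by several points; with real enrollment churn of a third of the student body, the gap between them can swamp the signal entirely.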

Ignoring the Imprecision of Test Results

The third common flaw is failure to acknowledge the imprecision of test scores. "When we test kids, we're trying to gather evidence of something that exists out of sight, somewhere between their ears," Rees and Wynns write. "Whatever their test scores reveal, it can only be an estimate of what they know."

Standardized tests are often used to rate children – typically into four categories, which might be summarized as advanced, proficient, needing improvement, and far behind grade level. But imprecision means some of these classifications are dubious. "The major test publishers include what they call classification error rates in their technical manuals," the authors explain. "It is common to find a 25–30 percent classification error rate in the middle bands of a range of test scores – and that's for a standardized assessment with 45–65 questions."

"In Texas, Illinois, Maryland, California, Ohio, Indiana, Florida and many other states," they add, "the parent reports make no mention of imprecision." Yet these reports tell parents whether a child is on grade level. Some states use a standardized test called the Smarter Balanced Assessment. Its "technical manual reveals that the classification accuracy rate in these middle two bands (Levels 2 and 3) is about 70 percent. In other words, just seven out of every 10 kids whose scores land in the middle two bands will be classified correctly as having either met the standard or scored below the standard."
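The mechanics of classification error are easy to demonstrate with a simulation. The cut score, error size, and band width below are invented for illustration (they are not any real test's parameters); the point is that modest measurement error around a cut score inevitably misclassifies a large share of the students near it:

```python
import random

random.seed(0)

CUT = 2500   # hypothetical "met standard" cut score
SEM = 25     # hypothetical standard error of measurement
BAND = 40    # how far from the cut the "middle band" students sit

def met_standard(score):
    return score >= CUT

trials = 50_000
correct = 0
for _ in range(trials):
    true_score = random.uniform(CUT - BAND, CUT + BAND)  # a middle-band student
    observed = true_score + random.gauss(0, SEM)         # add measurement error
    if met_standard(observed) == met_standard(true_score):
        correct += 1

print(f"Classification accuracy near the cut: {correct / trials:.0%}")
```

With these made-up numbers the accuracy lands in the 70–80 percent range, the same territory as the rates the authors quote from published technical manuals: the closer a student's true ability sits to the cut score, the closer the test comes to a coin flip.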

Lack of Context

The fourth major flaw Rees and Wynns discuss is "disregarding context when analyzing gaps in achievement." Often, a school is compared to the statewide average, when its students are anything but average. They might be affluent, or poor, or recent immigrants. If so, do we learn anything about the quality of their school by comparing them to a state average?

Rees and Wynns urge school and district leaders to compare their students to schools or districts with demographically similar children. "If you can identify other schools with kids very much like your own who are enjoying success where your students are lagging, you can call the site or district leaders and see how their approach to teaching reading differs from your own," they suggest. "That last step, compare-and-contrast with colleagues who are teaching students very similar to your own, is where your analytic investment will pay off."
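That peer-comparison step can also be sketched. Everything below is invented (school names, demographics, reading scores); the idea is simply to rank potential comparison schools by demographic distance before looking at achievement gaps:

```python
# Hypothetical schools: percent of students in poverty, percent English
# learners, and percent reading at grade level.
schools = {
    "Ours":  {"pct_poverty": 82, "pct_ell": 35, "reading": 41},
    "Alpha": {"pct_poverty": 80, "pct_ell": 38, "reading": 55},
    "Beta":  {"pct_poverty": 15, "pct_ell": 5,  "reading": 72},
    "Gamma": {"pct_poverty": 78, "pct_ell": 30, "reading": 44},
}

def demographic_distance(a, b):
    """Euclidean distance over the demographic fields only."""
    return ((a["pct_poverty"] - b["pct_poverty"]) ** 2
            + (a["pct_ell"] - b["pct_ell"]) ** 2) ** 0.5

ours = schools["Ours"]
peers = sorted((n for n in schools if n != "Ours"),
               key=lambda n: demographic_distance(ours, schools[n]))

for name in peers[:2]:  # the two most demographically similar schools
    gap = schools[name]["reading"] - ours["reading"]
    print(f"{name}: reading gap {gap:+d} points")
```

In this toy data, Beta's 31-point advantage says nothing useful, since its students are nothing like ours; Alpha's 14-point edge with nearly identical demographics is exactly the kind of peer the authors suggest calling.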

The authors point a finger of blame at schools of education, which rarely teach future teachers or administrators about data, assessment, or statistics. "Schools of education simply must stop sending data- and assessment-illiterate educators into the field," they declare.

They also urge state departments of education to disclose the imprecision of test scores whenever they report results, to do more to communicate the meaning of those results, and to create help desks that district and school leaders can turn to with data and assessment questions.

Perhaps their most novel recommendation is that we begin measuring "opportunities to learn," to draw attention to yawning gaps. Some districts assign students to the school closest to their home, for instance, while others offer significant choices – hence greater opportunity. Most districts give teachers with seniority more ability to choose their schools, leaving the schools in low-income neighborhoods to settle for rookie teachers or those no one else wants – creating a huge opportunity gap for low-income students. Some schools offer the opportunity to take more advanced courses or more career-oriented courses.

A few districts work hard to match their supply of courses and schools to what students and their families want, but most don't. The result: yet another opportunity gap. "If 90 percent of your sections are dedicated to college-level course work, and 50 percent of your graduating seniors have chosen a path to the workforce or the military, then your master schedule constrains the opportunities to learn that your students care most about," the book explains. "Work force prep courses and multiple pathways toward work-related professions would be a needed addition for that school. The question for those leading or governing districts is how actively you listen to students when they tell you what future they're aiming for, and the extent to which you direct your budget and staff to meet their desires."

A brief article cannot begin to suggest the depth and detail the authors plumb in this volume. In addition, every chapter of Mismeasuring Schools' Vital Signs includes questions people can ask to uncover data and measurement problems – and methods to solve them – in their own districts and schools. There is even a companion website, which includes interactive data visualizations and resources such as a glossary of statistical terms and a "visual glossary" showing the types of charts and graphs you can use to communicate meaningful data.

There's an old saying in the management world: What gets measured gets done. As Rees and Wynns demonstrate, in public education we too often measure the wrong things, in the wrong ways. If we're going to improve the lives of children, we have to learn how to measure what matters, accurately, and then understand what it means. Mismeasuring Schools' Vital Signs is a good place to start.

Opinion: Review: Why You Should Buy into the 'Sold a Story' Podcast

Fri, 02 Dec 2022

Let me get this hard sell on the table right up front: You should listen to "Sold a Story," a podcast about reading instruction in U.S. schools. After all, you can be concerned that 1 in 3 American fourth graders read below a basic level and still not want a deep dive into how literacy is taught. But "Sold a Story" is about more than a national problem; it's about a deeply personal struggle experienced by families of all kinds.

In the hands of adept reporter and storyteller Emily Hanford, that deep dive unfolds with crystal clarity, emotional anchors and dramatic cliffhangers to spotlight why many students struggle to read: It is because many schools don't teach them the specific skills they need to successfully do so.

The podcast’s basic premise is that extremely popular approaches to teaching young kids to read 鈥 to decode written words 鈥 give short shrift to explicit lessons that connect letters in words to the sounds they represent. In many schools, this explicit phonics instruction is sprinkled into reading lessons, but in woefully inadequate amounts and crowded out by other strategies, including 鈥渢hree-cueing鈥 鈥 which coaches students to use context or pictures to guess what unknown words are. Research, much of it decades old and now called the , shows that systematic phonics instruction is key to helping students become fluent readers. But these other approaches have largely ignored it.




Why? In six episodes, Hanford and her colleague Christopher Peak deftly stitch together the complete picture: an overview of those popular approaches to reading instruction, the national political battle over how to teach literacy and the reading guru whose three apostles, with their billion-dollar publishing company, championed this flawed approach.

The podcast focuses on the idea, established by reading guru Marie Clay, that children can become readers by leaning on context clues instead of sounding out words. Two very popular curricula from celebrated authors – "Units of Study for Teaching Reading" from Lucy Calkins and "Leveled Literacy Intervention" from Irene Fountas and Gay Su Pinnell – were the primary promoters of this flawed idea in school districts and education schools across the country, generating millions of dollars for them and their publisher, Heinemann.

Throughout, Hanford and Peak ground these episodes not in who should be blamed, but in who bears the consequences. The fallout is hitting students struggling to learn to read, parents flummoxed by their children's lack of progress and teachers who keep saying something like, "If only I had known …"

Of course, the significance of that fallout hinges on whether Hanford and Peak鈥檚 provocative claims about the scope and quality of these curricula are actually correct. There are compelling reasons to believe they are.

Regarding its scope, a 2019 nationally representative Education Week survey found that "Leveled Literacy Intervention" was used by 43% of K-2 early reading and special education teachers, while "Units of Study" was used by 16%. These curricula are Heinemann's biggest sellers. Hanford and Peak found Heinemann brought in over $233 million in the past decade from just the 100 largest districts. Imagine their business across the remaining 13,000 smaller school districts.

As to the quality, EdReports, a nonprofit reviewer of K-12 instructional materials, last year found both curricula lacking – labeling both as "does not meet expectations." However, you need not lean on expert reviews to see the disconnect in this curricular approach. In a tacit admission, Calkins revised her "Units of Study" curriculum to incorporate the Science of Reading. The disconnect is even plainer in Fountas and Pinnell's defense of their approach that encourages guessing words from context. They write, "If a reader says 'pony' for 'horse' because of information from the pictures … His response is partially correct, but the teacher needs to guide him to stop and work for accuracy."

That response lays bare how detached their approach is from teaching students to actually read text. Getting "pony" from the word "horse" can be "partially correct" only if the goal is something other than teaching students to read accurately, because it rewards children for learning to do something other than read the word. It rewards guessing. Such a strategy might get students partial meaning in the short run, but it will produce struggling readers over time. Indeed, it has.

Hanford deserves credit for her work championing the Science of Reading and pressing the case against predominant approaches to literacy used in many schools across a nation of struggling readers. Fortunately, some states and districts are responding. Some states have recently outlawed three-cueing, and New York City has moved to increase phonics instruction. But it will take time and deliberate efforts to change instruction in schools. In the interim, "Sold a Story" gives frustrated parents of struggling readers good questions to ask and the courage to demand better instruction. Clear, engaging and, yes, enraging reporting like this can help policymakers, teachers and families ensure that they are not sold a story that might hold their young readers back.
