ed tech – 社区黑料 America's Education News Source
Fri, 17 Apr 2026 00:12:28 +0000

Parents, Schools Clash Over Movement to Abolish Screens
/article/parents-schools-clash-over-movement-to-abolish-screens/
Thu, 16 Apr 2026 10:30:00 +0000

With more parents pushing for limits on screen time in the classroom, Vermont state Rep. Rob Hunter, a Democrat, wants to make it easier for them to opt their children out of using laptops and iPads.

He co-sponsored a bill this year that would give parents an ed-tech "right of refusal." A former English teacher, he was never a fan of the shift toward every student having their own laptop. Technology, he said, isn't making students any smarter.


Get stories like this delivered straight to your inbox. Sign up for 社区黑料 Newsletter


"In fact, we know it's making them dumber," he said, expressing a view shared by parents across the country, especially those with students in the elementary grades.

When his fellow lawmaker Rep. Leanne Harple read the bill, she imagined how tough it might be for teachers to accommodate such requests. An English teacher herself, she also speaks from experience. Her students do research online, where the information is more up to date than in books and academic journals. A 2024 American Federation of Teachers survey showed 83% of teachers use technology in the classroom daily.

The bill "would create, in some cases, a lot more work," she said. For every assignment, teachers would "have to create an alternative that's completely analog."

Their opposing views on the topic reflect a growing national debate. Parents who advocated for bell-to-bell cellphone bans are now targeting Chromebooks and other ed tech. Influenced by researchers like Jonathan Haidt and Jared Cooney Horvath, who argue that cellphones and classroom technology have harmed students' development, they've mobilized in Facebook groups. They're demanding pencil-and-paper assignments and asking teachers to excuse their kids from computer-based math and reading apps. Their pleas have sparked pushback from districts that for years have relied on technology for everything from curriculum to testing.

"In August, almost no one was talking about this, and now I'm having no other conversations," said Kelly Clancy, a mom of three in South Brooklyn, New York, who also serves on her local community education council. "There's a sea change in parents realizing that they don't want their kids in front of screens."

She's among those challenging the New York City schools' use of digital programs. She refused to let teachers enter her kids' work into an AI tool from the curriculum company HMH that generates feedback on student writing. But when she tried to opt her children out of i-Ready, a widely used testing program from the company Curriculum Associates, she met resistance. The tests are a "baseline component" of the district's assessment system, David Pretto, superintendent of District 20, wrote in an email. Her school's principal, he said, "is not in the position to exclude your child from universal screening."

Clancy didn't take no for an answer.

"We will get legal advice if necessary, but my children will not complete these," she wrote back.

In a statement, the district said any tool using student data "must undergo a rigorous … review process to meet strict privacy, security and compliance standards before it is approved for use." Officials urged parents to contact local schools with their concerns.

When New York City parent Kelly Clancy said she wanted to opt her children out of i-Ready, a local superintendent said she couldn't.

Across the country, Seattle Public Schools has advised staff that "families may not opt out of district-adopted digital curriculum," but a spokesperson for the district told 社区黑料 that "this is an evolving landscape," and "we will continue to review and update the guidance as needed."

Parents in Pennsylvania's Lower Merion School District are also determined to keep their students off Chromebooks at school.

"They're saying we can't, but we'll find a way," Yair Lev, a parent of two, said after a meeting last month in which Superintendent Frank Ranelli said opting out wasn't possible because the curriculum is computer-based.

Teachers, Lev said, are caught in the middle. He collected written comments from five teachers, who said students often access gaming sites and YouTube during class, and even make video calls to students in other classrooms.

"There should be clear districtwide policies and parameters for when laptops should and should not be used, rather than leaving major decisions to classroom-by-classroom discretion," one wrote.

Frank Ranelli, superintendent of the Lower Merion School District, outside of Philadelphia, spoke to parents in March about the district's technology policies. (Ron Stanford)

Not 'our best moment'

Lev, a cardiologist and professor at Thomas Jefferson University Hospital, said he's not opposed to technology. He consults for cardiology startups using AI and has taken the lead on AI use in his division at the hospital. But he and his wife realized that "kids are being exposed to a lot of screens, and we decided to try to reduce it at home."

In some ways, he represents many of the parents pushing for tech opt-outs. His children are young, and they're starting school at a time when Haidt, a social psychologist, has argued that cellphones and social media have harmed children's mental health. Lev's kids are also beginning their education after the pandemic, when parents are demanding more say over what's taking place in the classroom and data breaches have compromised student privacy.

"The image of technology in schools that's seared into every parent's mind is the lockdown version of technology. It wasn't our best moment," said Joseph South, chief innovation officer at the International Society for Technology in Education, which merged in 2023 with ASCD, a major curriculum organization.

Until the pandemic, Elyssa East, a New York City mom, was raising her son screen-free. That became impossible during school closures. Around the same time, she learned that he had some learning difficulties and would "really fall apart when it came to any instruction on the screen."

Online math programs like Zearn and IXL made him feel "defeated," she said, because they were assigned for remediation.

"Here is this technology that's supposed to help him, but it makes him feel even worse than a human teacher would," East said.

She eventually switched him to a private school. She has opted him out of math apps and he writes on an old electric typewriter.

"He likes that a lot," she said. Compared to a laptop, "it's a totally different experience."

Elyssa East's son, now in sixth grade, uses a typewriter at home to do his homework rather than a laptop. (Courtesy of Elyssa East)

'Caught in the crossfire'

Some teachers have no problem with scaling back technology.

Dylan Kane, a seventh grade math teacher in Lake County, Colorado, near Aspen, ditched Chromebooks in his classroom. Students, he wrote, are more focused, are completing more work and spend less time "fussing with logistics," like connecting to the internet or forgetting their Chromebook at home.

Like many parents, he was influenced by Horvath's work. In his 2025 book, the cognitive neuroscientist argues that the widespread use of classroom technology has left students distracted and unable to retain information.

But prior to January, Kane never had a parent request to opt their child out of using computers or specific software. Even during parent-teacher conferences this spring, his decision to ditch Chromebooks in class never came up.

"I work in a small, rural town that's relatively low-income, not a lot of college-educated parents. I think much of the tech backlash from parents is coming from the more-online, higher-educated folks," he said. He thinks trying to accommodate individual parents' objections would be tricky. "Teachers could be caught in the crossfire because they have to deal with district-mandated online programs and then potentially parent opt-outs."

South at ISTE+ASCD said he's heard plenty of "horror stories" about technology, like apps dominated by advertising and students spending class time "shooting aliens" on the screen. But those examples are often due to teachers using a program that was never vetted by their district or "some random kid who found a workaround," he said.

He and Richard Culatta, the organization's CEO, added that bills moving through state legislatures to limit screen time don't necessarily address parents' other concerns, like cyberbullying, protecting student data or improving the overall quality of instruction.

Many of the bills require paper worksheets to be used instead of technology, said Culatta, who quipped that he often feels like he's in a "time warp."

"There's no quality indicator," he said. "You could literally take any garbage worksheet and it would be fine."

'Rapid innovation'

Opt-out requests have forced districts to be more thoughtful about how they use technology. 

The Worcester Public Schools in central Massachusetts is like a lot of districts. It went through "a period of rapid innovation and tech acquisition" prior to the pandemic to make sure "teachers and students had the tools needed to be future-ready," said Sarah Kyriazis, director of the district's Office of Innovation.

Schools added even more ed tech tools during COVID lockdowns for remote and hybrid learning. Now some parents are questioning those decisions at a time of "national concern about data, privacy, security and screen time," she said.

The district's school committee has so far declined to allow parents to opt out of ed tech programs. But Kyriazis is collecting feedback from teachers on the apps they feel are most important for instruction. The goal, she said, is to whittle down the amount of data sent through online platforms to third-party vendors. Principals and teachers, she said, should be able to "speak with parents about each app and its purpose in the classroom."

Further west, the Northampton, Massachusetts, district is accommodating opt-out requests from about 12 parents. To do so, teachers must come up with activities that allow students to learn from the same curriculum as their peers "without using the disputed programs," said Superintendent Portia Bonner.

Laura Carney Erny, who has a second grader in the district, hasn't tried to opt her son out of tech yet, but she's thinking about it for third grade. Even learning which programs the school used took "months of back-and-forth emails" with teachers and administrators, she said.

Parents say they don't want to further complicate the lives of teachers, especially those who lack classroom aides. Northampton lost aides in 2024 who were paid with temporary COVID relief funds.

鈥淚 don’t blame teachers for relying on tech because it’s an easy thing to do,鈥 she said. 鈥淪ome of these programs help keep the kids in their seats.鈥

In the Los Angeles Unified School District, former teacher Kate Brody is among those who have opted their children out of practice sessions on i-Ready, now the subject of a lawsuit over student privacy. She decided the program was a problem when her first grader couldn't tear himself away from the screen to use the bathroom and started having accidents.

"I used to teach full time," she said. "I definitely don't want to create a world where we're asking teachers to do multiple lesson plans and monitor half the class on the computer and do analog lessons for the other half."

It's unfair to teachers to field opt-out requests every year, she said. That's why, as a board member for Schools Beyond Screens, an advocacy group of parents and educators, she backs a proposal that calls for limits on the use of technology for all students, especially in the early grades. The board will vote on the plan April 21.

"Right now," she said, "it's the Wild West."

Schools Are Paying for Ed Tech That Students Never Use – Could A New Contract Model Change That?
/article/schools-are-paying-for-ed-tech-that-students-never-use-could-a-new-contract-model-change-that/
Mon, 06 Apr 2026 16:30:00 +0000

When school districts sign contracts for educational technology, they typically buy a set number of licenses. The software company delivers the product and the district cuts a check. Whether students actually benefit, or even use the tools, doesn't factor into it.

Over the past few decades, that arrangement has generated a growing tension among parents and educators, who have begun questioning the value of those investments.




But a new kind of funding scheme may turn that dynamic on its head: A new report finds that a different approach to buying classroom technology may not only be workable but, in many cases, produces results that traditional contracts don't. Called outcomes based contracting, the model ties what companies get paid, at least in part, to whether students actually learn.

The findings, from two nonprofit groups, also come as school budgets are tightening after COVID relief funds dried up and district leaders find themselves under growing pressure to justify spending.

The report examined a group of school districts piloting an outcomes based model. It found that the arrangement offers a new way to determine whether tech is actually working for kids, since it dictates that a portion of vendors' payments depends on meeting a set of agreed-upon student benchmarks. If students don't reach them, vendors don't collect the full contract amount.

But the model also builds in a layer of shared accountability: Districts must commit to making sure students use the tools at the levels, or “dosage,” necessary to produce results.

Brittany Miller, the center's executive director, said that forces everyone to take implementation seriously.

"What this model does is it tells everybody across the ecosystem: 'Prioritize this,'" she said. "You have to get to this level of implementation integrity, which translates into dosage, in order to actually have a meaningful experience for a student."
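The payment mechanics described above can be sketched as a simple function. This is a hypothetical illustration only: the guaranteed share, the per-student benchmark split and the field names are invented for the example, not drawn from the pilot contracts.

```python
def vendor_payment(contract_value, base_share, students):
    """Illustrative outcomes based payout split (hypothetical terms).

    A guaranteed base is paid regardless; the remaining "at-risk"
    portion is earned per student who hit the agreed benchmark,
    judged only among students the district actually got to the
    required dosage.
    """
    base = contract_value * base_share
    at_risk = contract_value - base
    # Vendors are judged only on students who received the required
    # dosage; keeping students at dosage is the district's side of
    # the shared accountability.
    dosed = [s for s in students if s["met_dosage"]]
    if not dosed:
        return base
    hits = sum(1 for s in dosed if s["met_benchmark"])
    return base + at_risk * hits / len(dosed)


# Example: a $100,000 contract with 60% guaranteed. Two of three
# students met dosage; one of those hit the benchmark, so the
# vendor earns half of the $40,000 at-risk portion.
roster = [
    {"met_dosage": True, "met_benchmark": True},
    {"met_dosage": True, "met_benchmark": False},
    {"met_dosage": False, "met_benchmark": False},
]
print(vendor_payment(100_000, 0.6, roster))  # 80000.0
```

A split like this also shows why the model obligates districts to keep students at dosage: a vendor whose product simply goes unused still collects the base, and the district remains on the hook for that payment.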

Kids 'not getting the dosage they need'

Before looking at whether a tech product improves student outcomes, Miller said, there’s a more basic question that districts rarely ask: Are students using these tools at all?

The answer is often "no."

The report found that more than 65% of purchased ed tech licenses typically go unused, with school districts paying full price for products that sit idle. But districts participating in the outcomes based pilot met dosage requirements for as many as 95% of students. Overall usage rates were typically 10 times higher than under traditional contracts.

"We talk a lot about dosage, and kids not getting the dosage that they need," Miller said. "And that, to me, is a proxy for being a responsible consumer of tech: Are our kids actually using it in a way that will drive outcomes?"

Miller said part of what drives the usage shift is that both districts and vendors share a direct financial stake in students actually using the products. Under the model, if a student falls behind on usage, the district must find out why and get that student back on track. If it doesn't, there's a record of that, and the district is on the hook for payments even if the student's achievement didn't improve.

Brittany Miller

It's only fair in cases like these, she said. "The provider wasn't able to prove that their product worked because kids didn't actually use it."

Beyond usage statistics, the report found that districts in the pilot reported greater instructional coherence. Technology was being used with more intention tied to specific learning goals rather than as a general add-on to existing lessons. And teachers were more deliberate about how they integrated tech into their instruction.

Miller, who formerly led large-scale tutoring implementation in Denver Public Schools, said she has sat in classrooms and watched students working with these products, typically supplemental literacy and math tools. She said many of them can make a difference, but only if used properly. 

鈥淲e’re talking about technology that has the ability to help students pronounce words correctly, support their fluency and break down words for them,鈥 she said. 鈥淚n mathematics, we’re talking about students using technology to really try different ways of solving problems and getting them exactly what they need in the moment.鈥

The report also found that tech companies benefited from the model in unexpected ways: Because outcomes based contracts require detailed, real-time data on how students are using a product, companies got access to information about their tools' effectiveness that most standard contracts never generate.

Fewer tools, better results

Perhaps most counterintuitively, the report found, districts that rely on outcomes based contracting actually end up buying fewer tech products.

That's because the process of building such a contract requires district leaders to clearly define what problem they're trying to solve, what success looks like and whether a given product is actually the right tool for the job. That level of scrutiny, said Miller, produces a kind of natural audit.

鈥淲e’ve seen in a lot of districts as they’ve taken this on, the number of ed tech tools they’re purchasing just [goes] way down at the district level,鈥 she said.

In one district, Miller said, officials found they'd purchased licenses for more than 1,000 tools. As they examined the list they said, "If there is not a clear reason and purpose that we're using this in the classroom that's actually driving student learning, then we're not going to pay for that tool anymore."

She added, "It just shifts the mindset of the system to really say, 'Let's look at what we're purchasing more carefully, figure out what is and isn't working, and start to cut down on the noise.'"

The center, based at a nonprofit foundation, grew out of research conducted at Harvard University under economist Tom Kane, who in 2021 convened a small group of tutoring providers and school districts to examine whether outcomes based contracting, already used in healthcare and workforce development, could be adapted for K-12 education.

The project eventually moved to the foundation, with Denver among the early participants. Miller was a district leader at the time and got involved in the work that Denver was piloting on tutoring. 

As of February, Miller's center had worked with 87 education institutions, ranging from school districts to state education agencies, and tracked results for more than 63,000 students.

In addition, six states – California, Texas, Florida, Arkansas, Indiana and Louisiana – have launched initiatives around the model. Together they represent more than 28% of total U.S. K-12 education spending, constituting a potentially fundamental shift in how schools spend money. That shift, Miller said, could have a huge impact on children's achievement if educators are asking the right questions.

鈥淭here’s a student at the end of the day that’s being served by this,鈥 she said. 鈥淗ow are you really humanizing their lived experience in the classroom and making sure that they’re achieving the outcomes that we know they’re able to?鈥

California Agency Fines Company For Violating Ed Tech Privacy Law
/article/california-agency-fines-company-for-violating-ed-tech-privacy-law/
Thu, 05 Mar 2026 19:30:00 +0000

This story was originally published by another news outlet and is republished here.

Before they could attend school football games or school plays, high school students across California had to give their personal information over to a ticketing platform, GoFan, which then sold that data to advertisers, state privacy regulators said. The parent company, PlayOn, which has contracted with roughly 1,400 California schools, repeatedly violated state privacy law in 2023 and 2024, according to a January order filed by the state's privacy protection agency.

The California Privacy Protection Agency, sometimes known as CalPrivacy, announced the order Tuesday, saying it is fining PlayOn $1.1 million for failing to give students and families a way to opt out of their data collection.




PlayOn offers a slew of online products that coordinate ticket and merchandise sales for schools and youth sports organizations, along with other services, such as fundraising and streaming. Its subsidiaries include GoFan, MaxPreps, and NFHS Network, which are used by school districts stretching from Los Angeles and San Diego to Modoc, Mono, and Sierra counties, the order says. The company's annual gross revenue is over $26 million.

When users tried to access tickets for school events through one of PlayOn's platforms, GoFan, a pop-up appeared, prompting the ticket-holder to agree to the company's privacy policy, which allowed the sale of personal data. There was no way to say no, the order said: The pop-up obscured the screen so that it was impossible to access the ticket without agreeing to the company's terms.

"Students trying to go to prom or a high school football game shouldn't have to leave their privacy rights at the door," said Michael Macko, CalPrivacy's head of enforcement, in a statement. "You couldn't attend these events without showing your ticket, and you couldn't show your ticket without being tracked for advertising. California's privacy law does not work that way. Businesses must ensure they offer lawful ways for Californians to opt out, particularly with captive audiences."

PlayOn "does not admit liability for any violation" of state law, according to the disciplinary order, which effectively functions as a settlement agreement. The order also notes that the company significantly changed its privacy policy in December 2024, allowing users to opt out of data collection and bringing the company into compliance with the state law. These data privacy matters have been "fully resolved" since then, said James Dickinson, the company's senior vice president of marketing, in an email.

The fine is the first time that the state privacy agency has gone after a company for violating the rights of students and schools, according to the press release. The agency formed in 2020 when voters backed a ballot measure calling for increased enforcement of data privacy laws.

Exceptions to California's privacy law

California has some of the strongest data privacy laws in the country, including a landmark 2018 law that requires large for-profit companies to give users a relatively easy way to opt out of data collection or delete their data.

Enforcing the law can prove tricky, though. Last year, an investigation found that more than 30 companies made it difficult for customers to exercise their privacy rights. While the companies were technically abiding by the law, which requires them to give customers a way to delete their information, they used special code to hide that information from Google search results.

The 2018 law also has a number of exceptions, including for non-profit organizations and for companies that buy, sell or share data from less than 100,000 California residents or households.

The state privacy agency is responsible for enforcing the law. In the past 12 months, the agency has found violations by a menswear company, a rural supply retailer and an automaker, each resulting in fines ranging from $345,000 to $1.35 million. In January, the state said it had fined Datamasters, a data broker, for selling the names, addresses, phone numbers, and email addresses of "millions of people with Alzheimer's disease, drug addiction, bladder incontinence, and other health conditions for targeted advertising." The broker also traded data on individuals' perceived race, political views and banking activity.

California has additional protections regarding the collection and sale of students' data, but those laws do not necessarily cover apps and services used outside of the classroom, even when that technology is a de facto requirement for participation in school sports or extracurriculars. An assemblymember from San Luis Obispo, a Democrat, introduced a bill this year that would expand the number of tech companies that must abide by California education privacy rules, but reporting last month found the laws could still leave out many popular student services.

PlayOn did not respond to questions about its compliance with California school privacy law. The PlayOn privacy policy says it doesn't collect personal information from "minors under the age of 16 without proper consent," but it doesn't mention anything about students who are age 16 or 17.

California law prohibits companies from selling K-12 students' data, regardless of their age.

This article was republished under license.

SXSW EDU Cheat Sheet: 26 Sessions for 2026
/article/sxsw-edu-cheat-sheet-26-sessions-for-2026/
Thu, 05 Mar 2026 11:30:00 +0000

South by Southwest EDU returns to Austin, Texas, running March 9–12. As always, it'll offer a huge number of panels, discussions, film screenings, musical performances and workshops exploring education, innovation and the future of schooling.

Keynote speakers this year include Monica J. Sutton, creator and host of the children's education series Circle Time with Ms. Monica; Yale psychology professor and Happiness Lab podcast host Dr. Laurie Santos, appearing alongside Common Sense Media's Bruce Reed; and bestselling author Jennifer B. Wallace, whose work centers on the human need to feel valued, and to add value.




Also featured: former Presidential Science Advisor Arati Prabhakar, who will join a panel on "moonshot" thinking and the future of AI-driven learning. And a new documentary traces the career of longtime Sesame Street star Sonia Manzano.

Artificial intelligence plays a bigger role this year than ever. Dozens of sessions examine AI's expanding role in classrooms, from adaptive tutoring and authentic assessment to teacher burnout, algorithmic bias and what it means to be literate in an age when machines can write, reason and create.

This year, the Austin Convention Center, which typically hosts the event, is under construction, so sessions will be held at four venues around downtown Austin. Organizers are also planning a "SXSW EDU Clubhouse" at a historic venue, which will host daily performances, keynote livestreams and social events each night.

Because of the event's multiple venues, space may be limited, so organizers recommend booking reservations for keynotes, featured sessions and workshops. They've provided an online guide with details.

To help guide attendees, we've scoured the 2026 program to highlight 26 of the most significant presenters, topics and panels:

Monday, March 9:

9 a.m. – Researchers, district leaders and family engagement specialists examine the chronic absenteeism epidemic that has left millions of American students disconnected from school since the COVID pandemic. This panel presents the latest data on what is actually driving absenteeism, from housing instability and health crises to school climate and whether students feel they matter. It'll explore which interventions are producing genuine, sustained improvement.

11 a.m. – This panel presents evidence that score inflation on standardized tests, state-level proficiency standards and the federal retreat from accountability are making it harder than ever for families to get an accurate picture of their child's true academic standing, and explores what policymakers can do about it.

1:30 p.m. – This Opening Keynote features Monica J. Sutton, educator, entrepreneur and creator of Circle Time with Ms. Monica, who traces her journey from preschool classroom to digital learning spaces reaching millions of families worldwide. Sutton challenges educators to evaluate every innovation through a developmental lens, asking: Does this technology honor how young children learn, grow and thrive, while protecting curiosity and connection?

2 p.m. – What do real students think about AI? How do they want to learn about it? This session, led by MIT Media Lab's Jaleesa Trapp and LEGO Education's Jenny Nash, explores strategies for building AI literacy through hands-on computer science that fosters critical thinking and ensures safe, responsible AI use.

2 p.m. – Civics teachers, researchers and policy advocates will examine how teachers are navigating the nearly impossible task of teaching democracy, elections and civic participation in classrooms where students and families often hold deeply opposed political views. The panel shares new findings from America's Promise Alliance's State of Young People research and explores strategies for creating classrooms where hard but evidence-based conversations happen productively, and where students develop the civic skills needed to participate in and repair a fractured democratic system.

4 p.m. – Child development experts offer a science-backed framework for evaluating AI for young learners without compromising the play, exploration and human attachment that are foundational to healthy development. This session offers an "urgent exploration" of AI's impact on brain architecture and what educators, parents and policymakers must know to protect young minds.

4 p.m. – A panel of educators explores the causes of low student engagement, absenteeism and cheating, sharing classroom-tested solutions for creating assignments that are cheat-resistant by design. Rather than relying on cheat-detection software and pedagogy that punishes students for cheating, panelists will share how to foster a culture of academic integrity based on student agency, purpose and ownership of learning.

4 p.m. – In this featured panel, Rep. Jim McGovern (D-Mass.), Chef Ann Foundation CEO Mara Fleishman, University of Pennsylvania student Maya Miller and Duke World Food Policy Center Director Norbert Wilson make an evidence-based case that school nutrition is an educational issue, not merely a logistical one. Panelists connect chronic hunger and poor nutrition directly to cognitive function, attendance, behavior and academic performance, and present district-level models that have transformed school meals into assets for learning.

Tuesday, March 10:

9 a.m. – This featured session stars Roya Mahboob, CEO of the Digital Citizen Fund, who will draw on her experience growing up in Afghanistan to trace how exclusion compounds across the pipeline from K-12 classrooms to corporate boardrooms. Mahboob offers evidence-based interventions that have demonstrated real impact on girls' participation and persistence in tech, as well as a vision for education that is inclusive, practical and full of possibility.

9 a.m. – A candid discussion on the science, ethical considerations and implementation challenges of using Voice AI for assessment in K-12 classrooms. Learn what's promising, what's problematic and what's on the horizon as experts explore how Voice AI differs from other AI tools, such as large language models (LLMs), and how it can be integrated in ways that truly support students and educators.

12:30 p.m. – In this keynote, Bruce Reed, Head of AI at Common Sense Media, and Dr. Laurie Santos, Yale psychology professor and host of The Happiness Lab podcast, examine how rapidly evolving AI technologies and social media are shaping young people's mental health, and how families, educators and policymakers can respond. They explore the science of well-being, the risks of algorithm-driven systems and common-sense guardrails to protect young minds.

2 p.m. – This panel challenges the deficit framing that has long defined how schools, families and students themselves understand dyslexia. In an interactive session, a think tank-style panel will present a strength-based model of dyslexia support and examine how AI tools are beginning to unlock academic access for students whose abilities have been systematically undervalued.

3 p.m. — : Director Anna Toomey's feature documentary tells the story of five mothers determined to establish the first public school in New York City for children with dyslexia. Toomey follows their battle to open the South Bronx Literacy Academy, addressing a learning disability that affects about 20% of the population. A post-screening discussion connects the film's themes to national debates about reading instruction and equitable access.

4 p.m. — : As chronic absenteeism reaches historic highs, schools are doubling down on academics, interventions and incentives. But they may be missing the underlying emotional and psychological factors driving absenteeism: stress, anxiety and lack of belonging. This session looks at how rest, youth voice and choice, and emotionally safe environments can re-engage students.

5:30 p.m. — : Director Ernie Bustamante's feature-length documentary offers a portrait of Sonia Manzano, the trailblazing actress who played Maria on Sesame Street for 44 years. A conversation with Manzano herself follows the screening, exploring how public media can reach children when formal schooling often fails, and what Sesame Street's legacy means in the age of AI-generated children's content.

Wednesday, March 11:

10 a.m. — : This performance offers an early look at a show in development that began as a teacher performance at a school meeting. In this Hamilton-meets-The Sound of Music-meets-Good Night and Good Luck story, set against today's culture wars, three high school students and their teachers navigate questions of identity, purpose and what school can and cannot teach. A Q&A with Peter Nilsson, the show's creator, follows the performance.

11 a.m. — : This solo session by Toby Fischer, an Ohio educator, offers a sweeping reimagination of literacy for the 21st century, arguing that reading and writing instruction must now encompass the ability to critically evaluate AI-generated text, recognize the hallmarks of synthetic content, prompt AI systems effectively and understand the social and ethical contexts in which AI-generated language circulates.

12:30 p.m. — : This keynote by Adeel Khan, Founder & CEO of MagicSchool AI, makes the case that teacher expertise, relationships and professional judgment must guide technological change. Drawing on his experience building the popular platform, Khan will share unfiltered insights on what's working and what's not, offering a framework for evaluating AI tools through the lens of educator agency.

2 p.m. — : This panel examines why so many school AI initiatives rely on tools that "just aren't there yet." Panelists share case studies of implementations that stumbled, the lessons of those failures and the educator-driven, grassroots efforts that can move schools from dabbling with AI tools to using them for real instructional transformation.

Thursday, March 12:

10 a.m. — : This featured panel convenes former Presidential Science Advisor Arati Prabhakar, Renaissance Philanthropy President Kumar Garg, Carnegie Learning VP of R&D Jamie Sterling and Bezos Family Foundation Chief of Staff Eden Xenakis to explore how bold learning goals can accelerate AI-driven innovation in education. They'll examine how "moonshot-centered" models can rally diverse innovators around a shared outcome and catalyze the funding needed to scale breakthroughs.

10 a.m. — : Dubbed the "toolbelt generation," more than half of Gen Z respondents in a recent survey said they're considering a skilled trade career. And schools are working to modernize career preparation, including by tapping immersive technology to expose students to in-demand skilled trades. This panel, moderated by The74's Greg Toppo, will discuss how we can harness tech to engage students in learning while preparing them to successfully meet workforce demands.

11:30 a.m. — : This session offers a ground-level counternarrative to AI anxiety, presenting a community college and workforce development partnership in Cleveland that is using AI-powered tools and training to open new economic pathways for adults who were left behind by earlier rounds of technological change. Speakers will examine what equitable AI adoption looks like in a post-industrial city and what conditions made the initiative work.

11:30 a.m. — : Leaders from higher education, industry and workforce policy examine whether universities are structured to produce graduates who can thrive in a labor market being remade by AI. The panel will ask which degrees and credential pathways are producing AI-ready graduates, where institutions are falling behind, and what structural changes will move the needle most.

11:30 a.m. — : Directed by Scott Barnett, this feature-length documentary follows bestselling author James Patterson to the front lines of America's reading crisis to examine how the Science of Reading — a vast body of evidence-based research — is changing how children are taught to read. A post-screening discussion with literacy researchers and classroom teachers will examine what the film gets right and what systemic change will actually require.

2 p.m. — : This workshop, conducted by two top officials with the Illinois-based Education Research and Development Institute, will offer practical AI tools that automate routine tasks, generate content, analyze data and simplify communication, freeing teachers to focus on students and strategy and reducing the risk of burnout.

2:30 p.m. — : This featured panel, with Martin McKay of Everway, Hello Sunshine CEO Maureen Polo and the Brookings Institution's Rebecca Winthrop, draws on a landmark report spanning 50 countries to explore what it means to protect children's cognitive, social and emotional development in an AI-saturated world. Speakers will move beyond the question of whether AI should be used in schools to ask how it can be designed to strengthen young people's capacity to think, relate and thrive.

10 Useful Tech Tools for Educators in 2026: A Practical Guide /article/10-useful-tech-tools-for-educators-in-2026-a-practical-guide/ Tue, 30 Dec 2025 11:30:00 +0000 /?post_type=article&p=1026398 As a tech explorer and author of the newsletter, I've tested more than 200 Ed Tech services this year in search of the 10 most useful teaching tools. The massive number of apps and sites clamoring for teachers' collective attention can be exhausting. So this guide is intended to help you gauge what's actually worth your time.

Each of these top 10 tools is valuable whether you're working with little kids, grad students, or learners in between. These services are all free to try, with paid upgrades available. I teach college and grad students, have two elementary school kids of my own and have worked with teachers at all levels for more than two decades. So you'll find here tools designed to enhance teaching at all levels.

Jeremy Caplan

Make Concepts Stick

Create a learning path

Pathwright is one of the best-kept secrets among teaching tools. It's a well-designed, simpler alternative to complicated learning management systems like Blackboard or D2L, and it's more elegant and flexible than Google Classroom. Rather than giving students dozens of menus to choose from, Pathwright lets you create a simple learning path for students to follow one step at a time. You can create a path with a few steps for guided independent learning, or set up a full online course that's easy to navigate. Any step you create can include a reading, video, activity, assessment, embed or any other interaction. The learning paths provide an easy way to guide students toward learning objectives. It's a visually delightful alternative to clunkier systems.

Visual thinking and collaborative whiteboards

When Google shut down Jamboard and Microsoft discontinued Flipgrid, teachers went searching for lively alternative tools. FigJam came to the rescue. Digital whiteboards enable the kind of open-ended visual thinking that's invaluable whether you're teaching about historical networks, systems thinking, scientific processes or anything requiring students to explore connections and relationships. The platform is free for educators. FigJam also has new AI capabilities, allowing it to categorize student comments or transform a scattered brainstorm into an organized handout. You can even use FigJam for presentations. Unlike sterile corporate whiteboard apps, FigJam includes playful stickers, stamps and templates designed for teaching and learning — from icebreakers to built-in timers.

Inspire Curiosity

Make attractive slides

Replace PowerPoint or Google Slides with Gamma. You'll save time preparing slides, and they'll be more engaging for students. Create vertical, square or horizontal slides. You can import existing PDFs or PowerPoint slide decks. When you're done creating, you can export to Google Slides or PowerPoint. You can use Gamma without any of its artificial intelligence features, if you're an AI abstainer, or you can use Gamma's AI to jumpstart a new presentation instantly from an outline, text prompt or document you upload.

Unlike PowerPoint, Gamma makes it easy to embed live websites, videos or data visualizations inside your slides to make them stand out. You can even use Gamma to build simple websites, social posts or interactive lessons. 

Make interactive teaching materials

Genially is terrific for creating interactive lessons. Add clickable hotspots to any image, timeline, map or other visual. Student clicks reveal informational pop-ups, links or audio files. These hotspots transform static visuals — like simple maps or timelines — into engaging, exploratory learning elements. I've used Genially to turn my old handouts into resources with embedded audio. When students click on something, they hear my brief recorded explanation or anecdote. The free version works great for teachers. You can invite an unlimited number of students into your workspace for free, and like these other top tools, Genially is grounded in student privacy: it's FERPA, COPPA and GDPR compliant. While it takes a bit of experimenting to get comfortable with the interface, once you understand the basics, you can transform boring handouts into interactive learning materials that students actually want to explore.

Jeremy Caplan Walks Through Three Top Tech Tools for Educators

AI That Actually Helps

Build on your teaching materials

NotebookLM is a free tool from Google that lets you apply AI to any collection of documents. It's super useful for searching through your teaching materials, but also for strengthening and repurposing them. You can have 100 notebooks in a free NotebookLM account, and each notebook can have 50 sources in it. A source can be a PDF, Word doc, image, audio file, link or a Google Drive file. Each one can be up to 200 MB or 500,000 words. You can fit dozens of lesson plans, handouts, syllabi, slides, rubrics or even handwritten notes or voice recordings. NotebookLM makes everything instantly searchable and remixable.

NotebookLM's semantic search can find things in your materials based on level, topic, style or other characteristics that a simple Control-F search can't match. You can also use it to adapt teaching materials into new formats. Turn a dense reading into an engaging audio overview students can listen to, or transform a handout into a colorful infographic or slide deck. Students can create their own notebooks and generate flashcards and interactive quizzes to help with studying. They can also use the mind map feature to help visualize connections across topics.

You can create separate notebooks for each course you teach, or organize one for administrative tasks and another for curriculum development. NotebookLM works only from your uploaded sources — not generic web content — and provides citations so you can see the source of search results.

Your AI Teaching Partner

Claude is a general purpose AI tool, like ChatGPT, Gemini or Microsoft's Copilot. Where it excels for teachers, though, is in a feature called Claude Projects. You start by uploading your existing teaching materials — syllabi, lesson plans, handouts, slides, rubrics, notes or pictures you took of whiteboard diagrams — whatever you use for your teaching. You also provide a detailed set of instructions and context to guide your project. This might include the level of your students, your approach to project-based learning, how much time you typically have for lessons, what kinds of activities your students respond well to or any special learning needs they have.

You can then task Claude to be your critic and coach, pointing out blind spots in your syllabi, listing potential missing elements in your upcoming lessons or suggesting supportive materials you may want to create to supplement a particular part of your class.  

You can use Claude to help you make your lessons and materials more inclusive and accessible. It can help you adapt content for different skill levels or even translate materials into multiple languages. It can suggest concrete examples and analogies, give you alternative elements to consider adding to a rubric, or even point you to additional readings or research you might want to explore related to a subject you're teaching. It's the closest thing most of us will get to having an assistant, available 24-7 to support our teaching prep.

Spark Engagement

Add fun to learning

No other teaching tool generates as much classroom buzz as Kahoot, which turns quizzes into playful games. You can design your own questions or pick from a huge library of quiz games designed by other teachers. And now that Kahoot has an AI assistant built in, you can convert text from your handout or lesson into editable quiz questions.

What makes Kahoot especially engaging is the variety of question formats: Students can drop pins on images, fill in blanks, guess numbers or order items in a list. There's also dramatic quiz-show music that helps create a playful atmosphere. Students can play individually or in teams, so Kahoot works for both competitive and collaborative classroom cultures. Several alternative game-style quiz platforms offer fuller free plans for those on a tight budget.

Get students engaging and interacting

Padlets are digital bulletin boards where students can post comments, links, voice recordings or short videos. You set up a board with a topic or a template. You can start with a map, timeline, discussion thread or an image gallery. Students can then participate from their own device, adding their own notes, documents, images or comments. Or they can use the built-in recorder to add audio or video.

You can build a board as a live, collaborative activity or asynchronously. You can also use it as a teacher to guide students through teaching material or as a showcase for exceptional student work. 

I find Padlet useful for brainstorming, collecting student questions before class and building collections, like students' favorite songs, books or snacks.

It's so easy to use that most students can jump in without any training. Padlets are often used in elementary school classes, but I've also used them with graduate students and for mid-career training. It's one of the best tools for getting students building on each other's ideas, rather than passively consuming content.

Tame the Chaos

Organize your materials

Craft is a surprisingly useful, underrated tool for creating and organizing notes and documents. Use it to develop attractive lesson plans, student handouts, syllabi or collections of resources. You can organize materials into neat visual cards students can click to explore. Add text, images, links or tables to your documents so they look more elegant than Google Docs, Apple Notes or Microsoft Word documents. It's easy to share Craft docs with a link or export as PDF, and it's easier to use than Notion or other pro tools.

Craft also has a remarkably good mobile app, so you can actually use your phone or tablet to make notes or prepare documents. If you're drowning in teaching materials scattered across various apps, consider Craft as a new, flexible place to make, organize and share your docs.

Create free, useful surveys

Tally is the best free tool for making forms and surveys. Whether you're gathering feedback from parents and students or collecting information for trips, Tally forms are better-looking, more flexible and more powerful than Google Forms. They're just as easy to create in a few minutes. You can add images, videos or text before or between questions. You can use Tally to collect assignment submissions, create quizzes or handle RSVPs for events. The interface lets you start typing and add questions from a simple list — no complicated menus. You can make forms feel less bureaucratic than other boring survey tools and connect your forms to Google Sheets, Notion, Excel or whatever other tools you like so you can analyze responses easily. Based in Belgium, Tally follows strict European privacy rules. For educators who need to collect information regularly, Tally lets you quickly make professional-looking surveys without paying for expensive tools. Extra-fancy analytics require a paid plan, but the free tier will cover most of your teaching needs. I haven't yet needed to upgrade.

Bonus: Preserve Academic Integrity

Detect AI writing

Students increasingly lean on AI for homework help. Sometimes they're trying to make sense of something confusing, like a jargon-filled textbook diagram. On other occasions, they're using AI in more problematic ways: 84% of high school students say they're using AI to help with schoolwork, according to College Board, and about 85% of college students say the same. In some of those cases, students are using AI to avoid doing their own writing.

That's led some educators to look for new ways to discourage students from outsourcing their thinking. I wish we didn't have to resort to AI checkers, but educators are clamoring for them. If you're going to use an AI detector, Pangram is the most accurate. Its false-positive rates of around one in 10,000 are much better than the notoriously problematic early detection tools. From my perspective, Pangram can serve as a useful backup pair of eyes when you're overwhelmed with questionable submissions, or if you're just trying to identify the most egregious violations of academic integrity.
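To make that one-in-10,000 rate concrete, here is a back-of-envelope sketch of what it implies in practice. The rate comes from the article; the submission counts below are hypothetical, chosen only for illustration.

```python
# Back-of-envelope arithmetic on a 1-in-10,000 false-positive rate.
# The rate is the figure cited for Pangram; the workload numbers are made up.

FALSE_POSITIVE_RATE = 1 / 10_000

def expected_false_flags(num_submissions: int,
                         rate: float = FALSE_POSITIVE_RATE) -> float:
    """Expected number of human-written submissions wrongly flagged as AI."""
    return num_submissions * rate

# A teacher grading 150 essays per semester: ~0.015 expected false flags,
# i.e. roughly one wrongly flagged essay every 67 semesters.
per_teacher = expected_false_flags(150)

# A district screening 100,000 submissions per year: ~10 expected false flags.
per_district = expected_false_flags(100_000)

print(per_teacher, per_district)
```

The point of the arithmetic: at this rate a single teacher will almost never see a false accusation, while a large district screening everything still should expect a handful per year and plan a human-review step accordingly.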

Opinion: 2,739 Ed Tech Tools Later, Where Are the Outcomes? /article/2739-ed-tech-tools-later-where-are-the-outcomes/ Wed, 03 Dec 2025 13:30:00 +0000 /?post_type=article&p=1024367 Step into any school district today, and you'll see it: a dizzying maze of educational technology tools. On average, districts access 2,739 ed tech tools annually. Ed tech providers roll out flashy features, sometimes without clear evidence that they actually improve student learning. And yet, when results fall short, districts are left paying for products that don't deliver.

As districts navigate mounting financial pressures within a shifting K-12 funding landscape, the stakes could not be higher. The opportunity to invest in solutions that deliver outcomes has also never been greater.


The call is simple: Buy what works, build for impact and hold everyone accountable to outcomes. Recent research conducted by EdSolutions revealed critical insights about how to make this happen.

When contracts focus on results over bells and whistles, every dollar stretches further toward meaningful learning gains. The question is no longer "What can this program do?" but "What outcomes will my students achieve as a result?"

This is the moment to move beyond feature checklists and unclear expectations for dosage. Districts and providers alike must embrace outcomes-based contracting, an approach that puts student learning at the center.

It's not just about shifting financial incentives; it's about ensuring shared accountability for implementation integrity. Every dollar should drive measurable student gains, not just fund another tool. Districts must weigh evidence of effectiveness as heavily as price when assessing value. Providers must clearly define the conditions — professional learning, supports and implementation as designed — required to achieve results and ensure the product price reflects the full cost, including these conditions.

In today's crowded ed tech landscape, district leaders say they want to buy what works. A 2024 EdSolutions survey of more than 400 educators shows most rely on evidence tools — 60% cite EdReports, 47% What Works Clearinghouse, 37% Evidence for ESSA — when considering options. Yet when it comes to actual purchasing, our analysis of district requests for proposals found price, not evidence, still drives decisions.

Why? Because quality evidence is scarce. Of 14 widely used K-12 math and literacy products we analyzed, only four earned the highest ratings for effectiveness. With limited proof and tight budgets, districts default to comparing features and costs — $20 vs. $40 — rather than asking which tool actually helps students learn.
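A small sketch makes the sticker-price trap above concrete: once evidence of effectiveness is factored in, a $20-per-student tool can easily cost more per unit of learning than a $40 one. The effect sizes below are hypothetical, invented purely to illustrate the comparison, not drawn from the analysis the article describes.

```python
# Hypothetical cost-effectiveness comparison: price per student divided by
# evidence-backed learning gain. All effect numbers are made up for illustration.

def cost_per_gain(price_per_student: float,
                  months_of_learning_gained: float) -> float:
    """Dollars spent per additional month of learning per student."""
    return price_per_student / months_of_learning_gained

# A $20 tool with weak evidence (0.5 months gained) costs $40 per month gained.
cheap_tool = cost_per_gain(20.0, 0.5)

# A $40 tool with strong evidence (2.0 months gained) costs $20 per month gained.
pricey_tool = cost_per_gain(40.0, 2.0)

# The nominally pricier tool is half the cost per unit of learning.
print(cheap_tool, pricey_tool)
```

The design point is the denominator: shifting the comparison from dollars per seat to dollars per measured outcome is exactly the reframing the article argues outcomes-based contracting forces.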

Districts need to flip that script and push beyond price by asking: Does the evidence hold up in our context? Are the promised outcomes worth the investment? Providers need to shift the conversation by proving their products deliver results, not just bells and whistles. And funders need to step in to underwrite rigorous, independent studies that give the field the confidence it badly needs.

Buying the right product is just the first step. Without strong implementation support, even the best tools flop. Take a district that invests in a new math platform: It looks affordable on paper, but training is optional, usage is inconsistent and students don't get the required practice. Results stall, teachers grow frustrated and the district ends up paying for something that never stood a chance.

The evidence is clear. Researchers from Northwestern University found that when teachers receive even modest implementation support, student gains are dramatically larger than when products are used off the shelf. Yet too many providers treat professional learning and requirements to use products as designed as "extras" rather than essentials.

Implementation has real costs: time, resources and training to use tools as designed. Providers should be transparent about these requirements and build them into their pricing and messaging. If a product's effectiveness depends on dosage, training or fidelity, those elements aren't optional; they're part of the product itself.

Outcomes-based contracting transforms the provider-district relationship. By tying payments to student outcomes, districts must commit to implementing as designed, while providers must commit to delivering tools that actually work. Both parties have skin in the game.
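The payment mechanics of tying dollars to outcomes can be sketched in a few lines. The article does not specify a contract structure, so the base-fee-plus-at-risk-bonus shape and all dollar figures below are assumptions, shown only to illustrate how "skin in the game" works on both sides.

```python
# Sketch of one possible outcomes-based contract (OBC) payment structure:
# a guaranteed base fee plus a bonus released only if the measured outcome
# meets the contracted target. Structure and numbers are hypothetical.

def obc_payment(base_fee: float, outcome_bonus: float,
                measured_gain: float, target_gain: float) -> float:
    """Total payout: base fee always; the bonus only when the target is met."""
    if measured_gain >= target_gain:
        return base_fee + outcome_bonus
    return base_fee

# Provider quotes $60,000 guaranteed plus $40,000 at risk against a
# contracted target of 1.5 months of learning gain.
hit_target = obc_payment(60_000, 40_000, measured_gain=1.8, target_gain=1.5)
missed_target = obc_payment(60_000, 40_000, measured_gain=0.9, target_gain=1.5)

print(hit_target, missed_target)
```

Because the bonus hinges on a measured gain, both sides are pushed into the conversations the article lists next: agreeing up front on the outcome metric, the target population and the implementation steps required to reach the target.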

The OBC approach sparks the critical conversations that traditional contracts don't always surface:

  • What outcomes do we expect, and how will we measure them?
  • Who is this product designed for, and is that population similar to our target population?
  • What implementation steps are non-negotiable, and who is responsible for each?
  • What professional learning and time commitments are required?

Instead of retrofitting products for the wrong contexts, OBC clearly and strategically defines the outcomes and expectations upfront. Instead of hiding implementation requirements in the fine print, OBC makes them explicit and actionable. This goes beyond accountability for outcomes, creating a unique opportunity to improve both product design and teaching practice together by working through real-world usability challenges to achieve the product’s research-backed intent. It’s a win-win.

Budgets are tight, communities want results and funders demand proof. Traditional contracting rewards features and sales; OBC rewards outcomes. It's time to flip the script — and pay for what works.

Opinion: Ed Tech Can Unlock STEM Potential of Students With Disabilities 鈥 If It’s Funded /article/ed-tech-can-unlock-stem-potential-of-students-with-disabilities-if-its-funded/ Fri, 17 Oct 2025 12:30:00 +0000 /?post_type=article&p=1022087 Thirty-five years after the passage of the Americans with Disabilities Act, students with disabilities still underperform their peers and face ongoing barriers in education and employment. The most recent confirmation came Sept. 9, when the National Assessment of Educational Progress released 2024 science scores for eighth graders and math scores for 12th graders. Three-quarters of students with disabilities scored below NAEP basic, the lowest-performing subgroup.

An estimated 15% of Americans live with a disability. Yet fewer than 10% of students with disabilities pursue careers in STEM, even though their interest in science, technology, engineering and math matches that of their peers. That's a missed opportunity — not just for them, but for American businesses facing a growing talent shortage.

People with disabilities often bring unique skills to STEM, such as advanced digital literacy gained by using assistive technologies and innovative problem-solving abilities resulting from neurodiversity. Virgin founder Richard Branson, for example, credits dyslexia for influencing his out-of-the-box thinking on business issues. The founders of many successful American companies, such as Elon Musk and Charles Schwab, are neurodiverse. LinkedIn now includes “” as a professional competency, recognizing that workers who approach information in a unique way often create innovative solutions.

The question isn’t whether neurodivergent perspectives and disability can drive innovation 鈥 they do. The real question is whether America is investing enough to unlock that potential. Currently, disability innovations remain chronically underfunded. Unlike Big Tech, which draws billions of dollars in investment, accessibility and assistive technology rarely receive the funding they need to grow. Without that support, promising tools stall in research labs or small pilot programs, never reaching the students who need them most.

This funding gap creates a vicious cycle. Without adequate investment, assistive technologies can’t achieve the scale needed to drive down costs or demonstrate market viability. Private investors remain on the sidelines. Meanwhile, the disability community continues to face barriers that innovative solutions could remove.

Strategic investment by the federal government has played a significant role in providing seed money for disability innovations. One teletherapy company that received seed funding from the Institute of Education Sciences' Small Business Innovation Research program embodies this potential, growing to a team of more than 2,000 clinicians delivering more than 5 million therapy sessions across 7,700 schools in 45 states.

For too many children, months-long waits for speech-language therapy can delay critical progress at a pivotal stage of learning. A national initiative led by the University at Buffalo, with support from the National Science Foundation and the Institute of Education Sciences, is developing AI-powered tools that deliver near-real-time, interactive and personalized support. The project aims to help schools reach more students sooner, easing the strain of therapist shortages and long wait times.

Benetech uses federal funding from the Department of Education to innovate and expand educational access for students with disabilities. Through its digital library — the world's most extensive for people with dyslexia, low vision or blindness — readers can access more than a million titles in audio, large print and Braille. Today, Benetech is using artificial intelligence to make STEM and teacher-created materials more accessible, transforming complex content, including math equations, chemistry formulas and structural diagrams, into formats that can be read aloud.

From the SBIR program to the department's ed tech and innovation grants and various other initiatives, federal funding has sparked innovation in the disability sector, which struggles to secure sufficient financial investment. Many of these programs also fund research and pilot projects that explore how AI can improve educational outcomes — a key administration priority.

The president’s proposed budget presents a mixed picture: While it preserves the SBIR program, it redirects funding from the Office of Special Education Programs to states, reduces the National Institute on Disability, Independent Living and Rehabilitation Research鈥檚 budget and slashes the Institute of Education Sciences to roughly one-third of its prior funding level.

In addition, on Oct. 10, during the federal government shutdown, the administration laid off nearly the entire staff of the Office of Special Education and Rehabilitative Services, including the Office of Special Education Programs that administers some of these grants. Many of these cuts appear inconsistent with the stated goal of leveraging AI for educational innovation, given that much early-stage research and development depends on these funds.

Federal programs help identify key challenges in special education and have encouraged innovators to focus on them, connecting them with researchers and providing opportunities to build and test product concepts that are effective. However, federal funding alone cannot do this; private and philanthropic capital is also needed to diversify and broaden the pipeline.

There are some nascent signs of progress here. Some organizations are directing funds toward specific challenges — for example, initiatives that support projects using artificial intelligence to advance mathematical discovery. Some corporations and venture capitalists are investing in early-stage disability innovation funds, which aim to unlock the economic potential of individuals with disabilities. In the process, they also support innovations that have broader applications. By validating promising solutions and reducing investment risk, these early funders create pathways for later-stage investment, which in turn enables organizations to scale their work, reach a broader audience and achieve greater impact.

The payoff is clear: Millions of students with disabilities gain access to tools that unlock their learning potential, while the nation builds a stronger pipeline of STEM talent critical to economic growth and competitiveness.

]]>
Opinion: Using AI for the Master Schedule Can Meet Student Needs 鈥 and Save Money /article/using-ai-for-the-master-schedule-can-meet-student-needs-and-save-money/ Wed, 01 Oct 2025 12:30:00 +0000 /?post_type=article&p=1021444 America’s school districts, particularly in urban centers, find themselves caught in a near impossible situation. The emergency ESSER funds that kept districts afloat during the pandemic have disappeared, and enrollment has dropped by more than 1.2 million students since 2020. At the same time, classrooms are serving more students with disabilities and multilingual learners than ever before.

District leaders are now being asked to do the impossible: serve more complex student needs with fewer resources. The math doesn't add up. And if we rely on the old playbook of across-the-board cuts, it's our students, especially our most vulnerable, who will pay the price.


But there's hope. Right now, every district has access to a strategy powerful enough to save millions of dollars while improving, not sacrificing, student learning. The answer lies in the master schedule. And in 2025, it also lies in the responsible use of artificial intelligence optimization.

With 70% to 85% of district budgets dedicated to personnel, the master schedule is the single most powerful lever that leaders have. It decides how every teacher's time is used, how every classroom is filled and how every dollar is spent.

Yet too often, scheduling is treated as a yearly administrative headache. Old schedules get "rolled over," locking in inefficiencies. The result? Some classes run half empty while others are overflowing. Teacher workloads vary wildly, fueling burnout. Budgets are strained, and reductions in force quickly follow.

In our current reality of decreasing budgets, districts can't afford to keep scheduling on autopilot.

In our districts, Lubbock Independent and Edgewood Independent, we faced the same brutal reality: declining enrollment, shrinking budgets and growing numbers of students with specialized needs. On paper, the choices looked bleak. Our communities braced for layoffs and program cuts that would inevitably fall hardest on students and teachers.

We refused to accept that.

Instead, we both turned to the master schedule and used AI-optimization tools to transform this process across our schools. We analyzed every teacher’s workload and class sizes across schools and subjects, revealing significant variations that weren’t aligned with student needs.

By leveraging AI, we were able to build schedules that rebalanced class sizes and workloads and resulted in significant savings: $2.2 million across 14 schools in Lubbock, $1.05 million districtwide in Edgewood.

In Lubbock, the program supported a transition to consolidate schools in our west Texas district. As we did a deep dive, we realized we did not have enough students to sustain fine arts and career-and-technical programming in all our middle schools. We made the difficult decision to close a school with declining enrollment starting this fall.

In Edgewood, which serves San Antonio's west side, we faced overcrowded core classes and under-enrolled electives. By using AI-optimized scheduling, we rebalanced class sizes, shared staff more effectively across campuses, and absorbed changes through normal turnover.

In neither district did we lay off a single teacher; every staffing adjustment was a result of normal turnover and retirements. We were able to reinvest those dollars into new academic priorities that directly benefit students.

When we first turned to AI to help build our schedules, we quickly saw just how complex the task really was. A single high school schedule can involve over a million possible combinations of teacher, student and period placements, and not all of them serve students equally well. In the past, we'd spend weeks trying to reconcile course requests with teacher assignments and room availability, only to end up rolling over parts of last year's schedule because we ran out of time.

AI changed that. It analyzed thousands of possible scheduling scenarios, optimized for our goals and showed us solutions invisible to even the most experienced scheduler. Our teams were freed up to think strategically to ensure that our schedules aligned with school and district priorities while meeting the needs of our students.
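The optimization engines districts use for this are proprietary, but the core idea of "rebalancing" described above is easy to sketch. The toy Python below is purely illustrative (the `rebalance` helper and the section names are invented, not any vendor's API): it assigns each student to the currently smallest section, one of the many constraints a production scheduler would optimize jointly with course requests, teacher certifications, rooms and periods.

```python
# Toy sketch of one scheduling objective: balanced section sizes.
# Real AI schedulers weigh many constraints at once; this shows only
# the rebalancing idea described above.

def rebalance(section_names, students):
    """Greedily place each student into the currently smallest section."""
    rosters = {name: [] for name in section_names}
    for student in students:
        smallest = min(rosters, key=lambda name: len(rosters[name]))
        rosters[smallest].append(student)
    return rosters

rosters = rebalance(["Alg1-A", "Alg1-B", "Alg1-C"],
                    [f"student{i}" for i in range(31)])
sizes = sorted(len(r) for r in rosters.values())
print(sizes)  # [10, 10, 11] -- no section more than one student larger
```

Instead of one section overflowing while another runs half empty, the 31 students land in sections of 10, 10 and 11; the hard part in practice is doing this across every course and period at once.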

For the first time, we weren't defaulting to a legacy schedule. We were making deliberate, student-centered choices at a speed that simply wasn't possible before.

Our experiences aren’t isolated success stories. They represent a larger shift in how districts can approach resource allocation during times of fiscal uncertainty. But to make this shift, leaders need tools far more sophisticated than what most districts currently rely on.

As researchers at the at Columbia University have noted:

“The master schedule, an undoubtedly strategic tool, gets treated as a logistical one. This has disastrous consequences for students because it (1) masks the weight of the choices at hand, and (2) limits what is possible. In every case, the shift from technical to strategic scheduling was accompanied by a shift from limited to more sophisticated tools. As schools and systems sought to do more with their schedules, they stumbled over difficult-to-use tools and were pushed to find alternatives.”

Current solutions (clunky SIS schedulers, messy spreadsheets, whiteboards, magnet tiles) simply aren't enough for the complexity of today's challenges.

We know AI often raises skepticism in education, with fears about replacing teachers or removing authentic classroom engagement. But scheduling is different. It's not about replacing human judgment; it's about freeing up scheduling teams to focus on higher-level decisions about how our schedules serve our students.

If there is one place where AI belongs in education right now, it's here: in the master schedule.

In 2025, the question isn't whether districts can afford to invest in better master scheduling; it's whether they can afford not to. For districts serious about navigating financial setbacks while preserving quality educational experiences, the master schedule may be their most powerful untapped resource.

Visibility is everything in a time of uncertainty. When leaders see clearly how resources are allocated and understand the impact of scheduling decisions on both budget and student outcomes, they can make choices that truly serve the needs of their communities.

We've lived this transformation. Our message to our colleagues is simple but urgent: Don't approach scheduling this year the way you always have. Treat it as your chance to build stronger, more sustainable schools for the students and communities we serve. The master schedule is not a burden; it's a blueprint for success.

]]>
School Systems Are Remaking the Old Yellow Bus into a High-Tech Machine /article/school-systems-are-remaking-the-old-yellow-bus-into-a-high-tech-machine/ Tue, 30 Sep 2025 16:30:00 +0000 /?post_type=article&p=1021393 This article was originally published in

KANSAS CITY, Mo. — A transplant from Miami, Anallive Calle learned her way around Kansas City from behind the wheel of a big yellow school bus.

The tablet near the dash provides turn-by-turn directions to every stop and checks each kid on and off the bus throughout her route. It鈥檚 helped her navigate the narrow roads and one-ways that stretch through one of the city鈥檚 oldest neighborhoods.

And from her phone, she can check on the status of her own son and whether he made the bus each morning and afternoon.


"So it's transparent all the way," she said. "You know when your child is picked up and where they're at every moment."

Last school year, Kansas City Public Schools started a new transportation contract with Zum, a company that provides busing services for districts across the country.

With the new vendor, drivers welcomed updates like air conditioning and tinted windows that keep the new fleet comfortable. But they also were given a suite of new technology, a main driver of the 15,000-student urban school district's decision to ink a $100 million, 5-year contract with Zum.

Aside from navigation, the buses are loaded with live cameras inside and out. Checking in at the tablet allows parents to track their kids and schools to get a headcount on that day's breakfast and lunch. From the bus barn's dispatch office, a large screen shows the location of each bus, its exact speed, whether it's running on time, and even the driver's rating from parents.

Derrick Gines, a Zum driver and safety trainer with 10 years of experience, said the technology built into today's buses makes drivers and students safer.

"Versus yesteryear, they were designed for freight, human freight," he said. "But now, there's so much safety wrapped around this thing."

While the iconic yellow buses might look like those of yore, school systems big and small are increasingly investing in a new wave of on-board technology.

New software programs monitor engine components, alerting transportation departments to maintenance needs. Other tools create optimal routes, saving on fuel, staff and bus costs. Turn-by-turn navigation and student manifests help ensure that no driver is lost and no kid is left behind. And live video feeds can help with student behavior issues, even allowing a school principal to speak to students on the bus in real time, in some cases.

This newfangled technology is a stark contrast to the machinery and aesthetics of the yellow bus, which have remained largely unchanged for decades, said Ryan Gray, editor-in-chief of School Transportation News, which covers the industry.

"Even when you walk onto a school bus, it still looks the same," he said. "But the inner workings have just completely changed. All of the advanced electronics in it, the wiring to make all of this technology work, whether it be the hardware or the software, it's grown by leaps and bounds."

Schools see some of these technologies as intuitive progress: Technology has reshaped many other facets of public education, while many bus drivers were stuck with paper maps and CB radios. But with the rise of new technology comes new risks, and some advocates are cautious about the security of all the data flowing through yellow buses.

A booming market of vendors and limited regulations on bus tech have given more responsibility to school IT and transportation departments. But Gray said most school districts are embracing these new tools, if they can afford them.

"It always comes down to money," he said. "I think that if they think they have the money, they're going to want to buy this stuff."

School systems and tech companies say these tools can improve student safety, create efficiencies and help alleviate the chronic shortage of bus drivers.

"It's a huge recruiting tool," said Jason Salmons, transportation director for Bentonville Schools in northwest Arkansas.

Bentonville contracts with Transportant, a Kansas-based company, to equip its buses with new camera and tracking technology. Salmons said the navigation and student tracking provide peace of mind to drivers, who can easily traverse new neighborhoods. The seven live cameras on each bus also provide security if an incident arises.

About 13,000 of the district's 20,000 students ride buses across 135 daily routes. In addition to an upfront cost, he said the school system pays a subscription of about $90,000 per year.

The software tracks not only every bus, but also every student's boarding and disembarking, even taking photos of the kids. If something happens, law enforcement can see where a child was and what they were wearing at dropoff, providing a "priceless" service, Salmons said.

With real-time tracking (much like a rideshare customer would see on their screen), parents and students view buses as more reliable, he said. With more precise pickup times, students don't wait outside in the cold as long, and older kids can even get a few more minutes of sleep, Salmons said.

"High schoolers use the app as their bible," he said.

Data privacy

Given the national driver shortage and parents' focus on reliability, Cassie Creswell understands the appeal of the new bus technology. But she has concerns about the growing loads of data being collected.

"It's a mixed bag on this stuff," said Creswell, the co-chair of the national Parent Coalition for Student Privacy, which advocates to protect student data.

That group has pushed to keep cameras out of classrooms, but hasn't taken a formal position on school buses, she said. Creswell, a parent of a Chicago Public Schools student, said the more data that is collected (such as GPS locations and video footage), the more opportunities for that data to be sold or illicitly .

"Are we actually clearing away stuff that you really shouldn't hold on to forever?" she asked. "We're so careless with student data, even very sensitive data, and we're very careless about the long-term protection of that data."

School systems interviewed by Stateline said their bus data is being securely stored separately from other student records and that data such as videos are routinely deleted.

Alan Fairless, a founder and chief technology officer of the tech provider Transportant, previously worked in building encrypted tech products.

He said the company doesn't sell any student data and encrypts the memory of each device, so someone stealing a tablet off a bus would have no access to its memory. The company was created in 2018 to tackle parent and school concerns about bus reliability and delays.

Fairless said he quickly learned many districts struggle with high driver turnover because of student behavior issues on board.

By providing multiple cameras that can be accessed live, he said, the company's product provides a new layer of support to drivers.

"Now, when something happens, they push a button and a dispatcher or principal is going to watch that bus in real time," he said.

Fairless said one school district has what it calls a seven-minute rule: When a driver reports an incident, a dispatcher aims to watch the video, figure out what happened and notify parents over text or phone call within seven minutes.

"The effect is, that video arrives to the parents, and now they know the real problem, and they know that before the student comes home and creates some other version of the story," he said. "So now, it's like the parents and the school district are working together to solve the problem."

Buses are lined up at the Kansas City Public Schools bus barn in Kansas City, Mo., between morning and afternoon routes. Zum, which operates the buses for the school system, has equipped its fleet with many high-tech features that are proving popular with drivers and parents. (Kevin Hardy/Stateline)

Since launching, the company has contracted with 88 school systems in 19 states to provide its all-inclusive tech suite that includes the app for families, on-board Wi-Fi, camera systems and routing services.

While prices can vary, school districts typically pay about $3,600 per bus up front and an annual subscription cost of about $69 per bus, said Jeff Shackelford, vice president of sales.

Changing parent demands

The addition of Transportant has helped keep parents informed in Oregon's Estacada School District, which sprawls across 750 square miles southeast of Portland.

"It's been great customer service for our families to just see, just like when someone orders an Uber, they can keep track of where their kid is at," said Maggie Kelly, a spokesperson for the school system of about 2,000 students.

Kelly said the district expects to make up some of its initial investment in the technology as it realizes savings from more efficient bus routes.

Parents are demanding more real-time information on bus times and locations, said Rick D'Errico, a spokesperson for Transfinder, whose products build more efficient bus routes and provide tracking apps for parents.

"If I can track a burrito order, why can't I track a bus?" D'Errico said. "Parents these days expect their districts to have ways to notify them on individualized ETAs and alerts for when their kid is on their routes, and not rely on schoolwide email blasts."

Recently, school districts in Alaska, Texas and Wyoming have launched the company鈥檚 apps, which are free for parents.

Such services can provide savings by cutting back on the number of drivers and buses in operation. But they also relieve pressure on dispatchers, who can be besieged with parent phone calls during disruptions or delays.

Since rolling out a new bus tracking app this year, the St. Johns County School District in northeast Florida has fielded far fewer parent calls.

That app is just the latest addition to a portfolio of advanced onboard technology, said Jonah Paxton, transportation fleet technology foreman at the district, which serves about 27,000 bus riders.

The 52,000-student school system intentionally purchased separate products for bus cameras, parent tracking and driver navigation. Paxton said that allows the school system to avoid getting stuck with a single provider that could demand higher prices in the future.

"We're not locked into a single sort of a walled garden of products, which gives us a lot more freedom to pick and choose which products we like, which ones we don't like, and gives us a little more negotiating power," he said.

To ensure security, the school system stores video files on its own servers rather than those of outside vendors, he said. The district has a specific video retention policy and it blurs out student faces if videos are ever requested under the state鈥檚 public records law.

Paxton said student and driver safety drives many of the tech decisions for the school's fleet of more than 300 buses.

"Buses are vastly different than they were even five, 10 years ago," he said. "I think many people who haven't ridden a bus in a while can think of the bus as sort of an unpleasant place, or kind of the Wild West of schooling, but they've really come a long way."

Stateline reporter Robbie Sequeira contributed to this story. Stateline reporter Kevin Hardy can be reached at khardy@stateline.org

is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Stateline maintains editorial independence. Contact Editor Scott S. Greenberger for questions: info@stateline.org.

]]>
Opinion: How a Federal Seed Money Program Has Powered Ed Tech Innovations /article/how-a-federal-seed-money-program-has-powered-ed-tech-innovations/ Mon, 25 Aug 2025 14:30:00 +0000 /?post_type=article&p=1019893 Fifteen years ago, Scott Laidlaw was a middle school teacher in Ogden, Utah, who created tabletop games using paper cutouts to keep his algebra class engaged. Seeing his students' interest in algebra surge, he wrote a proposal and received a federal grant to create a game in which students use mathematical concepts to advance a virtual civilization in ancient Mesopotamia.

Today, Laidlaw runs , which reaches over 60,000 students a day across 48 states with research-based curricula built around the original paper game concept. His is just one of dozens of education technology breakthroughs highlighted in "," our analysis for The Study Group of the impact of the Small Business Innovation Research program, run by the Education Department's Institute of Education Sciences.

Between 2012 and 2022, this quiet innovation engine invested approximately $92 million in education research and development, seeding innovations from AI-driven literacy tools to teletherapy platforms that help rural schools access speech and occupational therapy.


The returns are staggering. Products developed through this funding have reached more than 130 million students and educators nationwide, at a cost to taxpayers of roughly 70 cents per user. Each federal dollar invested generated nearly $9 in sales, investments or acquisitions.
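The per-user figure checks out against the rounded numbers cited above; as a quick back-of-envelope verification:

```python
# Sanity check: ~$92M invested (2012-2022) across 130M+ users reached.
investment = 92_000_000
users = 130_000_000
cost_per_user = investment / users
print(f"${cost_per_user:.2f} per user")  # $0.71 per user, i.e. roughly 70 cents
```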

But today, seed money that helps address real student needs is at risk. Layoffs and funding cuts at the department have frozen innovation grants. One startup had to let go of most of its team, stalling a promising project overnight. Another was forced to put off recruiting new team members, the type of holdup that can upend a young venture. When Fiscal Year 2025 awards were pushed back by months, delaying contracts, one company founder missed a planned school-year pilot window entirely, a costly setback in a fast-moving market.

This federal turbulence does real harm to these small businesses and to the students they are trying to help. That's unfortunate, because public seed funding helps make education innovation more accessible and rooted in real learner needs. And at a time of rapid advancement in artificial intelligence, it is crucial that innovations reach all students and meet varied needs.

In fact, while roughly 90% of startups fail, many SBIR-backed education ventures succeed where others fall short. A primary reason is that they often originate with teachers and academic researchers who deeply understand schools’ toughest challenges. An educator notices students struggling with fractions; a practitioner sees rural students who can’t access speech therapy; a researcher wonders if AI could help English learners master science vocabulary. The Small Business Innovation Research program supports early-stage innovations the private sector initially hesitates to fund.

For example, federal seed funds supported development and evaluation of an early version of , a teletherapy platform that addresses critical shortages of special education specialists. Presence now facilitates millions of virtual speech and occupational therapy sessions, as well as mental health counseling. The startup attracted over $70 million in investments, underscoring how strategic public funding can catalyze transformative private-sector growth.

Another example: Learning Ovations’ literacy platform , an evidence-based early reading rapid assessment to inform instruction. A2i, developed over a decade of federally funded research, uses algorithms to tailor lessons to individual student needs. With SBIR backing, the evidence-based platform expanded rapidly, and it was so compelling that publishing giant Scholastic acquired A2i to grow nationally. Today, thousands of classrooms use this technology to boost early literacy.

Beyond individual successes, SBIR has created an ecosystem of innovation with numerous spillover effects. Companies report spinoffs, entirely new product lines and deeper partnerships with schools and researchers. For many, the seed money didn't just launch products; it fostered their entire business.

Programs like this aren't charity; they're smart economics. They demonstrate how public dollars can be used effectively: not as ongoing subsidies, but as smart, initial investments that private markets scale and sustain. This approach is a blueprint for future investment in critical areas, such as using AI to personalize learning, prepare students for careers and equip educators with the tools they need to be effective.

As a former SBIR program manager and longtime educator and now applied researcher, we’ve witnessed firsthand what targeted, strategic public investments can achieve. Early-stage ideas by classroom teachers and scientific researchers can, with the right nurturing, transform education for millions. Small, well-placed bets can create powerful solutions, like games that engage students in algebra; data dashboards that provide real-time insights into whether students are succeeding or struggling, and why; and assistive technologies that accommodate the needs of individual learners, including those with disabilities.

In an era marked by skepticism toward government spending, the SBIR program stands out as a model worth celebrating. It empowers innovative small businesses to harness education's transformative potential.

As the nation debates how to spend limited education budgets, it's crucial to consider that if 70 cents per student can deliver these breakthroughs, the possibilities are limitless. The nation's students and educators deserve no less.

]]>
An Ed Tech Insider Pleads for More Equitable Tools /article/an-ed-tech-insider-pleads-for-more-equitable-tools/ Tue, 12 Aug 2025 14:30:00 +0000 /?post_type=article&p=1019314 As much as anyone writing about education technology today, Anne Trumbore has had a front-row seat for its development.

As a young person living in the San Francisco Bay area in the early 2000s, she stumbled into teaching at Stanford University's experimental , working with Patrick Suppes, an early innovator in ed tech. His work, reaching back to the 1960s, popularized the notion of computers as "automatic tutors," a vision now playing out with AI tutors from , and others.

Trumbore openly admits that she "kind of backed into this business," working in the entertainment field when she landed a summer job teaching writing at Stanford.


鈥淚 didn’t have preconceived notions of what technology could or should do in education,鈥 she said. 鈥淭he animating question was, 鈥楬ow can we use this tool?鈥欌

Trumbore eventually joined the Stanford-led team that launched and ushered in Massive Open Online Courses (MOOCs) in the 2010s. 

In her new book , Trumbore calls herself an "ensemble player in the transformation of online education from experimental and low status to 'innovative' and 'disruptive.' I have also helped make wealthy institutions, venture capitalists, and more than a few professors even wealthier," she writes.

Trumbore introduces readers to Suppes and to two other key ed tech pioneers of the mid-20th century: MIT's Seymour Papert and Don Bitzer of the University of Illinois.

Bitzer created , a groundbreaking, networked course distribution system that could educate up to 1,000 people at once. He and his team developed interactive touch screens and learning management systems, among other innovations. 

Papert, a South Africa-born mathematician who studied with the Swiss child psychologist Jean Piaget, applied the latter's groundbreaking work to education, popularizing the idea that children should learn about computers by programming them. A co-founder of MIT's Artificial Intelligence Laboratory, he created , a programming language for children that served as an inspiration for the computer language and countless young people's .

Despite all the hype surrounding new, AI-enhanced products, most can be traced back to these three pioneers and their teams, Trumbore said. "We make the same discoveries about online education decade after decade because we do not acknowledge, or know, the history of the field," she writes. "There is evidence that this ignorance is not an accident."

Trumbore, who worked at the University of Pennsylvania and now creates professional certificate and degree classes at the University of Virginia, recently talked with The 74's Greg Toppo. She warned that 17 years after the first MOOCs appeared, they've failed to bring education to many of the students who most need it: low-income, nontraditional students who could use extra help focusing and mastering difficult material.

Trumbore sees the same dynamic playing out with generative artificial intelligence, warning that universities must be "more clear-eyed about their business partnerships with technology companies, more thoughtful about their motives in distributing education 'to the masses,' and ultimately take inspiration from the past's successes and failures in order to create more equitable educational experiences that provide more returns to learners than to edtech investors."

This interview has been lightly edited for length and clarity.

I want to actually start with talking about Stanford鈥檚 Online High School. So you were an actual teacher? 

Yes.

Live classes, obviously projected over many thousands of miles? 

Yes.

Was the original conception that it was a high school that just happened to have students throughout the world? Was everybody there at the same time for class?

It was synchronous. The idea came in 2004. We got a grant from the to start thinking about this. There's a series of schools still called the Malone Consortium, and they share content because not every private school can have a Chinese literature course. And this idea of providing access, connecting students to great instructors, was something Malone had been thinking about and servicing for a while. And in its model there were similarities to what the Education Program for Gifted Youth [now Stanford's program] was doing with its calculus courses. That whole unit evolved to provide access to students who couldn't take a calculus course because they lived on Martha's Vineyard or in Alaska, or someplace where they didn't have access to it. And so this just was an extension, in some ways, to see if we could do it.

You say the technology wasn't very good at the time.

It was early, and folks didn’t even have broadband. So that was also a really interesting challenge. Actually before we did the Online High School we started teaching synchronous college classes in the summer, and so that was our beta test case, and I did that for a while. The success of those programs became the basis for the grant application to Malone, which became the basis for, “Let’s do a high school.” And then we formed the full high school, and then we went and got accredited, and it’s thriving today.

That led, in short order, to the phenomena of Coursera and edX. And as you say, you were there as it was taking shape. More than a decade later, what does the MOOC space look like to you? Has it fulfilled its purpose, or has it got the same illness as a lot of ed tech, which is that it sort of lost its way?

I think the answer to both questions is “Yes.” 

If you were an idealistic, super empathetic, early proponent and ed tech evangelist, we were opening up the gates of Harvard and Yale and Princeton and Stanford and Penn and all these wonderful places. It is true that today, which was not true 15 years ago, anybody with an Internet connection can log in and see what’s being taught, or a pretty close version of what’s being taught, at some of these schools. And we forget how amazing that was to have access to that. So in that case, the answer is Yes.

The answer is also "Yes" that it's lost its way, because the business model behind it is a traditional free-market business model: Scale quickly, make a profit, follow users, drive engagement. Don't worry about making it the best learning experience. Worry about making it the biggest, most appealing learning experience and let the customer decide what it is they're interested in; let the customer drive the content.

I guess that’s where I’ve ended up: I believe in the promise of ed tech. I don’t think that the promise of ed tech and the free-market business model are compatible.

I’m taking a Coursera course. It seems perfectly fine to me. It sounds like you would make the case that it could be better if there wasn’t a focus on profit?

There’s a lot that gets lost when we focus on frictionless delivery of content and not on an education experience. Education is difficult. It’s expensive to provide. That’s why we invented Coursera. I think that for educated folks, or for super agentic, bright kids, it’s wonderful. You don’t need much else. I think the problem comes when we say that Coursera is sufficient as an education. And the folks at Coursera would say it’s not sufficient as an education.

Early in the book, you say, “Just beneath the shiny surface of the latest ed tech marvel is the work of Suppes, Papert and Bitzer and many people on their teams who’ve worked, sometimes unknowingly, to extend the past of educational technology into the future.” 

I mean, these were, by our standards, primitive forays. The computers, literally you were having to dial into the mainframe on campus. I guess I want to just make sure I understand the lessons those three have to teach us.

It’s two things at the same time. One is that the goals they had are very similar to what we have today. So one, they’re worthy goals. And two, we haven’t invented the technology to achieve those goals. 

We’ve been talking about individual tutors for almost 70 years, 60 years for sure, quite publicly. So this idea that these are necessary pursuits, that this is going to improve education, I think, is a foundation no one questions. No one questions why we would want a computerized tutor. Believe me, I’ve searched. I did find, I think, three articles that are like, “Hey, hold on here.” Folks are all in on this. 

For Bitzer, what was amazing about him (and he was the true engineer among the group) was using, literally, duct tape sometimes, putting together this system. But the vision of that system and what it enabled, that it enabled communication among students who were learning at the same time: that's how we now measure success. "How many people do you have on the platform?" That did not exist. That was like, "What are you kidding me? In 1962?" Really revolutionary.

And with Papert, this idea that we shouldn’t let the computer program the child, but the child should program the computer, I think, is probably the most relevant to the tidal wave or avalanche of ChatGPT in education, that we need to train all these kids to have AI skills. I’m not saying we shouldn’t, but what are we teaching them? Why are we teaching them this? And is this really the right thing to do? And are they using it as a tool, or are they being used as a tool? 

I think these questions are highly relevant, but we forget to ask them, because there is this cycle of funding and social capital for being an innovator and all this stuff that gets really exciting if you’re brand new and chasing the bright, shiny object. Nobody wants to hear about the past. 

I want to drill down on Papert. I think he’s the most interesting of the three in a lot of ways, mostly because he had this fascinating background, studying with Piaget. Explain to me how his thinking about “the child programming the computer” is playing out now with things like ChatGPT. 

It’s different, obviously, school to school. In some schools, students are using ChatGPT as a tool to create things that are useful to them, not to create an assignment: “Hey, you guys need to get together and design a water pump.” Or “Design a Pixar character and build it with our 3D printer.” And it’s a group of kids working together in a team. And they use ChatGPT to come up with some models. They do that, then they send it to the 3D printer. I think that’s a great use of these tools. And there’s someone guiding them. That’s the children using it as a tool. “Kids, today we’re going to do Lesson 4 of OpenAI Academy,” that’s less good. That’s not using it as a tool.

I firmly believe in what Papert was saying, and I think technology is used best when you use it to empower a learner to be more human and to unlock creativity, and that is very possible with these tools. But that really isn't how we deploy them. The way they're being marketed, sold and consumed, it's faster to just say, "Watch out: You're not going to get a job if you can't master these tools. You've got to get ahold of these tools and watch some videos and learn some stuff." Rather than the more time-consuming, "Hey, try this thing. Try this thing. Use it to make this. Use it to make that."

It strikes me that the public conversation around ed tech has a weird format. It’s binary, which maybe is appropriate: People want to talk about things like MOOCs and such as either the most amazing thing ever or a total failure. I’m curious if you have a thought on why that is, and how we can emerge from that?

That dialogue, which always cracked me up as well, part of it is innovation, “the violence of forgetting.” I love that phrase, and I end with it in the book. In order for this to have been brand new and amazing, there’s this narrative that’s fueled by cash and investment and people’s eyeballs and all this stuff, that it was a huge success, and then the binary of that, to your point, is that it was a huge failure. Once you add that much money and power into these things, people are not interested in a more nuanced answer, I would argue, to anything. But it’s especially true here. 

Again, I think MOOCs have done a lot of good. It's amazing that they exist. It's awesome. But they didn't cure cancer. They didn't lift the continent of Africa out of the educational attainment that it currently has. At the time, when people were layering on these hopes for MOOCs, we were like, "This is hilarious." I mean, we were more stunned and like, "Oh my God, more work." But in retrospect, it was like, "This is kind of nuts that these folks are flocking to Coursera's offices, and we're having lunch with Tom Friedman, and he's , and you're so busy making it." This label gets attached to it, and it's like, "Is this what we're doing?" And it's intoxicating to believe that.

I have to apologize, maybe not on behalf of Tom Friedman, but on behalf of journalism. I think we’re part of the problem. I take your point that, on the one hand, it’s amazing that these things exist, but on the other hand, they didn’t cure all these ills. What was the accomplishment? I think there was a lot of hope during the pandemic that this was a world that was going to save us from that catastrophe, and it didn’t turn out to be true, mostly.

So two parts: One, I do think certainly in America we really love to give our power away to technology. We just love it. It’s over and over and over again, starting with the camera. You can look back to anything: the washing machine, cake mix 鈥 any of these efficiency-solving technologies that come out, particularly during the course of the 20th century and into the 21st. We tend to take a pretty passive stance toward them and imbue them with all these characteristics that they’re almost God-like. They’re going to save us from ___. And that’s great marketing copy. And those two things are interlinked. But we love doing it, because look at any article about ChatGPT on any given day, and it’s the same idea. So there’s something in our national consciousness that really wants to believe that there’s this amazing technological solution just around the corner that is going to cure everything. Twitter was supposed to democratize democracy. So I think that’s part of the problem. 

But what MOOCs did, and why they're great, is that if I want to know about something, and I want to know more than just asking ChatGPT or doing a Google search or looking at Wikipedia, I can log on and for free, or for a relatively modest fee, I can learn about this stuff. I mean, just even seeing the modules mapped out, if you've never taken Python, and you're like, "I don't even know what Python is," and then you look at Chuck's [University of Michigan professor Charles Russell Severance's] , and you see, "Oh, now I kind of know what a computer science course looks like."

Right.

That’s amazing. And I think the nearness of it is also really interesting, that people feel like it’s so much more approachable now, and not as exclusive. I think that was some of the good that came out of it; and the fact that companies are offering these to employees so that they can learn how to do things better or use them for their own self-actualization or to make themselves better at work, I think is great. 

And then from some work I did when I was at Penn: People were something like 600% more likely to say “I think I’m going to apply to a higher ed program” after completing one of these courses. People’s conception of themselves changed after they were able to complete a course.

I don’t want to rain on that parade, but I do want to bring up a point you make in the book, which is that a lot of times the people who take advantage of this or benefit from it are people with a lot of agency already.

One hundred percent. They give additional agency to those who have it.

How do we solve that?

I think that providing education in a format that worked for people at the top, say, 20% of the intellectual distribution and of access to higher education — I mean, the first MOOCs were modeled on courses at Princeton, on courses at Stanford — that is not, by definition, accessible education for everyone. Letting everyone into the classroom doesn't mean they're going to get it or understand it. And that, I think, underlies that idea, because the inventors, the funders, many of the initial employees, are all part of that group. They all went to the same collection of colleges. They all know people who know people from there. So this is a way of education that works for them. It's great to scale that.

So maybe you catch some people who truly are excluded only because of geography or finances, but they know how to learn that way. That doesn’t begin to address the vast number of people who don’t learn that way, who by the time they’re in the third video, they’re texting or asleep or bored or checked out. It can’t possibly solve the problem. One thing I often say in talks is that if access to education were to solve the world’s education problems, we wouldn’t have all these institutions of higher education, because libraries would have solved everything. Andrew Carnegie .

Opinion: The Road to Educational Equity: Can Ed Tech Solve the Digital Divide? /article/the-road-to-educational-equity-can-ed-tech-solve-the-digital-divide/ Thu, 12 Jun 2025 14:30:00 +0000

In a nation where ZIP codes often determine opportunity, the promise of educational equity remains out of reach for millions of students. Despite years of reform, the link between a child's environment and their academic outcomes persists.

Today, as schools integrate digital tools into everyday learning, a new dimension of inequality has come into focus: access to technology. While some students benefit from personalized platforms and high-speed connectivity, others are still left behind, struggling to participate in a system that increasingly assumes digital access. The debate is no longer whether ed tech can improve education, but whether it will reach those who require it the most.


The integration of technology into classrooms has the potential to improve learning, but only if access is universal. In reality, disparities in broadband connectivity, device availability, and digital literacy continue, especially in rural and low-income regions.

A 2024 report by the indicates that 43% of adults earning less than $30,000 annually lack broadband access, and nearly half of households making under $50,000 struggle to afford internet services. 

This leads to a “homework gap” that disproportionately impacts students in excluded communities, limiting their ability to complete assignments and engage with digital learning resources.

Beyond infrastructure, the challenge extends to technology deployment. Schools with more resources can invest in training educators, curating high-quality digital content, and supporting students with tailored interventions. In contrast, under-resourced schools may lack the technical assistance and instructional direction required for effective ed tech integration. Without thoughtful implementation, technology risks becoming a superficial fix rather than a meaningful equalizer.

To bridge the gap, tech access should be treated as a foundational right, not a privilege. That means investing in affordable internet for all households, making sure every student has access to a reliable device, and providing the support systems that make digital learning meaningful and accessible.

Ed tech, when designed and deployed with equity in mind, can be an effective tool to close learning gaps. AI-powered and gamified learning platforms, for example, offer the ability to personalize content to meet students where they are, regardless of age, ability, or background. 

Adaptive platforms, for instance, are able to recognize when a student is behind and adjust material in real time. Through milestones and rewards, gamified modules can keep students motivated. This is especially helpful for students who might otherwise lose interest in a rigid, one-size-fits-all approach. These features can be particularly impactful in classrooms with a wide variety of learning demands but a small number of teachers.

Too often, though, innovative learning technologies are piloted in affluent districts with the budget and infrastructure to support them, while the students who could benefit most remain out of reach. Without targeted strategies to expand access and usage, ed tech risks strengthening the very disparities it aims to address.

True equity means creating educational technology that represents the diversity of the learners themselves. This includes considering various cognitive styles, linguistic backgrounds, and cultural situations. Platforms should offer multilingual support, dyslexia-friendly fonts, sensory-sensitive modes for neurodiverse kids, and culturally relevant material. Without these design considerations, ed tech may inadvertently exclude the very students it aims to uplift, even when devices and internet access are available.

The answer lies not just in the tools themselves, but also in how and where they are deployed. Equity-focused implementation requires a commitment to both access and impact — ensuring students can use the technology, and that the technology truly supports their learning journey.

This is not a challenge educators can tackle alone. It requires coordinated action from policymakers, district leaders, nonprofit partners, and the tech community itself.

Public investment should prioritize infrastructure development in under-served areas, such as expanding broadband coverage and subsidizing device distribution. Equally important is funding for professional development, helping teachers integrate digital tools into their pedagogy in ways that are culturally responsive, developmentally appropriate, and aligned with academic goals.

At the policy level, educational equity must be embedded into procurement decisions, funding formulas, and accountability frameworks. Leaders must ask not just whether technology is available in schools, but whether it is making a measurable difference for students who have historically been left behind.

Collaboration across sectors is critical. Nonprofits can help support communities in navigating the digital learning landscape. Tech providers can design solutions with accessibility and inclusion built in from the start. And local governments can act as conveners — aligning resources, reducing duplication, and ensuring families are supported beyond the school day.

There is no silver bullet to educational inequity, but there is momentum. Across the country, districts are experimenting with community Wi-Fi programs, public-private partnerships, and learning models that prioritize flexibility and student engagement. These efforts prove that with the right intentions, innovation and inclusion can go hand in hand.

What's needed now is sustained commitment. We should resist the temptation to view ed tech as a short-term fix or an optional add-on. Instead, it must be approached as a core element of a broader equity agenda, one that prioritizes student outcomes, not just new tools.

Ed tech holds enormous promise, but only if we build systems that ensure its promise reaches every student. That starts with recognizing that the digital divide is not just a tech problem; it's an equity problem. And equity is something we must design for from the beginning.

Technologists Welcome Executive Order on AI in Schools But Say More Detail is Needed /article/technologists-welcome-executive-order-on-ai-in-schools-but-say-more-detail-is-needed/ Mon, 05 May 2025 16:30:00 +0000

This article was originally published in

Education software experts say they're cautiously optimistic about a Trump administration drive to incorporate AI into classrooms, but such a program needs clear goals, specific rules — and enough money to fund the costly systems.

"AI is, inherently, really expensive," said Ryan Trattner, CEO of AI-assisted studying tool Study Fetch. "It's not something that scales like a normal piece of software where it might be the same price for 1,000 people to use it as 100,000."

Among a handful of education-related executive orders last week, President Donald Trump released an order to incorporate artificial intelligence education, training and literacy in K-12 schools for both students and teachers.


Get stories like this delivered straight to your inbox. Sign up for 社区黑料 Newsletter


The move is in line with other actions Trump has taken to promote quick growth of artificial intelligence in the U.S., including rolling back the 2023 Biden administration executive order that aimed to promote competition within the AI industry while creating guidelines for responsible government use of the technology. Introducing AI to grade school children is meant to create an "AI-ready workforce and the next generation of American AI innovators," the order said.

A task force made up of members from various federal departments — like the Departments of Agriculture, Education, Energy and Labor, as well as the directors of the Office of Science and Technology Policy, the National Science Foundation and other federal agency representatives — will be developing the program over the next 120 days.

Some makers of AI tools for students said they are cautiously optimistic about more widespread use of AI in schools, saying it would better prepare kids for the current workforce. But they say success with this program hinges on the ability to measure outcomes for AI learning, an understanding of how AI plays a role in society and a set of clear federal guidelines around AI, which the U.S. does not currently have.

Many students, parents and teachers are already using AI in some portion of their learning, often through AI-powered tutoring, counseling, training, studying or tracking tools mostly available from private companies.

Bill Salak, chief technology officer at AI learning and studying platform Brainly, said that many AI tools built for education right now aim to fill gaps in schools where teachers are often spread thin. They may be using AI tools to help them make lesson plans, presentations or study guides. Brainly was founded on the idea of simulating student-run study groups, and is a supplement to classroom learning, Salak said.

Salak is happy to see an initiative that will prompt educators to incorporate AI literacy in schools, saying he feels we're in a "rapidly changing world" that requires much of the workforce to have a baseline understanding of AI. But he says he hopes the task force gets specific about its goals, and develops the ability to measure outcomes.

"I do think there will be further mandates needed, especially one in which we revisit again, like, what are we teaching?" he said. "What are the standards that we're holding our teachers to in terms of outcomes in the classroom?"

Specific objectives may come after the 120-day research period, but the executive order currently says that the initiative will develop online resources focused on teaching K-12 students foundational AI literacy and critical thinking skills, and identify ways for teachers to reduce time-intensive administrative tasks, improve evaluations and effectively teach AI in computer science and other classes. It also seeks to establish more AI-related apprenticeship programs targeted at young people.

Trattner of Study Fetch said he's eager to see a green light from the administration for schools to invest in AI education. The Study Fetch platform allows students and teachers to upload course material from a class, and receive customized studying materials. Trattner said that initially many educators were worried that AI would allow students to cheat, or get through classes without actually learning the material.

But he said in the last year or so, teachers are finding specific tasks that AI can help alleviate from their long to-do lists. Generative AI chatbots are probably not the best fit for classrooms, but specific AI tools, like platforms that help students learn their curriculum material in personalized ways, could be.

"Everybody knows this, but teachers are extremely overworked, with multiple classes," Trattner said. "I think AI can definitely help educators be substantially more productive."

But cost is something the committee should consider, Trattner said. The executive order calls for the development of public-private partnerships, and said the committee may be able to tap discretionary grant funding earmarked for education, but it didn't outline a budget for this initiative. AI tools are often more expensive than other software that schools may be used to buying in bulk, Trattner said.

Some AI tools are targeted toward other parts of the school experience, like College Guidance Network's Eva, an AI counseling assistant that helps users through the college application process, and helps parents with social and emotional dynamics with their children.

Founder and CEO Jon Carson said he's not sure that this executive order will make a big impact on schools, because schools tend to follow state or local directives. He also feels like the current administration has damaged its authority on K-12 issues by attempting to shut down the Department of Education.

"In another era, we might actually even bring it up if we were talking to a school district," Carson said. "But I don't think we would bring this up, because the administration has lost a lot of credibility."

Carson hopes the committee plans for security and privacy policies around AI in schools, and folds those principles into the curriculum. Federal guidance on AI privacy could help shape everyone's use, but especially students who are at the beginning of their experience with the technology, he said.

A successful version of this program would teach students not just how to interact with AI tools, but how they're built, how they process information, and how to think critically about the results they receive, Salak said. Educators have a right to be critical of AI, and the accuracy of information it provides, he said. But critical thinking and validating information is a skill everyone needs, whether the information comes from a textbook or an algorithm.

"In a world where there's so much information readily accessible and misinformation that is so readily accessible, learning early on how to question what it is that AI is saying isn't a bad thing," Salak said. "And so it doesn't need to be 100% accurate. But we need to develop skills in our students to be able to think critically and question what it's saying."

The specific recommendations and programming stemming from the Artificial Intelligence Education Task Force likely won't come until next school year, but Salak said he feels the U.S. workforce has been behind on AI for a while.

"I really hope that we're able to overhaul the agility at which the education institution in America changes and adapts," Salak said. "Because the world is changing and adapting very, very fast, and we can't afford to have an education system that lags this far behind."

Ed Tech Startup Boosts Teacher Well-Being With Feelings Check-ins and Care Packs /article/ed-tech-startup-boosts-teacher-well-being-with-feelings-check-ins-and-care-packs/ Mon, 28 Apr 2025 14:30:00 +0000

Committed. Exhausted. Comfortable. Frazzled. Valued. Stuck.

Once a month, staffers at Sullivan Middle School in Sullivan, Illinois, pick adjectives to describe their feelings about work as part of an anonymous online survey. 

Principal Nathan Ogle said the short questionnaire, which he implemented in October, has helped transform employee culture at the rural school of 250 students. It's one of the products offered by Alpaca, an education technology startup that's trying to improve teacher well-being across the U.S.


The company has pulse surveys, downloadable resources and staff care packages that schools and districts can purchase. Since its 2022 launch in Omaha, Nebraska, Alpaca has worked with more than 100 schools and districts in 25 states. It received an award at the 2025 Future of Education Technology Conference, which features ed tech innovations and businesses.

鈥淚t’s been incredibly useful just to get that feedback from my staff on what things they’re feeling and experiencing,鈥 Ogle said. 鈥淚’ve been able to respond to that stuff as it’s coming in, more or less in real time.鈥

During the 2023-24 school year, 48% of public school teachers reported declining mental health that impacted their work, up from 42% the year before, according to . The percentage of teachers who reported their schools offered minimal or no employee wellness programming increased from 68% in 2023 to 72% in 2024.

For Alpaca's founder, Karen Borchert, the focus is employee engagement and retention: "What does it feel like to go to work when you are a teacher, what is it like and what could make it better?" Borchert went to college to become a high school teacher, but after earning her degree became interested in nonprofits and startups. She decided to create her own company after the pandemic hit and school staff shortages worsened.

She began by selling subscription care packs to teachers and schools. The packs — which inspired the name Alpaca — cost $25 to $35 each and include items like snacks, pens, notepads, markers, tissues, lip balm and a handwritten note.

Last year, Alpaca launched its online pulse survey along with free resources like staff activities, teacher appreciation tips and strategies to help administrators make their employees feel valued. Borchert said most of the schools that use Alpaca have staff complete the survey in monthly meetings. Some use Alpaca's digital resources to host games and give out the care packages as prizes.

"We love to see them work together as a system or as a platform," Borchert said. "And then, by the time the principal gets back to their office, all of their survey data is live and ready, and they can see what's needed."

Ogle said the monthly pulse surveys are more useful than his district's annual climate survey, which doesn't provide results until after the school year is over. When he began implementing the survey last fall, many teachers said they felt stretched thin and wanted time to plan with one another.

In response, he restarted a school tradition of "Working Wednesdays." Administrators took over supervising students during lunch so teachers could use that time to collaborate with colleagues.

鈥淪ince we’ve implemented that, 鈥榮tretched thin鈥 is no longer a phrase that people are choosing” on the survey, he said. 鈥淚 have staff members who, if I just went and asked them, 鈥楬ey, how are you doing?鈥 They’re going to say, 鈥楩ine,鈥 because that’s what they do. But this gives them that opportunity to anonymously let me know how they鈥檙e really doing.鈥

Alpaca’s reach also extends beyond schools and districts. 

High Desert Education Service District, a Bend, Oregon, agency that places thousands of substitute teachers in 10 nearby school districts every year, began using Alpaca's products. Part of the state's Department of Education, High Desert uses the pulse survey for the subs to rank how they feel about working in different schools and districts. Substitutes also receive Alpaca packs when they accept a certain number of school assignments.

Borchert said Caddo Parish Public Schools in Shreveport, Louisiana, uses the products in its . And the University of Nebraska-Lincoln uses the pulse survey and care packages for its student teachers. 

Sue Kemp, a professor in the university’s special education department, said the survey results help her decide which schools to place student teachers at to gain practical experience in the classroom. 

"It gives me a better picture about how the students are feeling and doing in their school," she said. "I get a better snapshot of the support that they're feeling in the school and in their own skill development, and what they need on top of it."

The student teachers and the educators who are supervising them in the classroom also receive monthly Alpaca packs as a way to say “good job” or “thank you” for their work, Kemp said. She said the students and the supervisors have reported that the care packages make them feel more positive about their jobs and more connected to the college.

"We are at a moment where I think our educators are going to need so much care, and they're going to need so many good support systems," Borchert said. "They're going to need to be able to say how they're feeling and what they need while we kind of walk through uncertain or unprecedented times."

Podcast: How AI Is Changing How Young People Connect /article/podcast-how-ai-is-changing-how-young-people-connect/ Wed, 26 Mar 2025 20:01:00 +0000

Class Disrupted is an education podcast featuring author Michael Horn and Futre's Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or .

On this episode, Diane and Michael welcome guest Julia Freeland Fisher, the director of education research from the Clayton Christensen Institute. The conversation explores the potential and challenges AI presents in the educational landscape. Julia shares her insights on the importance of using AI to enhance personalized learning experiences and facilitate real-world connections for students. She also voices her concerns about AI’s impact on human connection, emphasizing the risk of AI replacing genuine interpersonal relationships. 

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey there, I’m Diane, and what you’re about to hear is a conversation Michael and I recorded with our guest Julia Freeland Fisher as part of our series exploring the potential impact of AI in education. This is where we’re interviewing optimists and skeptics, and I really enjoy talking with Julia and keep thinking about a few key ideas from the conversation. First, Julia’s expertise related to social networks gives her a really important perspective on AI and the potential for it to either harm or help with social networking, which is such a critical factor in career and life opportunities for young people. She was really compelling in talking about how real experiences matter, and I think you’re going to enjoy listening to her talk about how using AI to create what she calls infrastructure in digital experiences could enable young people to build social networks. Infrastructure is in contrast to sort of chatbots or agents, which are a really different experience. The conversation caused me to deeply reflect on my own social network, how I created it, and how I use it, and how complex it is. And at the same time, I’m thinking a lot about a handful of young people I know and what their social network is currently, and how AI may or may not be interrupting them building the social networks that they need and will depend on in the future. And then I’m also thinking about that for me and my age and stage, and what does that mean? It’s been a fascinating rabbit hole that I’m really hopeful will yield some positive impacts on the product I’m building in the future, and how my behaviors as the leader of a company sort of evolve and respond to this moment in time. All of that to say, I truly cannot wait to thoroughly think through all of these ideas with Michael, but until then, I think you’ll really enjoy this conversation we had with Julia.

Diane Tavenner: Hey, Michael. 

Michael Horn: Hey, Diane. Good to see you. 

Diane Tavenner: You too. So, Michael, since we started this miniseries on AI and have begun interviewing all these really interesting people, I've started to notice AI literally everywhere in my life. And so I remembered something about this from my psychology days, and I had a conversation with, yes, GPT about this to try to make sense of what's going on. And it turns out there is a particular psychological phenomenon going on here. I think I'm going to pronounce this correctly: it's the Baader-Meinhof phenomenon. It's also known as the frequency illusion.

And basically what happens is when you learn something new, you think about it, you focus on it, and then you start noticing it everywhere. And this is the result of two cognitive biases. So the first one is selective attention bias, where your brain is now sort of primed to notice the thing you’ve been thinking about, so you pay more attention to it in your environment. And then the second is confirmation bias. You know, once you notice it repeatedly, you interpret this as evidence that it’s suddenly more common, even though its frequency hasn’t actually changed. I don’t know about you, I find this fascinating how our brain sort of filters and amplifies information based on what we’re focused on. And so, yeah, that’s happening to me. Just to illustrate my confirmation bias.

I’m actually going to say out loud that I do think AI is everywhere. And I’m betting the person we are about to talk to today might feel the same, because a lot of her recent work is on AI.

Michael Horn: Well, I think you have nailed the lead-in, Diane. That’s a perfect segue as well for today’s guest, who I suspect is nodding along: Julia Freeland Fisher. I’ll say up front, I’m really excited about this one because Julia has been a longtime colleague and friend of mine. I hired her at the Christensen Institute as a research fellow. And then when I left full time, just about a decade ago, she stepped in as my successor. It was like a version 2.5 or 3.0 or something like that. We jumped ahead several generations.

So it was terrific. And I couldn’t be more thrilled, frankly, about the work that she’s done since, because she’s really elevated the important topic of social capital into the education conversation. She’s, frankly, taught me a ton along the way. The book she wrote a few years ago, if you want to catch up on her work, is Who You Know. But most recently, she published some really interesting research about AI in education titled Navigation and Guidance in the Age of AI, which we’ll link to in the show notes. I’m sure we’re going to get into that and much more. But first, Julia, before we do that, welcome to Class Disrupted. Great to see you.

Julia Freeland Fisher: Thank you. So honored to be here with both of you.

Michael Horn: Well, we hope you’ll still feel that way by the end. But before we get into a series of questions we have for you, let’s table-set a little bit and share with the audience: how did you get so deep into this topic of AI itself? Because, as we said, you’ve been researching social capital in education for several years now. You’ve thought a lot about the role of technology in that equation, clearly. And you’ve thought a lot about how schools perhaps should redesign themselves to become more permeable, if you will, to the outside world. But why AI, and what’s been the scope of your research around it?

Reimagining EdTech for Human Connectivity

Julia Freeland Fisher: Yeah, absolutely. So historically I was sort of obsessed with the concept of, and I’m putting this in air quotes, edtech that connects. I’ve been really disheartened, but still optimistic, that there’s a long runway of innovation if we were to start to think about education technology not just in service of content delivery or productivity or assessment, but also in service of connection: that young people could overcome the boundaries of their existing networks, that they could connect with peers and professionals who share their interests, that there’s just so much possibility if we started to do in the classroom what many of us do in our working lives, using technology to connect across time and space. So I’ve been studying that for a long time, and it has been a small but mighty market, certainly not something that has grown significantly, and that has made me painfully aware of just how much the ed tech market ignores connection as part of the value proposition of school. And so enter AI, and we’ll get into this more. For all of its fantastic productivity upside and intelligence, the piece of AI that I’ve been paying attention to is the tendency to anthropomorphize it, to make it human-like, to make it capable of mimicking human emotion and empathy and conversation. Because what I see unfolding, and this is not inevitable, it just has to do with how the market absorbs it, is a true possibility of disrupting human connection as we know it, because we don’t value it to the level the market ought to.

And because the technology has suddenly taken this dramatic turn toward human-like behavior, affect, tone, etc., I’m just fascinated by that. And I want those of us inside of education, I want parents, to be awake to this dimension of the technology that was maybe lurking but wasn’t really dominant in the old days of ed tech, the sort of version one, where we weren’t giving these tools the same sort of voice and emotion that I’m seeing now. So that’s a little bit of it. But I’ll also say: at various conferences I’ve been labeled a pessimist and a doomer. I really want to come to this conversation as a realist. I work for the Clayton Christensen Institute for Disruptive Innovation. I am not anti-technology. I am worried about the market conditions inside of which the technology is evolving.

Diane Tavenner: Well, Julia, I’m so glad we started there, to ground everyone in the work you’re doing and how you think about it. And I’m going to give you your moment to be the realist. Let’s start with inviting you to make the steel-man argument in favor of AI in education. In your mind, what’s the best-case scenario for AI in education, given your work and, you know, even as a mother? What’s the best possible outcome we could reach?

Rethinking Personalized Learning Potential

Julia Freeland Fisher: Yeah, I want to first just describe how surreal it is to have Michael B. Horn and Diane Tavenner asking me that question. I’m chatting with two luminaries I’ve learned so much from in thinking about the potential of tech to really personalize learning. And I know that term gets overused and is maybe out of fashion now, but it’s a little absurd that I would be providing an answer to you on that. But here I go. Anyway, I think, just quickly, it’s the potential to scale a system of personalized content, experiences and support, and thinking about those three things as kind of separate strands or value propositions is key. The adaptive content and assessment piece may be the most obvious, the most familiar evolution on top of how we’ve talked about ed tech in the past. But I’m actually probably equally or more excited about the possibility of seamless infrastructure to support a mastery-based system that also gets students connected to new people and learning in the real world.

And it’s infrastructure doing that. It’s not the AI talking to the student that’s doing that. And I’m not sure how much investment we’re seeing there; you guys may know more than I do. But that’s kind of my vision: the more time I spend with the tech, the more I see how that could actually be feasible in a way it wasn’t even 10 years ago. I think we all had dreams of that then, but the tech was a little clunky. It could create a pathway, but the idea of flexible pathways that were actually adaptive in real-world contexts felt a little more out of reach.

Diane Tavenner: So let’s stay here for just a minute, Julia, because I want to make sure people really understand what you mean by infrastructure. We’ve had dialogue around this, and by the way, I’m working on it, you know. You’ve got one person in your corner; we’re getting closer and closer. But we’ve had a bunch of conversations about chatbots or agents or things like that. And when you’re talking infrastructure, that’s in contrast to the experience that I think most people are having right now. So just illuminate that a little bit for us, so everyone can visualize what you mean.

Julia Freeland Fisher: Yeah. Let me name two pieces of infrastructure, one of which I know Michael has featured in some of his work, and another of which I’m not sure if you guys have talked about. So one is a tool called Protopia. It’s used in higher education; its founder is Max Leisten, I believe. And with the tool that Max has built, he partners with alumni engagement offices. The way the tool works is students can go onto their career services website and ask a question, and based on the content of the question, Max’s tool will comb through the alumni directory of that school, find the alum who is best suited to answer the question, and email them directly. There’s not a clunky app that you have to go through, and if they answer that student’s question, fine. If not, it will go to the next best alum to answer the question.

So that’s infrastructure. It’s behind the scenes, it’s facilitating an opportunity for learning. And in this case, obviously, I’m highlighting it because it’s facilitating connection as well. But it’s doing the behind-the-scenes manual work that is not high-quality human work, but is necessary if you want a system where students are moving beyond just a singular predetermined path and actually having opportunities or conversations beyond it. The other one I want to highlight, which I actually think is illustrative of why this is exciting and also why I’m getting labeled the doomer, is a tool called Project Leo that spun out of Da Vinci Schools in Los Angeles. It’s designed to create bespoke, individualized projects aligned with the principles of project-based learning, based on students’ interests and their ikigai, which is that Japanese Venn diagram concept.

What was so exciting in the initial version of this tool is that not only did students get a personalized project aligned to their interests, which also aligned to the teacher’s core content that they were trying to hit on, but it would also connect them to a working professional who would give feedback on their project. Now, as they’ve rolled out the product, the demand, or the willingness to pay, for that last feature has been quite limited. So it’s not currently part of the main product. And I say that to say: infrastructure for project-based learning, that’s exciting to me. Right? It’s been perennially hard to scale project-based learning that’s interest-based. Diane, it’s again absurd for me to explain this to you, but that’s really exciting, right, that it doesn’t sit on a teacher’s desk to have to create 25 unique projects.

I would like, though, to see the market mature in a way where demand for that last-mile connection out to the real world is also there, and people are willing to pay a premium for real-world experience. So those are just two examples: it’s the behind-the-scenes creation of stuff that students then do. It’s not necessarily a student-facing adaptive tool, which I’m not totally down on. I think there’s a place for that. But that’s the infrastructure conversation.

Diane Tavenner: Super helpful.

AI: Pessimistic and Realistic Concerns

Michael Horn: Yeah, yeah. So Julia, you’ve painted that picture of what could be, and frankly a layer of AI that’s much more invisible, facilitating these sorts of interactions, experiences, connections and so forth. I’d love you to now take the flip side. You said you’ve been labeled a pessimist, so maybe it’s the skeptical take; maybe it’s the realistic take. But let me ask it in a way that’s a little more directed, because we want this part of the conversation to get at what you fear AI is going to hurt, and how. And although I’m sure you could offer a real steel-man argument here as well, I think your research has a lot to say about what you’re seeing and what implications might mean we ought to be wary, or at least on guard, right now.

Julia Freeland Fisher: Yeah. So there are two things I want to name here, and one of them, human connection, I could go on and on about. So let me say the first one briefly: I’m worried about AI harming the concept of experiential learning. Then we’ll get to human connection. The concept of experiential learning is so exciting to me. It’s what I want for my kids. It’s what I want for all kids. And as much as I just described two examples of infrastructure that could get us there, I think the market is much bigger for simulated experiences than actual experiences.

And I think a lot of the hype around AI is: these bots can simulate anything. They can be anyone. You can pretend to talk to fill-in-the-blank. And yes, that may be a context to develop skills in a more applied way, but it’s not real experience. I’m worried about that for two reasons. One, I think you run the risk of young people becoming accustomed to synthetic interaction. But two, if you look at what employers are demanding of entry-level work, it is experience, not just skills. Ryan Craig has written a lot about this, the experience gap.

As AI actually chips away at entry-level work, higher ed needs to step in and prepare students in new ways. But the piece of that I think we’re not paying attention to in the education conversation is that it actually requires true experiential learning, not just simulated skills, not performance tasks. And at least from what I’m seeing, and Diane, I’m right where you were at the beginning of the episode, I’m just reading all of this stuff through my little doomer lens now. But I just think there’s so much more hype, partly because employers are willing to pay for simulation-experience stuff in the L&D market. There’s much more hype around simulation than around what it would take to scale true experiential learning, by which I mean learning skills in an applied context with other humans. Yeah, so that’s my number one.

But that was not my real rant. My real rant is number two, which, Michael, is actually something you’ve probably thought more about than I have.

Michael Horn: So yeah, let’s hear number two then.

Threat to Human Connection

Julia Freeland Fisher: Okay, so number two, what I think it could hurt is human connection. And I want to put this in the context of what I said initially about bots being anthropomorphized. This is happening across many different pockets of both the consumer and ed tech markets, and I think we should be way more worried about the consumer applications. So we’re talking here about romantic companion apps like Replika and Character.AI, where people in general, young people included, are being drawn into parasocial relationships with bots that emulate and can even exceed human behaviors in meeting those users’ emotional needs. That is emerging against the backdrop of a long-standing loneliness epidemic, which is a lagging indicator of our underinvestment in human connection. And inside of schools, it’s emerging against the backdrop of what I have observed over the past decade of my research: a lot of sentiment about relationships, but very little strategy, very few metrics guiding whether students are actually connected, very little budget dedicated to human connection, again, as a value proposition in its own right. And so it’s really, and Michael taught me this, right, Michael taught me disruptive innovation theory.

It is a classic disruption story, in that loneliness is providing a foothold in the market for these bots to take hold. And there is very little stopping their upward march in the market, very little to hinder their growth, because we as a society have basically said: go get less lonely on your own, go solve this loneliness thing by yourself. Which is ironic at best and really dangerous at worst. So that’s my big concern. Again, I don’t think ed tech is going to be the straw that breaks the camel’s back. If we asked what technology most affected young people’s lives over the last 20 years, I’m sure some of our colleagues would like to say Khan Academy, but I think many of us would agree: no, it was commercial tech.

Michael Horn: Yeah, sure. In particular. Well, let’s stay on that, because I think you’ve raised two very interesting challenges. And on the consumer side, I mean, we also know from schools right now that, frankly, what plays out in the consumer space impacts how engaged teens are and so forth

AI’s Impact on Human Learning

Michael Horn: In the school experience as well. So I think something that has been on both Diane’s and my minds around the AI conversation is what AI hurts, and what will still be relevant, if you will, in the future. Right. And how much is this about replacing outdated structures? I’m going to guess that you think real human relationships and social capital and the like will still be important in the future. I’m hoping you’re going to tell me that, but I’d love you to play with this theme a little bit and get a little more nuanced. So take the experiential learning piece, right? If we’re offering simulations as an entry point to give someone information, hey, is this something you want to explore more, as an entry point to then get something different, is that a bad thing? Or where’s the slippery slope? And where is it really chipping away at something that’s fundamentally what makes us human, that we ought to really be concerned about handing over to AI?

Julia Freeland Fisher: Yeah, totally. I mean, let’s look at the upsides real quick, both on the experiential and the human connection fronts. On the experiential front, these simulations are a way to scale practice; again, we use the shorthand of skills, but we should really always be talking about skills and practice. So I don’t want to claim that simulated practice is a bad thing. It’s great cross-training for developing skills. I just worry that the market is so blunt that it treats that as the outcome of interest, versus applied skills plus human connections. On the human connection front, I’ve been looking at the navigation and guidance space, and there are really two stories emerging. On the one hand, we have the potential to disrupt the social capital advantage that has perpetuated opportunity gaps, by giving students from all sorts of backgrounds access to resources, information and guidance that otherwise often travels through inherited networks.

So that’s huge, right? Democratizing access to information and advice is not something we should devalue in some sentimental name of preserving human connection. The slippery slope, though, is that what I found in my research, at least based on our interviews with the supply side, is that the demand side really treats navigation and guidance as an information gap, not a connection gap. And we know that an estimated half of jobs and internships come through personal connections. So if you just use AI to solve the information gap, you’re not doing the last-mile work of actually addressing opportunity gaps. You’re improving things, like a rising tide lifting all boats, but the gaps are still going to be there if you don’t get the social connection piece right.

So that’s where I’m very wary of these like self help bots that, you know, tout democratizing access and opportunity but are actually sending the wrong message to young people about just how social the opportunity equation in America is.

Diane Tavenner: Yeah. Oh, I could not agree more, literally. Okay, let’s take a little bit of a turn here. Julia, you can probably guess this if you don’t already know it: one of the things I do for fun in my spare time is imagine the designs of new schools that I would be excited to teach in or that my child would be excited to go to. So let’s go there for a minute. If you had a magic wand, you could design a school to look any way you wanted, presumably using this new technology we have.

What parts of AI could you take advantage of, and what would you avoid because it’s not going to work well? And what would that actually look like in a school?

Julia Freeland Fisher: Yeah, again, maybe I’ll stick with the relationship theme, partly because I’m like, Diane, you just tell me your answer and I’ll copy it, as the school designer in this conversation. There are a lot of people in the field who I trust more to think about whole-school design. But when I think about how we design a deeply connected school experience for young people in the age of AI, there are three main things I’m looking at, and most of them are infrastructure, just to be clear. One is infrastructure to support high-touch webs of support for each and every kid. It is very clear in the youth development literature that young people don’t just need one caring adult, even though for some reason people grabbed onto that term and it has stuck.

Young people need webs of support, and they are most effective when the people in those webs are connected to one another. This is research from Jonathan Zaff and Shannon Varga at BU. It’s informed really great models like City Connects and BARR, but those are expensive to run, and the data systems to actually make them highly responsive, and even predictive of what a young person needs, just don’t really exist. So that’s number one, high-touch webs of support. The second, though, is more diversified networks aligned with students’ interests. That’s what we found in our own evaluations of career-connected learning efforts, particularly at the high school level, that are trying to expand students’ options: young people were least likely to report that they were connected to people who shared their interests. And so I think there’s a ton of opportunity there again to use AI to detect young people’s interests to,

Conversations and Confidence in Networking

Julia Freeland Fisher: Michael, to your point, to do some front-end exploration of future possible selves. Diane, I know you’re thinking a ton about this, but then to build the middleware so that you are starting to have conversations with people who share those interests. And maybe the best unit to think about there is conversations, not relationships. These don’t have to be long-lasting connections, necessarily. But how is the high school experience a constant stream of conversations with other humans? And then lastly, I do think there’s one place I’m interested in these self-help bots, and I know I’m giving them that derisive term on purpose; I think we need to be wary of them. But something we see time and again when it comes to building, deepening and diversifying young people’s networks is that confidence is really the moderating variable. You can teach young people communication skills. You can do the surface-level, here’s-how-to-write-a-professional-email stuff. But confidence makes or breaks whether they go out and mobilize networks on their own, whether they even start having new types of conversations with people they already know.

And I do think that’s a little wedge in the system where these self-help bots could make a difference. A couple of providers playing in that space now: Climb Together, Kindred, Backrs. These are all startups that I think are keying into the question: what if AI could de-risk help-seeking or reaching out, which for an adolescent can be so daunting? So those are a couple of thoughts, with all of that in the background, so that high school, and I’m thinking mostly of high school, is an inherently networked experience. It’s not just if you are outgoing or wear your ambitions on your sleeve or do an independent study, but for every student.

Diane Tavenner: Yeah, that’s so fascinating. You know, a quick personal anecdote here: I’m stunned at how reluctant the younger generation is to ever make a phone call. They literally don’t call people. It’s not a thing. And my son worked on a presidential campaign, and he had a quota of 175 phone calls a day. And he actually thinks, and I agree with him, that this is now one of his greatest skill sets, after month after month of doing that. That ability to just talk to people is so missing in our world right now in that generation. So that really resonates with me.

Let’s do one more, if you’re okay. I’d love to zoom out, because I know, given the work you do, you’re influencing how people think about policy and procedure and all of those things. What’s on your mind in this moment in time? What are you telling people they should be looking at, thinking about, be wary of, or promoting in terms of policy and procedure, at whatever level you pick?

Julia Freeland Fisher: Yeah, well, I’ll riff on your last point about your son to answer that initially, Diane, with something that came out in our research time and again. This was talking to founders like yourself who are incredibly thoughtful about the design of their products and services. Time and again, and you were not one of them, Diane, because you are not pro-chatbot, at least in what you’re currently building, folks would bring up, and again, this is in the guidance and advising space, that sometimes students would rather tell a chatbot something than a human. It’s a safe space, a place where there’s less risk involved.

Exploring Student Reliance on Chatbots

Julia Freeland Fisher: And I came away from that research asking: is that a feature or a bug? How are we internalizing the fact that students don’t want to talk to humans? And what is that a reflection of? So I think that’s number one. What I hope, at this ecosystem level, is that people start thinking: if students want to be talking to chatbots like that, let’s actually interrogate that a little bit more. The second piece is around really starting to come up with language, and some markers, for what I’m calling pro-social technology. Again, I don’t think AI is inevitably going to disrupt human connection. But if bots are not trained to nudge students into the real world offline, if bots are actually trained to keep students engaged, if consumer tech is making money on engagement, that is all moving in an antisocial direction. And I just think we need more language around that. Because I was in an off-the-record chat with someone who recently left one of the big AI companies. And, you know, everyone’s worried about national security and China and things that I know I should also be worried about, while I’m lying awake about AI companions.

But, you know, I said to him: what about the fact that these are being anthropomorphized and encroaching on what we hold dear as human? He was like, yeah, everyone working in industry is creeped out by that, but has no idea what to do about it. And that was revealing, right? There’s a real prisoner’s dilemma here. There’s a creep factor, but it’s bullet seven on slide four. No one’s really as worried about it as I think we should be. So that’s number two. And then the last thing is really much more parent-facing.

I think whether or not you agree with the moral-panic, Jonathan Haidt stuff around cell phones over the past year, he’s tapped into parent anxiety, and in some ways it’s the right anxiety, around screen time and addiction. But we’re not even talking about what’s coming. If you think social media was designed to appeal to our deeply wired need to connect, AI companions are that on steroids. And so, I am not myself a parent organizer; I wish that was who I was born to be. But I’m hoping there will be more conversations around parent organizing that don’t just create barriers to innovation. This is the tightrope we need to walk, right: not shutting down the tech, but being super aware that we have seen this movie before.

Michael Horn: Yeah.

Julia Freeland Fisher: So those are my big three.

Diane Tavenner: Well, I got carried away there, Michael. Any other questions you want to ask before I take us out?

Michael Horn: I think we asked the right questions. This has been fascinating.

Diane Tavenner: Okay, good. Yeah, I couldn’t help myself. I so appreciate your thoughts, Julia. And we’re going to ask you for one more. We always invite our guests to join in our end-of-show ritual, where we share what we’re reading, listening to, or watching. We try to keep it outside of work, but we often regress back into work. We’d love to hear: what’s been up for you lately?

Julia Freeland Fisher: Yeah, so I just finished this breathtakingly beautiful book called Nobody Will Tell You This But Me by Bess Kalb. It’s a memoir about her grandmother, and it’s done really beautifully. It’s written as if her grandmother is talking to her; the form she chose is just stunning. Intergenerational connection is one of the most beautiful things, and it was beautifully done. And I was actually thinking about it when I was.

And then I’ll stop talking, I promise. But I was listening to your last episode on AI, and you were talking about Notebook LM, and putting a chapter of a book into that, and how much of the texture, the brilliance of what she did, would be lost listening to those TED Talk-adjacent fake voices riffing on it. Our kids deserve to live in nuance and to detect it. Anyway, that book in particular is just such a beautiful, only-a-human-could-have-written-it thing. And I know all sorts of people in Silicon Valley will debate me on that. Highly recommend.

Diane Tavenner: Yeah, for sure. I love that recommendation. I’m working on planning a dinner called Generations Over Dinner, so that might be a fun one.

Julia Freeland Fisher: Oh, my gosh, check it out. It’s beautiful.

Diane Tavenner: So I might add that in there. I will. Okay, what’s up for me right now? Well, I’m going to stick with the biases I introduced at the top of the show and say that we just finished the second season of Foundation, which is a series on, I forget, one of those. I don’t know, it’s on something. Anyway, it’s based on the writings of Isaac Asimov. You can tell how good I am at TV. Not very.

And yes, one of the big plot lines is all about AI. There’s no doubt about it. And so I’m seeing that literally everywhere. I will say, for me, having not read the books, unlike my kiddos, it’s a little bit hard. There’s a lot going on; it’s hard to follow, and I don’t remember everything. I was glad I had some guides, human, actual guides, coaching me through it, and it came together for me at the end and felt worthwhile. It’s certainly beautifully done and well acted and all of that. How about you, Michael?

Michael Horn: This may be my entree into it, Diane, because I’ve struggled with the books. Sal Khan has actually tried on a couple of occasions, and I just cannot get into them. So I like that. I will also stay with biases, but on a totally different front. I feel like I’m going to stereotype myself here, or everyone listening is going to say, yep, that’s Michael. I just recently finished The Master: The Long Run and Beautiful Game of Roger Federer by Christopher Clarey. My tennis fandom, I think, continually comes out on this podcast. Beautifully organized book; I really enjoyed it. I will say, there are Rafael Nadal people and there are Roger Federer people, and I’m a Nadal, Pete Sampras sort of vintage person. But I was really glad I read the book. I gained a deeper appreciation of Federer, and frankly, picked up some tips I wish I had known much earlier in my professional career, from the practices he would employ at the tournaments he showed up at, with everyone around the tournament, not the playing itself, which was not something I expected. So we can talk offline about that later. But it’s all about relationships, it turns out.

Diane Tavenner: So you have me curious now. I wasn’t expecting to be curious afterwards.

Michael Horn: But it’s all about relationships. It comes back to Julia’s thesis. And with that, a thank-you to Julia for joining us and taking us through this fascinating conversation that we’re going to be reflecting on for a while, I know. And thank you to all of you, our listeners. We’ll see you next time on Class Disrupted.

Opinion: Educators’ View: How AI Boosts Learning in Our Tennessee & Colorado Districts /article/educators-view-how-ai-boosts-learning-in-our-tennessee-colorado-districts/ Mon, 17 Mar 2025 14:30:00 +0000 /?post_type=article&p=1011702 Judging by the tsunami of sales pitches that school district leaders get from ed tech companies, artificial intelligence is the antidote to every problem in education today. However, there’s every reason to be apprehensive – so many tech products over the last 30 years have overpromised and underdelivered.

As an underlying technology, AI does seem different – more conversational, more flexible, more powerful. Most notably, past ed tech products have relied on multiple-choice tests that don’t always accurately assess a student’s understanding of different concepts. Now, AI can analyze and react to open-ended student responses, helping to boost critical thinking skills and deepen comprehension. In addition, AI provides real-time visibility into each student’s performance so teachers can be more strategic with classroom discussions.

Here are three guiding principles to help educators be rigorous when selecting AI tools to pilot and scale as they lean into this new chapter of teaching:




First, rather than look to develop an “AI strategy,” district leaders should create a strategy for teaching and learning and use AI to power specific aspects of it. They should start by identifying goals and priorities, then ask: What can AI do to help our district achieve them?

In both our districts, the most urgent focus was increasing student achievement. To help schools achieve this goal and narrow down the many potential tools on the market, district leaders centered objectives on implementing high-quality instructional materials, increasing teacher effectiveness and improving student engagement and well-being.

Our districts landed on an AI-powered tool that creates high-quality, interactive experiences for students, with personalized feedback and support to deepen their understanding of the curriculum. It also directs educators’ attention in real time to the students who most need help. By combining the power of top-rated curricula and AI, teachers can embed intervention-type support into core instruction.

Second, ed tech providers should design their tools with teachers, students and district leaders, not just for them. Part of the reason educators have not gotten needed quality and usability out of products in their schools is that vendors exclude teachers and students from the development process.

A big part of why teachers and students in our districts are enthusiastic about this is that educators were able to offer feedback directly to ed tech company leaders who regularly visited our schools – and then implemented that feedback. This fall, Sumner County teachers asked to make the AI writing support more bite-sized, giving students an initial score, one piece of feedback at a time and the ability to revise their writing multiple times and update their grade. A Denver Public Schools leader asked whether AI could identify the most common misconceptions students were having in class, which led to an expanding suite of real-time analysis tools. Students asked for more clarity into their progress at each step, more celebrations and the ability to customize their experience.

Because every voice was valued and the solutions evolved to meet stakeholders’ needs, both student achievement in English Language Arts and teacher satisfaction have increased. In Sumner County, the six schools using the tool have shown significantly more progress on their English assessments than the six schools not using it, and 90% of teachers reported that the product made their jobs easier and more enjoyable.

Third, educators must break the ed tech habit of having students work silently on their own personalized pathways, with headsets on and without interacting with their classmates. Instead, AI should foster connection, inclusion and discourse.

In our districts, a top priority is the effective implementation of high-quality instructional materials. While various schools have chosen different, top-rated curricula, they share a vision of classrooms with rich and interesting texts, student writing and lots of discussion, both in small groups and with the full class. District leaders want AI products that bring schools closer to this vision. Rather than dedicating 20 minutes a day to a supplemental, skills-based tool that students work on silently, teachers should have tools that make collaboration easier and give students more confidence to bring their insights into full-class discussions.

AI brings new possibilities for better ed tech, but schools will realize this potential only if district leaders lean into this moment, guided by their goals and values. If they do, they can create future-ready schools that prioritize transformative student outcomes and human connection.

OpenAI’s Education Leader on AI’s ‘Massive Productivity Boost’ for Schools, Teachers /article/openais-education-leader-on-ais-massive-productivity-boost-for-schools-teachers/ Wed, 12 Mar 2025 16:30:00 +0000 /?post_type=article&p=1011387 Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic – and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or .

In this episode of Class Disrupted, Michael and Diane chat with Siya Raj Purohit, who works on education initiatives at OpenAI, about the transformative potential of AI in education. Siya shares her career journey and how it led her to focus on bridging the gap between education and workforce development. Highlighting the immense value of AI tools like ChatGPT, particularly in university settings, she underscores its potential to personalize learning, reduce teacher burnout and enhance classroom interactions. Siya also addresses concerns around AI by emphasizing that, while AI can elevate thinking and productivity, the irreplaceable human element in teaching – such as mentorship and personal inspiration – remains vital.

Listen to the episode below. A full transcript follows.

Michael Horn: Hi there, Michael Horn here. What you are about to hear is a conversation that Diane and I recorded with Siya Raj Purohit from OpenAI as part of our series exploring the potential impact of AI on education from the good to the bad.

Now, here are two things that grabbed me about this episode.

First, I was struck by how much Siya uses ChatGPT in her daily workflow already. Yes, she works at OpenAI, but it has seemingly revolutionized her life. As she said, it’s a massive productivity tool. From using it as a tutor to helping her figure out what projects to prioritize, what to learn, this is just part of how she works now. 

Second, I was struck by how much she’s really on the ground level with universities, particularly professors, helping them figure out how to make it part of their workflow as well for teaching and learning, and how deep she is in specific use cases as a result, and how she sees this, frankly, as an important tool to free up teacher time, elevate student thinking, and the like.

As the conversation wrapped up, I’ve also been reflecting on a couple things.

First, what would it take for ChatGPT to be a massive productivity tool for me personally? And if that’s the framing, what does it mean this technology can and can’t be used for in education?

I was also struck by how OpenAI has decided to go deep on supporting those in college and beyond with their tool, but they haven’t yet created their own products or services for students who are under 18. Candidly, that’s not something I had really realized or reflected on before this conversation. So I’m excited to reflect a lot more with Diane after we talk to a number of people about this topic. But for now, we’d love to hear your thoughts about this conversation. Please share it with us over social media or through my website, michaelbhorn.com. And with that as prelude, I hope you enjoy this conversation on Class Disrupted.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. Good to see you.

Diane Tavenner: I confess I am really excited about today’s conversation because the first two we’ve had about AI have been super interesting and have been raising some big questions for me around the assumptions that I had coming into these conversations and AI and schools, and in particular how we organize schools themselves around new technologies. But it’s made me even more curious to talk to other people and get other perspectives. So I’m really, really looking forward to talking today.

Michael Horn: As am I, Diane. And I agree that the first two episodes have piqued my attention on different things, and I’m looking forward to digging in more at some point. But whereas our last episode featured someone who is, I think it’s fair to say, largely skeptical about AI, I suspect we will get a very different take today, given our guest actually works on education at OpenAI, the company that of course developed and operates ChatGPT. Her name is Siya Raj Purohit, and she has been focused on supporting ed tech and workforce development in the startup community and at AWS over the past decade, before she more recently joined OpenAI to work specifically on education. We’re going to get to hear about all that up front. But first, Siya, welcome.

Michael Horn: It is so good to have you.

Siya Raj Purohit: Thank you so much for having me.

Michael Horn: Yeah, you bet. So before we get into a series of questions starting to dissect AI and its impact, or not, on education, I would just love for you to share with the audience a little bit about how you got so deep into AI around the question of education specifically. And maybe you’ll also humor me as you do so, because I’m curious about OpenAI’s interest in all this. More than maybe any product launch other than the iPad that I can remember, anyway, I can’t think of any other consumer tech product or service that has made education such a cornerstone of all of its announcements and sort of promise and potential for the new technology. So maybe you can tell us a little bit both about your journey, but also how OpenAI sees education.

Siya Raj Purohit: Absolutely. So I’ve spent my career at the intersection of education, technology and workforce development. This all started when I was 18. During college, I published a book about America’s job skills gap, talking about how American universities weren’t teaching the skills that students needed to land jobs in industry. This stemmed from my own experiences and the fear that I might not be able to land the jobs that I aspired to. And that’s something that I think a lot of young adults relate to. I’ve spent the 10 years since then just trying to help bridge that gap. I worked at early-stage startups, venture capital funds and, most recently, Amazon, trying to bridge that gap between learning and opportunity, helping make economic mobility more possible for different types of learners.

Siya Raj Purohit: I joined OpenAI about 8 months ago to help build up our education vertical. As you all might remember, in November 2022 ChatGPT launched and suddenly became such a widely used product around the world. And what was interesting for OpenAI is that learning and teaching was one of the most common use cases for why people were engaging with ChatGPT. So this year we launched a product called ChatGPT Edu, which is designed for universities and school districts to be able to use an enterprise-grade version of ChatGPT. With that, it brings all sorts of different benefits. There are all sorts of network effects that can exist on a campus once all students, faculty and staff have licenses.

Siya Raj Purohit: I will share a couple of examples of what that looks like. But a big part of my job is to help education leaders, educators and students start using AI more effectively on different types of campuses.

Michael Horn: Perfect. Perfect. Go ahead, Diane.

Diane Tavenner: Yeah, I mean, I think, rightfully so, Michael and I are both operating under the assumption that you’re probably biased toward seeing AI as something that offers real opportunity to improve and transform education, and clearly your personal pathway and journey has led you to that view. So one of the things we’re interested in is having you make the best case for how AI will impact education in a positive way. We have a lot of things in our minds that we’ve thought about, but we’re really curious to have our thinking expanded and to hear you make that very best case for us.

ChatGPT: Revolutionizing Personalized Learning

Siya Raj Purohit: So I believe that for education as a sector, personalized learning was always the holy grail. We always said that if we achieved that, we would have accomplished a lot of education’s goals. And I think that with ChatGPT, it exists. I have a personalized tutor that I talk to every day. It knows my projects, the skills I’m developing, my aspirations. And it helps me become a better knowledge worker every day. And I think that in education, it’s making high-quality tutoring available to anyone with an Internet connection and supporting educators by automating a lot of the time-consuming jobs that they do, to let them focus on what matters most to them, which is mentoring and inspiring students.

Diane Tavenner: That’s interesting. Let’s stick with that one for a moment, because, and we’ll get to this a little bit later, but I wonder: does that mean that the schools don’t actually end up changing very much, because the tutor and the sort of automated assistant just allow students and teachers to do things the way that they have been doing them, just better and more efficiently? I’m curious what you think about that.

Siya Raj Purohit: So right now, the most interesting examples we’re seeing are educators crediting ChatGPT with reducing teacher burnout, which as you both know is a big problem in America. Teachers who used to spend so much time doing lesson planning, quiz grading and all the preparation for classroom activities are able to outsource a lot of that work, or use ChatGPT to do a lot of that work. And so then they can focus on those classroom interactions and the engagement among different peers in the classroom, which I think is much more valuable. As far as the classroom dynamics go, I think that it is a big complement in the way that it brings personalized support and tutoring to individuals. But at the same time, I do think that there’s still value in students being grouped with others that are the same age as them, because then you develop a lot of social skills and you learn how to interact more. So I’m not of the mind that people should just do online school and have ChatGPT, because I think that social component is becoming increasingly more important.

Diane Tavenner: Got it. I’m thinking back to your 18-year-old self who wrote a book, which we could spend a lot of time just talking about. We’ve both written books; we know what it takes, and we weren’t writing them at age 18, I don’t think. Your whole premise there was: I’m not learning the skills that I’m going to need to be successful in the jobs that I want to have, or the careers I want to have. How do you see AI, and what you’re doing with ChatGPT, contributing to making that not true, or improving that? What is the intersection there with your personal passion?

From Personal Struggle to System Change

Siya Raj Purohit: The reason I wrote that book and felt so passionately about it, and I guess that passion is still so deep in me, is because at first I thought it was a Siya problem: Siya was not able to learn the engineering skills to be able to land the job that she wanted. And then I did enough research, speaking with some really accomplished individuals, to realize this was actually a system problem. And the book was my attempt to capture the scale of this problem and also prove to myself that this is not just the thing that I’m struggling with. And then I think the next part of that was, how can I free other people from the struggle? That’s when this journey to try to make economic mobility more accessible became my life passion. So I think with ChatGPT, one thing that it does really phenomenally, which I hope students will take advantage of, is it helps elevate our thinking. A lot of times I share my thoughts on a project and I’m like, how can I elevate my thinking? How would a COO of a rocket ship company approach this? And it helps expand my thought process much more.

Siya Raj Purohit: And I think while doing that, it helps us feel like less alone in a lot of these things that we encounter a lot of the problems because we can find the right examples, we can think bigger about this, we can find our own gaps. And I think these things are very powerful.

Diane Tavenner: Yeah. One of the things that’s interesting about talking to you that I’m observing is when we ask other people to make the best case scenario for AI, it’s a little bit detached from them. But what I hear in you is literally this is what you’re doing. This is how you’re working every day. It sounds like you are a true believer. Am I missing anything or am I hearing that right?

Siya Raj Purohit: I used to work really hard at AWS, but I accomplish about three times more every day at OpenAI, just because I have AI now. I use it a lot to uplevel myself, but also to uplevel the project outcomes I provide.

Diane Tavenner: Interesting. Awesome. Well, this next question might be more challenging for you.

Michael Horn: It’s a massive productivity tool for you. And I’m interested in your book. There’s this common theme, right? You did “me search,” as we would say, not just research, around your book. And then you were doing the same thing with this tool, because you’re living it in terms of your massive productivity boost. But I guess I’m curious about the flip side of some of these things, because, you know, there are a lot of skeptics who worry that AI might not just fail to have these transformational impacts, but might also undermine certain things. And so I’m sort of curious where you come out on some of this stuff. I’ll just name two, and then you can go wherever you want on it.

Michael Horn: Which is one, you said in some ways it actually makes you feel like you have a companion alongside of you to elevate your thinking. Some people said that actually could be dangerous because maybe you’ll be in isolation. Right. And not feel like you have to connect with others. And then you talked about elevating thinking. And I think that’s the other big worry that people have is that it’ll actually do the thinking for you. Right. And we won’t do the difficult, effortful work to learn about how to construct an argument and, you know, critical thinking and build knowledge so that we can analyze it and so forth and so on.

Michael Horn: And I’m just sort of curious. I kind of want you to steelman the argument and make the best skeptic’s case, but I almost more want you to just start to dig into these different use cases. You’ve heard the ones that I just named, and others; sort of talk us through how you think about them.

Human Connection in Education

Siya Raj Purohit: Yeah. So let’s first talk about the human connection piece. It’s really interesting, because a lot of educators come talk to me about their own doubts and concerns about the future of their profession. They’re like, will I still be a teacher or educator, given that ChatGPT exists and it’s getting so good? And this question honestly surprises me a lot, because the reason that I remember educators that have influenced my journey is because of who they were and how they made me feel and who they told me I could become. Right? These are things that ChatGPT doesn’t do, because ChatGPT and AI know about me only what I tell them. But great mentors can see things about me that I don’t even know about myself. And I think that’s a really important distinction. And I think that educators have this really unique opportunity in this era to double down on the things they got into teaching for: to mentor and inspire and find these connections.

Siya Raj Purohit: And now they have the opportunity to do more of that, because if they can help increase the potential or vision for more people, that’s the true power of education. I’m really excited about that. And I don’t think that ChatGPT will replace human relationships. I think it’s just going to become a support system. So how I use ChatGPT on my personal career front is that I tell it the things that I might want to become: this is my 5-year goal, this is my 10-year goal. Can you create a really robust roadmap for how I can get there? And it gives me really precise instructions: join these types of organizations, publish this type of content, think about taking on these types of projects at work. It’s really detailed.

Siya Raj Purohit: But what it misses out on is when my manager comes in and goes, hey, this is your superpower. You should double down on this; forget these types of strategic projects. They just home in on what makes Siya, Siya. Right? And that’s what we need more people to do for other people.

Michael Horn: Super interesting. Talk about the other part of this, the piece you mentioned about elevating thinking, giving you a personal roadmap. It’s amazing. Again, the other fear that I hear a lot of is people saying, well, it’s actually going to cause people to not do the effortful work to actually learn, or even get to the questions that you’re able to ask of it. How do you think about that concern?

Siya Raj Purohit: I think educators need to show more of what an extraordinary outcome looks like. We need to be able to showcase what amazing end products look like in different verticals and different domains. And the reason for that is that if you give a generic input to ChatGPT, you’ll get a very generic output, which a lot of students are realizing, because they’re just like, okay, I’m going to plug in my homework, get a very generic output, submit that. And that’s not what professors are looking for. So I think one of the most creative use cases I’ve seen is a professor at the Wharton School. He always had an essay as a final submission for his MBA class. And he asks, what is the value of an essay? The value of an essay is not necessarily in its output, but in the conversational skills and critical thinking skills that go into getting to that output. So now he requires that students use ChatGPT.

Siya Raj Purohit: He’s like, they are going to use it anyway, might as well make it a requirement. And now he measures the number of prompts they use to get to an essay that they’re really satisfied with. Some students are so good at prompt engineering that it takes them two or three prompts to get a really good essay. And some students go back 18 or 19 times to get to a good essay. And he uses that as a measure of their ability to clearly articulate what they’re looking for, which he thinks is a really important skill. So if he can teach students how to communicate those skills, in terms of communicating the output that they want to see, and also to visualize some really extraordinary output, then they’re going to be able to use AI as just a tool to get there.
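As an aside for readers who want to picture the professor’s metric concretely: counting prompts amounts to counting the student’s turns in an exported chat log. Here is a minimal sketch in Python; the list-of-dicts log format mirrors the familiar chat-completion message style, but it is an assumption for illustration, not an actual ChatGPT export format, and `count_student_prompts` is a hypothetical helper.

```python
# Count how many prompts a student needed before settling on a final essay.
# The transcript is modeled as a list of turns, each with a "role" key
# ("user" for the student, "assistant" for the model).

def count_student_prompts(conversation):
    """Return the number of user (student) turns in a chat transcript."""
    return sum(1 for turn in conversation if turn.get("role") == "user")

log = [
    {"role": "user", "content": "Draft an essay on supply chain resilience."},
    {"role": "assistant", "content": "Here is a first draft..."},
    {"role": "user", "content": "Tighten the argument in paragraph two."},
    {"role": "assistant", "content": "Revised draft..."},
]

print(count_student_prompts(log))  # 2 prompts to reach a satisfactory draft
```

Under this framing, a student who needs 3 prompts versus 18 differs not in the final artifact but in how precisely they can articulate what they want, which is exactly the skill the professor says he is grading.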

Michael Horn: So maybe this is the last question in this section that I have, because building off that, there’s almost an implied set of knowledge and awareness, right, that students need to have as a baseline to be able to have those expectations or hopes for outcomes and things of that nature. I’m sort of curious; you also mentioned what the purpose of an essay is. Implicit in all of that is that some of the artifacts we have used historically to gauge, you know, thinking processes and argumentation, et cetera, might change in the future. Right? The example we’ve used a few times at this point is that Bror Saxberg, one of our friends, likes to say Aristotle worried deeply that the written word would mean people didn’t memorize epic-length Homeric poems anymore. And he was absolutely right.

Michael Horn: And I don’t think any of us regret that. And so I’m sort of curious, your take of like, you know, sort of how we do work or the artifacts of what we think of as representing learning, how might those change even in the future? And maybe some of these concerns, they won’t all be that relevant because we will show our knowledge and skill development through other means.

Siya Raj Purohit: So I think a lot of basic calculations and basic strategic work is going to become much less important. I think a lot of listeners can probably relate: their teachers told them they wouldn’t always have a calculator around, so they needed to learn basic math early. And now we do always have one. So the basic elements of strategic thinking, I think, are going to be less important than they used to be. But the things that are going to be more important are critical thinking, but also emotional reasoning, the emotional intelligence to be able to adapt these outputs and make sure that they match the type of persona that you’re serving. So right now in my current role, I do a lot of partnerships and BD work and those kinds of things. And yes, I use AI to create the different types of documents and slides and the other kinds of assets that we share. But the way that I communicate them to the end user, to inspire confidence or interest, is the unique ingredient here.

Siya Raj Purohit: And we need to be able to teach that. So when the strategic work, as our reasoning models get smarter and do more of that strategic work, that human element helps people distinguish their work and stand out.

Diane Tavenner: Interesting. I’m so curious, because I think you, maybe more than other people, have started to personally see some changes happening in schools because of AI, and how it looks different and how it feels different. And I bet you can imagine those changes a little bit better than a lot of people. One of the things that I think we suffer from is a lack of imagination in this space, right? We all know what school looks like, and we have a really hard time breaking out and imagining something different. So can you just take us there? What could possibly look different, feel different, for a teacher, for a student in a school? What are you seeing? What are you predicting?

AI Revolutionizing University Experience

Siya Raj Purohit: For this one, I’m going to actually focus more on the university setting, because that’s where we’re seeing the fastest changes happen. Our current thinking around what an AI-native university looks like is that every campus will have multiple AI touch points that help enhance the student, faculty and staff experience on campus. So basically, the idea is that we’re going to take the knowledge of the campus and make it conversational and more accessible to these users. So when students come on campus, they’re going to have these orientation GPTs, where they can ask questions like, where’s the best pizza place in town? Or how do I change my roommate? Or any of these kinds of pre-term questions that they have. Then they’re going to come into classrooms where professors will have designed custom GPTs that have learned from the professor’s material and help answer questions. A professor at HBS, Jeffrey Bussgang, was telling me that most of his class uses custom GPTs between 12 a.m. and 3 a.m., when a human tutor is not available. And they can ask questions like, which CEOs handle layoffs well, and get the exact examples to help understand these kinds of concepts. So classroom conversations will become much more in depth because of this.

Siya Raj Purohit: But also, students will be able to say things like, I have a statistics exam coming up, can you give me some practice quiz questions at the same level as my professor provides, and just be able to go back and forth on classroom content that way. They’ll go to career services, where they’ll be able to use the university’s proprietary data to practice interviewing with a McKinsey partner or a McKinsey recruiter, all with AI. So all of these experiences will happen: student clubs, career services, classrooms. And it’s going to happen seamlessly for students, so they’ll be able to navigate between these very easily as they try to grow as students and professionals.

Diane Tavenner: Super helpful. I want to dig a little bit more, and this might be surprising to you, but I actually think a number of people in education, maybe fewer of those who listen to our podcast, have literally never even used ChatGPT yet. They haven’t logged into it. So let’s spend just a moment helping them picture what it means to have a GPT. Is it on their phone? Is it on a computer? Is it on a kiosk? What does it literally look like, if I’m a student, when I’m engaging? And what makes it seamless?

Siya Raj Purohit: I saw a meme recently which I thought was really funny: in Harry Potter and the Chamber of Secrets, Harry starts writing in this diary, and it’s Tom Riddle responding on the other side. But I really liked that example, because your first experience of ChatGPT feels similar to that. You just start writing. It’s a blank screen, and you have a conversation, and it converses back with you. And it’s actually a very magical feeling, because you’re able to have conversations with a superintelligence that exists outside of our brains, which is very powerful. So I think that it’s really important to be able to first start having this conversation. You can use chat.com, you can use the mobile app, you can start on WhatsApp now, or even call in.

Siya Raj Purohit: There’s a 1-800 ChatGPT number. So any of these mediums that make sense for you, you can start and you can ask basic questions. What we see most people do is start with very basic questions and kind of start building up as they gain more confidence in the back and forth interactions of this and then they’re able to do more and more complicated jobs. So how we think about transformation for organizations is the very first step is at an individual level. So when individuals start writing emails better, they start doing better, like project planning or activity building. Then it shifts up to the department level. That’s when people start collaborating together on different projects. One of the best examples I saw of this is that a school district told me it takes 40 people several weeks to assign which class goes into which room on campus.

Siya Raj Purohit: And now ChatGPT can do that in a few minutes. So hugely empowering at the department level. And then finally get to that organization wide level, which is when you’ll have so many different AI touch points and make that experience much easier as you navigate different levels of knowledge on campuses.
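The room-scheduling anecdote above is, at its core, a constraint-matching problem: classes with enrollments, rooms with capacities, find a feasible pairing. A toy sketch in Python (the class and room names are hypothetical, and a real district schedule involves many more constraints than capacity):

```python
def assign_rooms(classes: dict, rooms: dict) -> dict:
    """Greedy room assignment: place the largest class first, into the
    smallest still-free room that fits it. Raises if no feasible room."""
    free = sorted(rooms.items(), key=lambda kv: kv[1])  # rooms by capacity
    assignment = {}
    for name, size in sorted(classes.items(), key=lambda kv: -kv[1]):
        for i, (room, capacity) in enumerate(free):
            if capacity >= size:
                assignment[name] = room
                free.pop(i)
                break
        else:
            raise ValueError(f"no free room fits {name} ({size} students)")
    return assignment
```

This greedy pass solves only the simplified capacity version; it is meant to show the shape of the task a model is being asked to automate, not how ChatGPT actually performs it.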

Diane Tavenner: I think there’s another thing you’re saying that I’m not sure everyone will pick up unless we call it out, so I’m going to ask you to call it out. This is not going to be a generic GPT. The intersection with the campus is that you’re actually taking the data and the information and the expertise of the campus and, well, you’ll tell me the right words, but mixing it with the power of GPT to make it a sort of customized experience. Did I get that right? What does that look like? What’s going on there?

Siya Raj Purohit: So basically there’s ChatGPT, which is accessible to everyone. Everyone will have slightly different experiences as they go through it, but it’s basically a knowledge base and a conversational platform. Custom GPTs are specific instances of ChatGPT which are basically trained to do very specific tasks. So a professor can be like, this is my six months of curriculum. This is all the case studies I provide. Just reference these when answering all student questions. So now that super intelligence is focused. So it doesn’t like look at the web, it doesn’t research answers, it focuses on the six months of curriculum, goes very deep and helps students be able to learn from that more effectively.

Siya Raj Purohit: And you can use these custom GPT instances for any type of knowledge base. One of my favorite examples of this is that a professor at the University of Maryland told me that they created a custom GPT of themselves. They uploaded about 24 or 25 pieces of research work that they’ve done, all different pieces of writing, and now they talk to what they call Virtual Dave and get good ideas on what their next research project should be. So it’s like having a thought partner which is only limited to a finite amount of information that you share, but it’s super intelligent itself.

Diane Tavenner: Interesting. And let’s just stay here for one more quick beat, because you’re leading us into what the work might look like for the teacher or the professor, but let’s get a little bit more concrete. So that professor literally copied and pasted his stuff into GPT? Tell us a little bit about what his work is now. What’s he doing?

Siya Raj Purohit: Yeah, so it takes about 15 minutes to build a custom GPT. You upload PDFs or documents, so you don’t need to copy/paste, and you give it instructions. Again, this is where the assistant piece comes in. You explain to the custom GPT what its job is. So in this case, this professor is like, you are going to be my virtual thought partner as I think about my next research papers, my next book or my LinkedIn posts. I need you to sound the same as I have in my career so far. So maintain the same tone and professionalism, but help me ideate on what the next iterations of these projects can look like and give me very honest feedback.

Siya Raj Purohit: So those are the instructions he gave, and then the professor just has conversations with it. He’s just like, could I go in this direction? And the custom GPT is like, no, that’s a little overdone, why don’t we look at this path? And it just becomes a good research assistant for you.
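The setup Siya describes, fixed instructions plus a closed set of uploaded documents, can be mimicked in a few lines. This is a deliberately naive sketch: retrieval here is simple word overlap, whereas a real custom GPT embeds the files and has the model compose an answer grounded in the retrieved passages (the curriculum strings below are made up):

```python
import re

def _tokens(text: str) -> set:
    # crude tokenizer: lowercase words only
    return set(re.findall(r"[a-z]+", text.lower()))

def build_custom_gpt(instructions: str, documents: list):
    """A 'custom GPT' pairs fixed instructions with a closed document set.
    The returned function answers only from the uploaded documents,
    never from the open web."""
    def answer(question: str) -> dict:
        q = _tokens(question)
        # pick the document with the most word overlap with the question
        best = max(documents, key=lambda d: len(q & _tokens(d)))
        return {"instructions": instructions, "grounding": best}
    return answer
```

In ChatGPT itself, the equivalent steps happen in the GPT builder interface: write the instructions, upload the files, and the grounding is handled behind the scenes.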

Diane Tavenner: Awesome. Michael, here are the jobs to be done at the moment, I think.

Michael Horn: Seriously, right. We’re going to flag that one for coming back to, Diane?

Diane Tavenner: For sure. So let’s now bring in policy. I promise we will stop really soon, we’re getting to the end here. But I know that at OpenAI you think a lot about, you talk a lot about, you focus a lot on policy, and you’re engaging with the policy field. What are you learning about the intersection of education policy and policy around AI? What should we be looking at, looking for, watching out for, paying attention to from your perspective, as educators, as people who are leading schools and school systems and universities? What do you see coming? What’s important? What should we be thinking about?

Siya Raj Purohit: So right now universities are in a couple of different groups when they’re thinking about AI policy. Some have very established guidelines and clarity in terms of where AI plays a role in their student journey. I think some of the most forward thinking education leaders that I’m working with are like, okay, AI is accessible, the cat is out of the bag, it’s going to happen, and now I need to think about how I change my curriculum at the university to be able to use AI and help students prepare for the future. One of the best examples of this is at Harvard Business School: there’s a professor named Jake Cook who teaches a digital marketing course, and he’s mapped out what a digital marketer’s journey looks like now in the profession, the seven different jobs that a digital marketer does, and where AI enables each of those jobs. And he’s turned all of his projects,

AI Integration in Education Evolution

Siya Raj Purohit: So now you use AI to do competitive research, AI to create marketing assets and images, AI to help you with the copy and the website, all the elements of what he thinks students will need to know as they graduate into the workforce. And policies that enable this kind of forward thinking nature are really helpful for students, because then they go into an enterprise that has ChatGPT Enterprise and are actually able to use it effectively. And then there are other institutions that I think are still trying to figure it out. They’re concerned about how it might change their former assignments, how they can’t use the same kind of syllabus they might have used in past years. And a big part of our job right now is to showcase these examples from the forward thinking institutions and help these other universities grow their own thought process. At the end of the day, universities are the ones best suited to make these decisions for their students because they understand them the best. And it’s so interesting, because when you speak with a state school, you realize they care a lot about navigation of tools and being able to help students find the right information on a campus of 50,000-60,000 students, whereas small liberal arts schools are asking, how can I help the student voice their opinion more effectively? All of these things have AI solutions, but it’s universities that need to figure out what they want to become and how AI can help with that.

Diane Tavenner: Interesting. I could ask 27 more questions, but I’m going to ask Michael to rein me in and either wrap up with something or

Michael Horn: No, I think this is super helpful, Siya. I guess my last question is you’re clearly spending a lot of time with colleges and universities. Are there others in the OpenAI team? Are you spending similar amounts of time with K12 institutions or how do you think that’s going to evolve over time? Because clearly it seems like the colleges and universities are, not all as you just said, but many of them are wrestling with this yesterday. Are you seeing similar movement among K12 schools and districts or not? In which case that also tells us something.

Siya Raj Purohit: We have a growing number of K12 customers. But the big caveat is we don’t have an under-18 product right now. So it’s not for students, it’s for teachers and staff members in K12.

Michael Horn: Gotcha. Okay, super helpful. All right, well let’s maybe wrap up there. Something we love to do, Siya, though, before we let our guest go, is to wonder what else you’re reading or watching or listening to outside of your day job. And so maybe ChatGPT has recommended reading lists or watching lists to you. But I’m just sort of curious about one thing outside of work that maybe you could point us to.

Siya Raj Purohit: It’s interesting you say that. I’ve actually been asking ChatGPT a lot for book recommendations because I think it’s very magical when you find the right book at the right stage of your life. And I want to see if ChatGPT can help make that happen more often. It’s been mixed results so far.

Michael Horn: Okay.

Siya Raj Purohit: One book that I’m reading right now which is super fascinating, it’s called Say It Well, it’s written by one of President Obama’s former speechwriters, and he intertwines, like, how to be a good public speaker with stories from President Obama. And it’s just super fascinating to read about how, like, things that President Obama slipped on in different talks, which make him much more human and accessible, but also like the ways that he thought about providing great speeches and connecting with audiences around the world. So I’m finding the book really interesting so far.

Michael Horn: Very cool. What about you, Diane?

Diane Tavenner: Awesome, thanks for sharing. Okay. Well, I am going to turn to TV, because we’ve been talking so often, I’ve exhausted all the books I’m reading right now. And I’m a little slow on this one, about a year behind. But we just watched the series on FX, Shogun, and I must say, I was a little skeptical going in. I was a young kid when the book came out and then the miniseries on TV, and I was like, there’s no possible way this could be done well or without some real issues.

Diane Tavenner: And you all may know it’s won 18 Emmy awards, the most ever for a single season. It’s truly extraordinary and really thought provoking. Yeah. Highly recommend.

Michael Horn: So I was gonna say, you could imagine it winning awards, but someone who’d read the books being like, it still didn’t quite deliver, but it delivered for you, it sounds like.

Diane Tavenner: Well. And I never read the books or watched the original series.

Michael Horn: Okay. Okay. Okay. So.

Diane Tavenner: But I just had this image in my head, and as I understand it, the current version is very different from the old ones, but it’s great.

Michael Horn: Very cool. It’s been teasing me for a while, so that is a good endorsement. For mine, I guess I want to say, like, the NFL football playoffs or the Australian Open, but I feel like that gives away when we’re recording. Too late, I’ve given it away. But I’ll give you one other. I’ve actually really been enjoying, or I enjoyed, because I finished it in a day, a book recommendation that one of my daughters gave me. She actually ordered me to read it.

Michael Horn: She had finished it; it’s called The Girl with the Secret Name by Yael Zoldon. And I’ll apologize if I’ve mispronounced her name. But it’s historical fiction that takes place during the Spanish Inquisition, and it was fascinating. It was a history that I knew at a high level, but not with any depth at all, I will say, literally zero. And so my daughter was teaching me quite a bit. It was fun. So, that’s mine.

Diane Tavenner: I love when that happens.

Michael Horn: Yeah, no, I know you’ve had that experience with Rhett giving you many recommendations. So now maybe this is the first of many for me. But let’s wrap up there. Siya, a huge thank you for joining us, for shedding light on this topic, and for sharing frankly how you are using it in your daily life, both on your learning journey and in your work itself on a day-to-day basis. We really appreciate it, and we hope you’ll keep staying in touch so we can stay ahead of the curve alongside you. A huge thank you. And for all of you tuning in, we will see you next time on Class Disrupted.

]]>
The Future of AI: How HBCUs are Leading Innovation in Education /article/the-future-of-ai-how-hbcus-are-leading-innovation-in-education/ Tue, 04 Mar 2025 16:00:00 +0000 /?post_type=article&p=1011009 Join The 74 and the Progressive Policy Institute at 3 p.m. ET Tuesday for a special conversation about HBCUs and the future of artificial intelligence. Historically Black Colleges & Universities play an essential role in contributing to K-12 innovations across the country as laboratories for excellence.

Tuesday’s conversation will focus on how these schools are now serving as incubators for new AI tools and advancements. Joining Curtis Valentine, director of PPI’s Reinventing America’s Schools Project, will be Yourway Learning President Jason Green, EdSolutions CEO Jeff Livingston and Morehouse College’s Metaversity Director Dr. Muhsinah Morris. Click here to RSVP.

Sign up for the Zoom or tune in to this page Tuesday at 3 p.m. ET to stream the event.

More artificial intelligence coverage from The 74: 

]]>
Opinion: What’s the Best Way to Tell If an Ed Tech Product Works? Science of Learning Can Help /article/whats-the-best-way-to-tell-if-an-ed-tech-product-works-science-of-learning-can-help/ Mon, 03 Mar 2025 15:30:00 +0000 /?post_type=article&p=1010915 Educational technology such as apps and learning platforms is used by millions of children in classrooms and at home. Recent research suggests that not all ed tech, including some of the most popular tools, is supporting learning. It’s crucial to evaluate whether these tools are truly effective. But how can we tell what works?

A debate about how best to do this has long centered on two competing approaches: randomized controlled trials (RCTs) and co-design. Kirk Walters from WestEd and Katie Boody Adorno from LeanLab represent the opposing views. Walters argues that RCTs, which test tools in controlled environments, provide the most reliable and objective data on what works in the classroom. Boody Adorno champions co-design, a process that involves teachers and students in shaping technology to ensure it meets their needs. Both believe that relying on the other type of evidence leaves ed tech evaluations flawed.

Framing this as a choice between two methods misaligns with the principles of the Science of Learning. This field studies how people learn and how teaching methods can be improved through research. It combines expertise from psychology, neuroscience and education to determine the most effective strategies for diverse students and resources. Because learning depends on many related influences (a student’s background, teaching methods, culture and classroom situation), the Science of Learning uses various methods to understand what works best.


When learning scientists gather evidence on whether and how a technology changes education, they choose their methods based on the goal of the evaluation. If the aim is to understand how a tool can be created to fit teachers’ needs, then co-design methods are the best fit. If the goal is to measure a tool’s impact on specific learning outcomes with statistical precision, then RCTs are more appropriate.
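The RCT half of that choice boils down, in its simplest form, to comparing mean outcomes between randomly assigned groups. A minimal sketch with made-up test scores (real trials layer clustering, covariates and formal significance testing on top of this):

```python
import statistics

def rct_effect(treated: list, control: list) -> tuple:
    """Difference in mean outcomes between groups, with a standard error.
    This is the basic estimate an RCT reports; random assignment is what
    lets the difference be read as the tool's causal effect."""
    diff = statistics.mean(treated) - statistics.mean(control)
    # standard error of a difference in means (sample variances)
    se = (statistics.variance(treated) / len(treated)
          + statistics.variance(control) / len(control)) ** 0.5
    return diff, se
```

The point of the sketch is only to show what "statistical precision" buys: an effect size plus an uncertainty estimate, which co-design methods do not produce on their own.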

So why doesn’t everyone simply use both approaches? There are both philosophical and pragmatic reasons. 

Typically, learning scientists specialize: They are either experts in the quantitative (number-based) tradition, where RCTs are considered the highest form of evidence, or they focus on qualitative (descriptive) studies, which emphasize deep exploration of a topic. Similarly, research firms and consulting labs tend to stick to one research method. They train their teams in a specific approach and streamline their reports according to that method. This allows them to deliver quick results, which is great for fast-moving ed tech companies.

But focusing too narrowly on one method means ed tech providers don鈥檛 get the full picture of how their tools actually impact classrooms. Important insights, especially those that fall outside the chosen research approach, are often overlooked. As a result, teachers and users are left without a clear understanding of what truly works 鈥 and what doesn鈥檛.

This lack of clarity is a major problem. I’ve been in numerous meetings where ed tech companies came with a predefined list of outcomes, asking researchers to conduct a study to confirm only those results. They assumed that by sponsoring the research, they could get the outcomes they needed to boost their marketing. Such a report would then be used to sell the technology to schools, turning what should be objective research into a sales pitch.

This isn’t just happening behind closed doors; it also happens with some public calls for proposals. If you look at research requests from some large ed tech companies, you’ll often find that their terms dictate exactly what the study should find and which positive results they want to highlight. But this is not how real research works, nor is it an accurate reflection of how learning happens in the classroom.

Learning isn’t as simple as either/or. It’s a complex system in which multiple factors work together, with trade-offs among different approaches. What really matters is understanding how learning progresses in different ways over time and in various situations. For instance, studies have debunked the old idea of separating emotions and thinking: the same brain circuits process both. This means that a study focusing only on how a tool supports children’s emotional learning, without considering cognitive factors like memory, won’t give a complete picture of how it truly impacts learning.

If ed tech evaluations are to align with the latest and best science of how learning works, it’s time to completely rethink the way these tools are assessed.

First, studies that follow solid Science of Learning principles must share all findings, whether they are positive, negative or neutral. Transparency is crucial for both good research and the development of better products. If providers worry that being honest about what doesn’t work will hurt their business, they should think again. The market is shifting, and schools are tired of marketing hype. Being upfront about both successes and failures can build trust and lead to tools that genuinely make a difference.

Second, all research 鈥 regardless of its source or method 鈥 must uphold the highest standards of quality. Just as there can be poorly executed co-design studies, there can be flawed randomized controlled trials. What truly matters is applying careful, accurate and reliable methods. This should be the North Star for all researchers.

Third, to truly understand how learning works in different settings and with various tools, researchers and teachers need to accumulate evidence over time. Especially with new tools, like those powered by artificial intelligence, it’s crucial to run studies that not only describe and measure learning, but also help engineer and improve it across different students, grades and settings. Relying on just one study to understand a tool’s effects is like reading only one chapter of a book: it’s just a small piece of an ongoing story. These practices would bring much-needed accountability to ed tech research, ultimately leading to better learning experiences and outcomes for students.

]]>
Opinion: To Make Ed Tech More Secure, Software Companies Need to Step Up /article/to-make-ed-tech-more-secure-software-companies-need-to-step-up/ Tue, 25 Feb 2025 19:30:00 +0000 /?post_type=article&p=740414 Last month it was revealed that student information system provider PowerSchool suffered the largest education data breach in history, as stolen credentials were used to expose and steal sensitive data belonging to over 60 million students and teachers. In 2024, K-12 schools became a top target for ransomware, with recovery costs rising sharply over the past year. 

Education technology, or edtech, software is often the entry point for these cybercriminals, accounting for a large share of K-12 school data breaches between 2016 and 2021. The COVID-19 pandemic forced school districts across the country to shift to remote learning; they received significant federal and state funding to support this transition, much of which was spent on acquiring new software.


The average district now uses far more edtech products than it did in 2018, increasing the attack surface for cybersecurity threats at a time when only a fraction of school districts employ a full-time IT staff member.

To further complicate the matter, K-12 school districts are chronically understaffed and underfunded on this front, with the average school dedicating only a small fraction of its IT budget to cybersecurity. 

Schools are not equipped to secure all of the edtech products they depend on, but these software products are critical to running a modern school. Most critical school functions 鈥 including attendance, bus routing, lunch information, learning and grading systems, staff management and finance 鈥 all rely on edtech products to operate. 

That puts edtech software manufacturers in a unique position to improve cybersecurity outcomes for K-12 schools by integrating more security features into their products, shifting the burden from schools to industry.  

A forum held last October by UC Berkeley鈥檚 Center for Long-Term Cybersecurity, conducted in partnership with the U.S. Department of Education, convened representatives from 12 software manufacturers serving a large portion of U.S. school districts to discuss measures to help strengthen K-12 cybersecurity. Two key themes emerged again and again during the discussion, which are top of mind for industry as we go into 2025.

First, edtech software manufacturers need to take greater responsibility for improving security outcomes for their K-12 customers.

The use of multi-factor authentication (MFA), an essential security feature in edtech products, is seldom enforced as a mandatory requirement, even for privileged users. However, some software manufacturer participants demonstrated industry leadership by requiring it for all administrative accounts. 

One provider, inspired by Microsoft’s forthcoming MFA requirements, implemented mandatory MFA for administrative accounts and financial staff and adopted phishing-resistant authentication. But the rollout was difficult: despite many advance notifications of the change, the provider described the transition as disruptive for customers, even though it ultimately provided better security. 

Software manufacturers who have implemented mandatory MFA recommended other providers try a phased approach, such as extending authentication prompt intervals to once every one to two weeks to allow school administrators, IT staff, and teachers adequate time to adapt to the new requirements. They also recommended deploying changes during the summertime when school districts’ IT demands are at their lowest. 

Some are experimenting with new MFA tactics and security features, like authentication based on suspicious account activity and tracking data changes in their systems. Other solutions discussed include monitoring the dark web to identify stolen passwords, systems that prompt users to choose stronger alternatives, and solutions tailored for schoolchildren and parents, such as printable QR code badges that students can scan to authenticate during login.
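As a concrete picture of what an app-based MFA token involves: authenticator apps typically implement TOTP (RFC 6238), which derives a short-lived code from a shared secret and the current time, so a phished password alone is not enough to log in. A minimal sketch:

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = struct.pack(">Q", unix_time // step)  # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, unix_time: int, drift: int = 1) -> bool:
    """Accept codes from adjacent time steps to tolerate clock skew."""
    return any(totp(secret, unix_time + i * 30) == submitted
               for i in range(-drift, drift + 1))
```

The `verify` helper accepts codes from neighboring 30-second windows, the usual clock-skew tolerance; real deployments also rate-limit attempts and, as the vendors above note, prefer phishing-resistant methods like hardware keys where users will accept them.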

Second, vendors must overcome obstacles to integrating basic security controls into their products.

One of the biggest obstacles software manufacturers face in launching mandatory security features is balancing security with user convenience. They cite feeling pressured to prioritize ease of use, fearing that customers would switch to competitors with “simpler” but less secure solutions. 

Vendors shared case studies of schools that resisted platform changes that introduced friction into their operations or student learning, such as requiring an additional step to log on. For example, some providers observed that K-12 users prefer less secure authentication methods, such as email and text messaging services, over more phishing-resistant methods, such as app-based tokens or hardware keys.

Technical hurdles pose another barrier. Providers noted that some school districts rely on legacy software for HR, payroll, and bus routing that may be incompatible with modern authentication protocols such as SAML or OAuth. Some systems lack support for these protocols altogether or only offer them as paid features, especially for mobile applications. This makes integration challenging, requiring extensive testing to resolve compatibility issues, making the process resource- and time-intensive for software manufacturers. 

What’s Next

Incidents like the PowerSchool breach demonstrate the urgent need for edtech software vendors to do more to protect K-12 student and teacher data. Fortunately, the federal government has made headway on the issue in recent years.

For example, an initiative from the Cybersecurity and Infrastructure Security Agency (CISA), launched in September 2023, expanded from a K-12-specific pledge with 12 signatories into an enterprise-wide pledge by May 2024, with over 260 industry signatories. CISA also recently released guidance for software manufacturers.

The growing industry interest in prioritizing cybersecurity is encouraging. Evidence from our roundtable conveys that there’s an appetite from K-12 schools and companies to do more to relieve the burden on schools and secure edtech products. It is critical to continue this momentum; the edtech industry must pursue product changes that improve security, and federal agencies like CISA should continue building a coalition of companies that do so. 

Secure products benefit everyone, from teachers to parents to schoolchildren. Let’s double down on our progress before the next breach happens.

]]>
Class Disrupted Podcast: Ben Riley on Why AI Doesn’t Think Like Us /article/class-disrupted-podcast-ben-riley-on-why-ai-doesnt-think-like-us/ Fri, 21 Feb 2025 15:30:00 +0000 /?post_type=article&p=740289 Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic, and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on your favorite podcast platform.

Techno-optimists have high hopes for how AI will improve learning. But what’s the merit of the “bull case,” and what are the technology’s risks? To think through those questions, Michael and Diane sit down with Ben Riley of Cognitive Resonance, a “think and do” tank dedicated to improving decisions using cognitive science. They evaluate the cases made for AI, unpack its potential hazards, and discuss how schools can prepare for it. 

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hi there, I’m Diane, and what you’re about to hear is a conversation Michael and I recorded with our guest, Ben Riley. It’s part of our series exploring the potential impact of AI in education, where we’re interviewing optimists and skeptics.

Here are two things from the episode that I keep thinking about:

First, our conversations are starting to make me wonder if AI is going to disrupt the model of education we’ve had for so long, as I think Ben perhaps fears, or if it’s actually going to strengthen and reinforce our existing models of the schoolhouse with classrooms filled with a teacher and students.

The second thing that struck me was that the one case Ben makes for what could be beneficial about AI is something directly related to his work and interest in understanding the brain and how learning occurs. To be fair, there’s a theme emerging across all the conversations we’re having, where people see value in the thing that they value themselves. And perhaps that’s an artifact of the early stages, and who knows, but it’s making me curious.

And speaking of curious, a reflection I’m having after talking with Ben is about the process of change. Ben is a really well reasoned, thoughtful skeptic of AI’s utility in education. He comes to his views at least partially from using AI. I would consider myself much more of an optimist and yet I’m finding myself a little bit annoyed right now, that every time I want to write an email or join a meeting or send a text or make a phone call that I’ve got AI pretty intrusively jumping in to try to help me. And it’s really got me thinking about the very human process of change, which is one of the many reasons why I’m really looking forward to sense making conversations with Michael after all of these thought provoking interviews.

In the interim, we’d both love to hear your thoughts and reflections. So please do share. But for now, I hope you enjoy this conversation on Class Disrupted.

Michael Horn: Hey, Diane. It is good to see you again.

Diane Tavenner: You too. And I’m really excited to be back. Coming off of our last conversation around AI and education, I’m even more excited about what we’re going to be learning in this series. And I think today will be no exception in really stretching our minds and our thinking about the possibilities, the limitations, the potential harms of AI and its intersection with education.

Michael Horn: Yeah, I think that’s right, Diane. And to help us think through these questions today, we’re bringing someone on the show that I think both of us have known for quite a long time. His name is Ben Riley. He previously founded Deans for Impact in, I believe, 2014, a nonprofit that connects cognitive science to teacher training. Ben stepped aside a couple years ago and has most recently founded Cognitive Resonance, which is a think and do tank, in its words, and a consultancy whose focus is on this topic of AI and learning, which is perfect and makes Ben the perfect guest for us today. So, Ben, welcome.

Ben Riley: Thanks so much for having me. We’ll see if you still think I’m the perfect guest by the end of it, but I appreciate being invited to speak to both of you.

Ben Riley鈥檚 Journey to the Work

Michael Horn: Absolutely. Well, before we get into a series of questions that we’ve been asking our guests, we’d love you to share with the audience about how you got into AI so deep, specifically because I will confess and I’ll give folks background, I’ve been reading. I’ve actually been an editor on a couple of the things that you’ve submitted into Education Next on AI, and I found them super intriguing. And then somehow I had no idea that you created this entire life for yourself around AI and education. And you have some language on this that I think is really interesting on the site where you say the purpose is to influence how people think about Gen AI systems by actually using the lens of cognitive science. And you believe that will help make AI more intelligible, less mysterious, which will actually help influence what people do with it in the years to come. And then you write that you see it as a useful tool, but one with strengths and limitations that are predictable. And so we really have to understand those if we want to harness them in essence. So how and why did you make this your focus?

Ben Riley: Yeah. Well, thank you for clearly having read the website, cognitiveresonance.net, or the Substack, Build Cognitive Resonance. In many ways, the organization reflects my own personal journey, because several years ago I started to become aware that something was happening in the world of AI. At the time it was called deep learning; that was the phrase that was starting to emerge. And to be completely candid, my focus has always been, and in some ways still very much is, on how human cognition works. Artificial intelligence is considered one of the disciplines within cognitive science, along with psychology, neuroscience, linguistics and philosophy; it’s an interdisciplinary field. And for me, quite honestly, AI was sort of this thing happening somewhere over there that I had maybe a loose eye on. I got in touch with someone named Gary Marcus at the time, and we’ll come back to Gary in a second, and just said, hey, Gary, can you explain deep learning to me and what it is and what’s going on? And that sort of began that conversation. And then quite frankly, I just kind of squirreled away and didn’t think much about it. And then, like it did for all of us, ChatGPT came into our lives. And I was stunned. I was completely stunned when I first sat down with it and started using it. And what really irked me was that I didn’t understand it. I was like, I don’t get how this is doing what it’s doing. So I am now going to try to figure out how it’s doing what it’s doing. And that is not easy. At least it wasn’t easy for me, and I don’t think it’s easy even now, for those who might have spent their entire lives on this, much less those of us who are coming in late in the game or just trying to make sense of this new technology in our lives. 
And what I was able to draw upon was both sort of the things that I do know and have learned over the last decade plus around human cognition and frankly draw on a lot of relationships I have with people who are in cognitive science broadly, and just start having a bunch of conversations, doing a bunch of reading, and really trying to, you know, build a mental model of what’s taking place with these tools and with large language models specifically. And when I finished all that, I thought, well, geez, it seems like, you know, that took a lot of work. Maybe it would be helpful to sort of try to pass this along and bring others into the conversation. So that’s really the thesis of Cognitive Resonance.

AI’s Educational Upside

Diane Tavenner: Ben, everything you just described is so consistent with my experience with you over the years, the conversations that we’ve had, and my perception of what you care about. And I’m so glad you brought it together in that way, because I’ll be honest, when I was like, wait, Ben is doing AI? Like, that didn’t totally land with me. So what I’m hearing from you makes me super curious for this conversation, because I’m not getting the vibe that you’re a total AI skeptic, and I’m not getting the vibe that you’re a total cheerleader. I’m guessing we’re gonna have a really nuanced conversation here about this. So let’s start with the poles, and then see where we go. Can you make the argument for us of how AI is going to positively impact education? And I’m not saying it has to be your argument, but can you just stand up an argument for us based on what you’ve learned about how it could? Like, what’s the best case to be made for AI positively impacting it?

Ben Riley: Yeah. So this is what people are now calling steel manning, right? Like, can you steel man the argument that you may not agree with? I had a law school professor who taught me that the best way to write a good legal brief is to take the other side’s best argument, make it even better than they can make it, and then defeat it. And you all gave me this question in advance, and I’ve been thinking about it since you did, and I don’t know if I can make one best case. What I want to do is make three cases which I think are the positive bull cases. So number one, one that I think should be familiar to both of you because we’ve been having this debate for nearly a decade, is sort of personalized learning, a dream deferred, but now it can be real. When we said we were going to use big data analytics to figure out how to teach kids exactly what they want to know, when they need to know it, what we meant was we needed large language models that could do that. And now, lo and behold, we have that tool. And as Dan Meyer likes to joke, it can harness the power of a thousand suns. It’s got all of the knowledge that’s ever been put into some sort of data form that can be scraped from the Internet or from other sources, and they don’t always disclose what those sources are, but nonetheless, there’s a lot of data going into them, using these somewhat mysterious processes that they have of autoregression and back propagation. And we can go as deep as you want in the weeds on some of those terms, but by doing that, we can actually finally give kids an incredibly intelligent, incredibly patient, incredibly, some would even say loving, some have said that, tutor. And we can do that at scale, we can probably do it cheaply. And boom, Benjamin Bloom’s dream, two sigma gains. It’s happening finally. There we go. All right, so that’s argument number one. Call that the personalized maximization argument.
Argument number two, I think, is the sort of AI as a fundamental utility argument. And the argument here is something along the lines of, look, this is a big deal technologically in the same way the Internet or a computer is a big deal technologically, and it’s one of those technologies that’s going to become ubiquitous in our society, the same way the computer or the Internet has become ubiquitous in our society. And we don’t even know all the many ways in which it’s going to be woven into the fabric of our existence. But that includes our education system. And so some benefits will accrue as a result of its many powers. Okay, so that’s the utility argument. The third argument would say something like this. It would say the process of education fundamentally is the process of trying to change mental states in kids. And I mean, frankly, it doesn’t have to be kids, but we’ll just talk about it from teachers to students.

Michael Horn: Sure.

Ben Riley: And there are some really big challenges with that, when you just distill it down to the act of trying to make a kid think about something. One of the challenges is that we cannot see inside their head. So the process of what’s taking place, cognition or not, is opaque to us, number one. And number two, experiments are really, really hard. They’re not impossible, but you can’t really do the sort of experiments that you can do in other realms of life, partly for ethical reasons, but also, frankly, for scientific, technical reasons. Because again, we can’t see what’s happening in the head. So even when you run an experiment, you’re getting approximations of what’s happening inside the head. Some would then say, well, now we have something that is kind of like a mind, and we can kind of, emphasis on kind of, see inside it. And we definitely can run experiments on it in a way that doesn’t implicate the same ethical and other concerns. That argument, and I’ll call that the cognitive argument, human and artificial, would say that we can use this tool to better help us understand ourselves. In some ways it might help us by being similar to what’s happening with us, but in other ways it might help us by being different and showing those differences. So those are the three arguments that I see.

Evaluating the Case for AI

Diane Tavenner: Yeah. Super interesting. Thank you for making those cases. Which, if any, of them do you actually believe? Now I’m curious about your opinion, and why.

Ben Riley: Yeah. So I have bad news for you. The first one, the personalized maximization dream, is going to fail for the same reason that, I would like to say, I predicted that personalization using big data analytics would fail. We could spend the entire podcast with me unpacking why that is. I’m not going to do that. So I’m going to limit it to just two arguments. Okay. The first would be that these tools fundamentally lack a theory of mind. Okay. So that’s a term that cognitive scientists use for the capacity that we humans have to imagine the mental states of another. And these tools can’t do that. There’s some dispute in the literature, and researchers will say, well, if you run these sorts of tests, maybe they’re kind of capable of it. I’m not buying it. I don’t think it’s true. And there’s plenty of evidence on the other side as well saying that they just don’t have that capacity. Fundamentally, what they’re doing is making predictions about what text to produce. They’re not imagining a mental state of the user who’s inputting things into it. Number two, I would say, is that it obviously misses out on a huge part of the cultural aspect of why we do this and why we have education institutions and the relationships that we form. And the claim that students are going to want to engage with and learn from digitized tutors, the likes of which Khan Academy and others are putting out, I think is woefully misguided and runs counter to literally thousands, if not hundreds of thousands, of years of human history. Okay, so number one, doomed. Number two is to me kind of a, so what? Right? So I use the example of computers and the Internet as ubiquitous technologies that AI might join. So, like, let’s say that’s true. Let’s say that comes to pass. So what? Like, we have the Internet now, we have computers now. We’ve had both of these things for decades. They have not, I would argue, radically transformed education outcomes.
The ways in which technologies like this become sort of utilities in our lives transform our day-to-day existence. But just because a technology is useful or relevant in some way or form does not mean, emphasis on does not mean, that it is somehow useful for education purposes and for improving cognitive ability. So, absent a theory as to the ways in which these tools are going to do that, whether or not they become, you know, ubiquitous background technologies is kind of a so-what for me. Number three, the cognitive argument, that this tool could be a useful example and non-example of human cognition, I have a great deal of sympathy for. I am very curious about it. There’s a lot, a lot that has changed just within linguistics, I would say, in the last several years in terms of how we conceptualize what it is these tools are doing and what that says about how we think and deploy language for our own purposes. We may have just scratched the surface with that. The new models that are getting released, the now quote unquote reasoning models, have a lot of similarities in their functionality to things in cognitive science like worked examples and why those are useful in helping people learn. A worked example being something that lays the steps out for a student: here, think about this, then think about this, then think about this. Well, it turns out if you tell a large language model, do this, then do this, then do this, then do this, or just sort of program it to do that, its capabilities improve. So, you know, without sounding too much like I’m high on my own supply, this is the Cognitive Resonance enterprise. It’s sort of to say, okay, let’s put this in front of us, and instead of focusing so much on using it as a means to an end, let’s study it as an end unto itself, as an artificial mind, quote unquote, and see what we can learn from that.
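[Editor’s note: the “do this, then do this” prompting Ben describes here is what the AI literature calls step-by-step or chain-of-thought prompting, and the parallel to worked examples can be made concrete. The sketch below is purely illustrative; the function names and prompt text are our own, and no real model is called.]

```python
# Contrast a direct prompt with a "worked example" style prompt.
# The step structure mirrors how a worked example scaffolds a student:
# lay out the intermediate steps instead of jumping to the answer.

def direct_prompt(question: str) -> str:
    """Ask for the answer with no intermediate steps."""
    return f"Question: {question}\nAnswer:"

def worked_example_prompt(question: str) -> str:
    """Lay out intermediate steps first, as a worked example does."""
    steps = [
        "Restate what the question is asking.",
        "List the facts you need.",
        "Work through the facts one at a time.",
        "State the final answer.",
    ]
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return f"Question: {question}\nThink step by step:\n{numbered}\nAnswer:"

if __name__ == "__main__":
    q = "A train leaves at 2pm and travels for 3 hours. When does it arrive?"
    print(worked_example_prompt(q))
```

The empirical claim Ben references is that prompts of the second shape tend to improve model performance on multi-step tasks, much as worked examples help novice learners.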

Michael Horn: Super interesting, Ben, on that one. And I’m just thinking about an article I read literally this morning about where it falls short of mimicking, you know, the true neural networks, if you will, in our brain. So I’m pondering that one now. I guess, before we go to the outright skeptic take, if you will, I’m sort of curious about other things that you think AI won’t help with, in your view, beyond what you just listed in terms of, you know, this broad notion of personalizing learning or AI as utility, if you will, and the so-what question. Like, are there other things that people are making claims around, where they think AI is really going to advance the ball here, and you’re like, I just don’t see that as a useful application for it?

Ben Riley: Well, you know, we launched into this conversation and we didn’t define what we’re talking about when we talk about AI. Right?

Michael Horn: There’s different streams of it. Yep.

Ben Riley: Yeah. And I think that when I’m talking about AI, and at least have been talking about it in this context thus far, I’m talking about generative AI, mostly large language models, but it includes any sort of version of generative AI that is in essence pulling a large amount of data together and then trying to make predictions based on that, using sort of an autoregressive process, or diffusion in the case of imagery, but essentially trying to aggregate what’s out there and, as a result of that aggregation, produce something that relates to it. If you’re talking about beyond that, like, who knows? I mean, there are just so many different varied use cases. I was mentioning this off air, but I’ll say it now on air: there’s a great book, AI Snake Oil, written by a couple of academics at Princeton, which talks about predictive AI, which they put in a separate category from generative AI, and they’re very skeptical about any of those uses. My fundamental thing is what people think about the big claim, right? And unbelievably, Sam Altman, the CEO of OpenAI, just a few days ago declared that, like, we’ve already figured out how to create artificial general intelligence. In fact, that’s like a solved problem. Now we’re on to superintelligence. I think people should be very, very skeptical of that claim. And there are a lot of reasons why I would say that, which again, could eat up the entire podcast. But I’ll just give you one. What we now know is true, I think, from a scientific perspective about human thought, is that it exists independently of language. Language is a tool that we use to communicate our thoughts. So if that’s true, and I would argue in humans it is almost unassailably true.
And I can give you the evidence for why I think we think that, or why we know that. Then it would be very strange if we could recreate all of the intelligence that humans possess simply by creating something like a large language model and using all of the power of all the Nvidia chips to harness what’s in that knowledge. Now, what people will say, and frankly this is what all the billions and the leading thinkers on this are trying to do, is, okay, we can only go so far with language. How about we try to do it for other cognitive capacities? Can we do that? Can we create neuro-symbolic AI, as it’s called, that is as powerful as generative AI with large language models, and sort of start to piece this together in the same way that we may piece together various cognitive capacities in our own brain, and then loop that together and call it intelligence? To which I say, well, good luck. I mean, honestly, good luck. There’s no reason to think that just because we’ve done it with large language models, we’re going to have the same sort of breakthroughs in the other spaces. So I don’t know if this fundamentally answers your question, Michael, but I would say that you can have progress in this one dimension. It can actually be quite fascinating and interesting. But I would urge people to slow down in thinking that it just means that, you know, all of science and humanity and these huge questions around whether we will ever be able to fully emulate the human mind have suddenly been solved.
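[Editor’s note: for readers who want the term “autoregressive” made concrete, it just means predicting the next token from the tokens produced so far, then feeding each prediction back in as context. The toy bigram sketch below is purely illustrative; real LLMs use neural networks trained on vast corpora, not word counts like this.]

```python
from collections import Counter, defaultdict

def train_bigram(text: str):
    """Count, for each word, which words follow it in the corpus."""
    words = text.split()
    following = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        following[a][b] += 1
    return following

def generate(model, start: str, n: int):
    """Autoregression: each predicted word is appended to the output
    and becomes the context for the next prediction."""
    out = [start]
    for _ in range(n):
        candidates = model.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])  # greedy decoding
    return out

model = train_bigram("the cat sat on the mat and the cat ran")
print(generate(model, "sat", 3))  # ['sat', 'on', 'the', 'cat']
```

The same feed-the-output-back-in loop is what a large language model runs, only with a learned probability distribution over tokens instead of raw counts.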

The Skeptical Take 

Diane Tavenner: Yeah. Wow. So fascinating. I have so many things coming to me right now, including my long journey and experience with people who make extraordinary, you know, claims and then kind of make the work a little bit challenging for the rest of us who are actually doing it behind them. But let’s turn now, we’re kind of steering in that direction, but let’s go all the way in on the skeptical take. And I feel confident you’ve got some good material here for us. Like, what is AI going to hurt, specifically in education? Let’s start there. And how’s it going to do harm?

Ben Riley: Yeah, well, I don’t think we should use the hypothetical or the future. Let’s talk about what it’s harming right now. So, I mean, the big danger right now is that it’s a tool of cognitive automation. Right? What it does is fundamentally offer you an off-ramp from doing the sort of effortful thinking that we typically want students doing in order to build the knowledge that they will have in their head that they can then use in the rest of their life. And this is so fundamentally misunderstood. It was misunderstood when Google was starting to become a thing and the Internet was becoming a thing. You would hear well-meaning people in education say, well, why do we need to teach it if you can Google it? Right? That was a thing that many people said, put up on slides. I used to stop and listen. And look, it makes sense if you don’t spend any time with cognitive science and you don’t spend any time thinking about how we think. And so I don’t want to throw those people too far under the bus, but just a little, because now we know. We know this. Like, this is scientific, as established as anything else is established. Our ability to understand new ideas in the world comes from the existing knowledge that we have in our head. That is the bedrock principle of cognitive science, as I like to describe it. So suddenly we have this tool that says, you know, to the extent you need to express whether or not you have done this thinking, let me do that for you. Like, this exists in order to solve for that problem. And guess what? It is very much solving for that problem. Like, I think the most stunning fact that I have heard in the last year is that OpenAI says that the majority of its users are students. Okay, the majority.
Now, I don’t know what the numerator and denominator is for that, and I’m talking to some folks trying to figure that out, but they have said that. At the OpenAI education conference, Lea Crusey, who some of you may know, who was over at Coursera, got up and said, and I think they meant it as something they were happy about, that their usage in the Philippines jumped 90% when the school year started. What are those kids using it for? Yeah, you know, what are those kids using it for? Like, we need to stop pretending that this isn’t a real issue. And for me, people sort of go, well, it’s plagiarism, you could always plagiarize. And it’s like, not exactly. Not exactly. And I think it actually both overstates and understates the case to talk about it in the context of plagiarism. Because again, the real issue here is that we will lose sight of what the education process is really about. And we already have, I think, too many students and too much of the system sort of oriented around get the right answer, produce the output. And I think teachers make this mistake, unfortunately, too often. I think a lot of folks in the system make this mistake of, we just want to see the outcome, and we are not thinking about the process, which is really what matters, and building that knowledge over time. And you’ve got now, I mean, I literally sometimes lose sleep over this, you’ve got a generation of students whose first experience of school was profoundly messed up because of the pandemic. And then right on top of that, we have now introduced this tool that can be used as a way of offloading effortful thinking. And I don’t think we have any idea what the consequences are going to be for that cohort of students and the potentially, like, dramatic deficiencies in the quality of education that they will have been provided. That’s one big harm. There’s another.
I mean, there are many others, but there’s another that I’ll highlight here, too. I don’t know if either of you watched, I imagine you did, the introduction of ChatGPT’s multimodal system last year, which included the Khan family, Sal Khan and his son Imran, on there. I thought it was fascinating, and it speaks again to the number of users who are students, that OpenAI chose Sal and his son to debut that major product. If you watch that video closely, and you should, you’ll see something, I think, that is worth paying attention to, which is that at multiple points, they interrupt the multimodal tutor that they’re talking to. And why not, right? It’s not a life form. It doesn’t have feelings. And we know that, it’s a robot, you know, to a degree. I don’t think we’ve really grappled with the implications of introducing something human-like into an education system, and then having students, who are still learning how to interact with other humans, that’s another part of education, and saying, you know what, it’s okay to behave basically however you want with this tool, right? Like, the norms and the sort of, you know, ways in which schools inculcate values and inculcate how it is we relate to one another could be profoundly affected in ways that we haven’t even begun to imagine, except in the realm of science fiction. And I think it’s worth looking at science fiction and pointing to how we tell these stories. I don’t know if either of you watched HBO’s Westworld, particularly the first season before the show went off the rails. But if you watch, if you watch.

Diane Tavenner: Season one was a little intense, too.

Ben Riley: Season one was intense, but it was good. I thought it was good. But it was haunting. And one of the things that was haunting about it is, for those who haven’t watched the show, it’s filled with cyborgs who are quasi-sentient, and, you know, people come to these amusement parks, and it’s like the old west, and what can you do? You can kill them. You can kill them, and people do that, or worse.

Diane Tavenner: Right, yeah. Well, talk about the other bad thing.

Ben Riley: Right, right. I mean, but, you know, it’s sort of like the fact that we can now imagine that sort of thing being a future where you could have things that are like humans, but not. The philosopher Daniel Dennett, who passed away, talked about the profound dangers of counterfeiting humanity. And I think that’s the sort of concern that is almost not even being discussed at any real level as we start to see this tool infect the education system.

AI鈥檚 Impact on How We Think

Michael Horn: I suspect that’s going to be something we visit a few times in this series. But you’ve just done a couple things there. One, you’ve, I think, more articulately answered how, you know, a lot of the bad behavior we’ve seen on social media could actually get exacerbated, not through deepfakes per se, but in terms of how we relate to one another. But you also answered another one of my questions, which is that I can’t remember a consumer technology where education has been the featured use case in almost every single demo repeatedly. And you may have just answered that as well. I’m curious about a different question, because I know you and Bror Saxberg have had sort of a back and forth about, you know, whether certain things that maybe it’s harming are going to be less relevant in the future. And he loves to cite the Aristotle story, right? That we’re not going to be memorizing Homeric-length poems anymore, and maybe that’s okay because it freed up working memory for other things. I’m sort of curious to get your reflection on that conversation at the moment, because I think Diane and I would strongly agree that replacing effortful thinking, thinking that you can just, you know, have people not grapple with knowledge and build mental models and things like that, that’s going to have a clearly detrimental impact. Are there things where you say, actually, it’s going to hurt this, but that may be less relevant because of how we accomplish work or something like that in the future? I don’t know your take on that.

Ben Riley: Yeah, I don’t think you’ll like my answer, but I’m going to give you my honest answer.

Michael Horn: I don’t know that I have an opinion. Like, I’m just curious.

Ben Riley: Yeah, I mean, I’m not a futurist, and I’ve made very few predictions ever in my life, at least professionally. One of the few that I did was that I thought personalized learning was a bad idea in education. And I’d be curious, whether in this conversation or another, whether you two, reflecting back on that, would go, actually, you know, knowing what we know now, there were reasons to be skeptical of it. And I’m annoyed at the turn he seems to have taken, because I used to like to quote Jeff Bezos. So with all the caveats around, you know, Jeff Bezos and anybody right now from big tech, he has said something that I think is relevant. He’s asked all the time, you know, what’s going to change in the future and how to prepare for that. And he says that’s the wrong question. He says, you know, the thing that you should plan around is what’s not going to change. He’s like, when I started Amazon, I knew that people wanted stuff, they wanted variety, they wanted it cheap, and they wanted it fast. And as far as I could tell, that wasn’t going to change. People weren’t going to say, I want to spend more or have it take longer to get to me. And he said, once you have the things that won’t change, build around those. So I said it earlier, I’ll say it again. The thing that’s not going to change is that our cognitive architecture is fundamentally the product of certainly hundreds of thousands, if not millions, of years of biological evolutionary processes. It is further, I think, the product of thousands, tens of thousands, of years of cultural evolution. We now have digital technologies that can affect that culture. So it does not mean, and I am not contending, that our cognitive architecture is some sort of immutable thing, far from it.
But on the other hand, it would suggest that what we should do is, A, not plan around changes that we can’t possibly imagine, but B, maybe more importantly, and I would say this to both of you, not try to push for that future. You know, we should fundamentally be small-c, very small-c, conservative about these things, because we don’t know. You know, I don’t know how much time the cognitive transitions back in Socrates and Aristotle’s time took, but they took place. My strong hunch is that they happened not so much as the product of any deliberate choice, but through a sort of social conversation about the ways in which we should talk to one another. And it was clearly the case that writing things down proved to be valuable in many dimensions. It may prove to be the case that having this tool proves very valuable in many dimensions. But let time and experience sort that out rather than trying to predict it.

What Schools Can Do To Prepare

Diane Tavenner: Super helpful. I love where you’re taking us, which is into actual schools. So I appreciate that you’re like, let’s talk about what’s actually happening right now. And, you know, that is where my heart and work always are, in real schools. And so given what we are seeing, what you’re articulating about what’s actually happening right now in schools, and given that, well, I won’t say it as a given. What do schools need to do to mitigate the challenges you just described, to recognize this as a reality that is coming our way, that maybe can’t be put back in the box? Now, I’m going to say that with a caveat, because I’m reading in the last day or two that people are declaring, you know, that they’ve won the cell phone war and cell phones are going to be out of schools here pretty soon. So maybe you actually believe it’s possible to kind of put it back in the box in schools. But, like, what’s the impact on schools, and what do they do literally right now, given what you’re saying is actually happening already?

Ben Riley: Yeah. Great questions, all of them. So, I mean, thank you for bringing up the cell phone example, because I cite that often, even before there was this wave now, at the international level, national level, state by state, district by district, to suddenly go, these tools of distraction aren’t great for the experience of going to school and concentrating on, hopefully, what the teacher is trying to impart through the act of teaching. So we can, it’s not easy, but we can take control of this. Nothing is inevitable. So, you know, people always say, well, you can’t put it back in the box. You know, AI will exist, but how do we behave towards it? What ethics and norms do we try to impart around it? These are all choices we get to make. I like the phrase, and I’m borrowing this from someone named Josh Brake, who’s a professor at Harvey Mudd. He has a wonderful Substack, I think it’s called The Absent-Minded Professor, and he writes a lot about AI in education. His phrase is, you have to engage with it, but that doesn’t mean integrate it. Right? And, you know, Diane, you kept saying schools. I just think it’s teachers, educators, who need to engage with it. That can still mean that the answer after you engage with it is no, not for me, and also no, not for my students. I think that’s a perfectly acceptable thing to say. And look, maybe the students won’t follow it, but, you know, you’ve done what you can, right? And that is all you can do. There’s a teacher out there who I’m desperately trying to get in touch with. Her name is Chanea Bond. She teaches here in Texas. She made waves on Twitter a while back by saying, look, I’ve just banned it for my kids because it’s not good for their thinking. People were like, what? And she was like, yeah, no, it’s not good. It’s interfering with their thinking. So I’ve banned it. That’s a perfectly reasonable answer.
I also think that once you start to understand it at a basic level, and I’m not talking about getting a PhD in back propagation and artificial neural networks, just starting to understand it, you’ll start to understand why it’s actually quite untrustworthy and fallible, and that if you just think everything it’s telling you is going to be accurate, you have another think coming. One of the things in the workshops that I’ve led that I’ve been very satisfied by is when people come out on the other side of them and they’re like, yeah, okay, so this thing isn’t reasoning, and it’s not this all-knowing oracle. And once you have that knowledge, once you’ve demystified it a bit, I think it gets a lot easier to grapple with it and make your own choices and your own decisions about how you want to do it. I will say that right now, in the education discourse, things are way out of balance between the hype and enthusiasm on one side and the, hey, pump the brakes, or at least, have you thought about this, on the other. If you’ll forgive me, it’s a free resource, but if you go to cognitiveresonance.net, we’ve put out a document called the Education Hazards of Generative AI, which literally just tries to, in very bite-size and hopefully accessible form, say, here are all the things you really need to think about, with some cautionary notes across a number of dimensions, whether you’re using it for tutoring, for material creation, or for feedback on student work. Like, there are a lot of things that you need to be thinking about and aware of. One of the things that frustrates me is that I see a lot of enthusiasts, and this ranges from nonprofits to the companies that make these tools, sort of saying, well, teachers, fundamentally, it all falls to you. Like, if this thing is not factual or it hallucinates, like, it’s your job to fact-check it.
And it’s like, well, come on. A, that’s never going to happen, and B, not fair, you know, not fair to put that on educators and just kind of wipe your hands clean. So I do think that’s something we’re still going to have to sort through as a society, on a social level as well as within schools, as well as at the level of the individual teacher. And ultimately, students are going to have to bear some agency themselves about what choices they make around whether and how to use it at all.

What We’re Reading and Watching 

Diane Tavenner: I’m so appreciative of this idea of agency here. And I do think that’s certainly a place that I’ve always been, and core to my values and beliefs as an educator: the importance of agency, not only for educators, but for young people themselves. I love that this is such a rich conversation. We could go on and on and on, but I feel like maybe we leave it there. Real people, real teachers, real students, real agency. So grateful for everything that you brought up, so much to think about. And we’re gonna pester you for one last thought, which is, Michael and I have this ritual of, at the end of every episode, we share what we’ve been reading, watching, listening to. We try to push ourselves to do it outside of our day jobs, and sometimes we seep back into the work because it’s so compelling. And so we want to invite you, if you have thoughts for us, to share them.

Ben Riley: So I told you I had a weird one for you here. So I was just in New Orleans, and when I was in high school, for reasons that I won't go into detail on here, my family got really into the Kennedy assassination, and the movie JFK by Oliver Stone came out. And I don't know whether either of you have watched that film in a long time. It's an incredible movie. It's also filled with lies and untruths, and in that way it's much like a large language model.

Michael Horn: I think we watched it in high school, but keep talking.

Ben Riley: Yeah. Yeah. Well, the reason I bring it up is because Lee Harvey Oswald lived in New Orleans in the summer of 1963. And that movie is based on the case that was brought by the New Orleans District Attorney, a guy named Jim Garrison. But there's a bunch of real-life people who are in that movie or portrayed in that movie. And I just started to think about accidents of history, where all of a sudden you could be, you know, just a person of relative obscurity as far as anyone broadly paying attention to your life. And all of a sudden something happens, and now you become sort of this focus of study. And trust me when I tell you that every single person who had any connection with Lee Harvey Oswald in his life has become an object of study to people, and books have been written. And so what I'm trying to do, this is very bizarre, I know, is think about and understand what it is like for people in that situation. Like, what it is like to suddenly have your story told and not have control of it anymore. And, you know, this isn't supposed to be work related, but in a way I think it does connect back up, because it goes back to the fact that these tools are taking a lot of human-created knowledge and sort of reappropriating it for their own purposes. And we haven't really touched on that. I don't think we need to now. But there are a lot of artists who feel a profound sense of loss because of what's happening in our society today. That's another thing I think worth thinking about.

Diane Tavenner: Wow, you're right. I didn't see that one coming. But it's fascinating. Thank you for sharing it. I am unfortunately not going to stray from work today. I can't help myself. Three of my very good friends have recently released a book called Extraordinary Learning for All. And that's Aylon, Jeff Wetzler, Janee Henry Wood. And it's really about the story of how they work closely with communities on the design of their schools in a really profound and inclusive way. And so I'm deep in that, been involved in that work for a long time, and think it's just a really powerful kind of inspiration slash how-to guide of how communities can really take agency over their schools and own them and figure out what they want and what matters and what they need, and how they design accordingly.

Michael Horn: I was gonna say, now Jeff has appeared twice in a row in our book recs, I think, on episodes or something like that. So love that. Diane, I'll wrap up by saying I'm gonna go completely outside of the conversation today. But, Ben, you may say it actually relates as well, because I've been binging on season two of Shrinking. I loved season one, and season two, with the exception of a couple episodes in the middle, has been no exception. So I'm really, really enjoying it so far. And I suppose you could connect that back to.

Ben Riley: What is Shrinking? I don’t know. I have to. I don’t know what it is.

Michael Horn: Okay, it's basically about three therapists in a practice, one of whom is grappling with a deep personal tragedy. And Harrison Ford is outrageously hilarious. Yeah.

Diane Tavenner: So good. It’s so good. Okay, well, I’m gonna tag on to your, you know, out of work one and say yes, we love Shrinking as well.

Michael Horn: Perfect. Perfect. All right, well, we’ll leave it there. Ben, huge thanks for joining us. For all of you tuning in, huge thanks for listening. We look forward to your thoughts and comments off this conversation and continuing to learn together. Thank you so much as always, for joining us on Class Disrupted.

Opinion: Randomized Controlled Trials Remain the Gold Standard for Ed Tech Research
/article/randomized-controlled-trials-remain-the-gold-standard-for-ed-tech-research/
Mon, 10 Feb 2025 19:30:00 +0000

In a recent interview published by The 74, Leanlab founder Katie Boody Adorno says randomized controlled trials may be "an outdated mode of research." I wholeheartedly disagree.

RCTs remain the gold standard for effective research for good reason. They reduce sources of bias that plague other designs by accounting for observed and unobserved characteristics between the groups being studied. They can often be done quickly and provide the strongest evidence of whether a product's impact varies based on students' racial, ethnic and socioeconomic backgrounds. Most important, RCTs answer the key question about any ed tech product: Does it work? This is not to say RCTs are always appropriate; for instance, using one to evaluate a program to be used from elementary school through high school would be unworkable. But, where feasible, RCTs are the best approach to determining whether a product or program functions as intended.

Ed tech is a rapidly growing $150 billion industry that shows no signs of slowing down. In fact, it will likely double or triple in size over the next decade. It is absolutely essential that ed tech companies work with researchers throughout the development process to ensure that their products are as effective for educators and students as their marketing materials claim. A range of research designs, including RCTs, is required to ensure a product works as intended in the courses and classrooms for which it was created.


Randomized controlled trials are uniquely structured to determine whether a product's impact is the result of the product itself or of other factors. This matters: a product that works for students with a teacher who's enthusiastic about it may not be effective with a teacher who is less familiar with it or uninterested. In an RCT, the product is randomly assigned to a group during a fixed period of time, while a comparison group continues with its usual activities. The groups have similar characteristics, including the degree of interest in the product.

This means that RCTs can reliably reveal which interventions are working. This is especially important in efforts to bolster the success of the most vulnerable students. For example, a randomized controlled trial of a statewide public prekindergarten program in Tennessee found that it was actually degrading future academic growth in elementary school for low-income kids, an alarm bell that weaker methods would not have been able to detect because they could not have controlled for the fact that families who self-select into public pre-K are different from families who don't. Only through randomized control was it possible to determine whether the program was producing the desired outcomes for all students regardless of their families' economic status or focus on early education.

Beyond producing rigorous evidence of impact, RCTs have other advantages over different designs.

First, they are simple and transparent. Because they compare two similar groups whose only difference is the use of a product, RCTs clearly and directly demonstrate whether the kids in Group A tend to do better than the kids in Group B. The answer can be clearly understood without the use of statistical models or nonreproducible "insights" of the type that come from classroom observations.

Second, because conducting RCTs requires only as much statistical training as one gets in a typical master's program involving math and statistics, such as finance or economics, they can easily be set up by state and local educational agencies. This frees local leaders from relying on studies done in other schools and districts that have different student demographics and serve different communities, allowing them to move from asking, "Does this work overall?" to, "Does this work for my students?"

Third, RCTs can strengthen studies that are conducted to help shape the final product. When ed tech products are at the start of the development process, it's often unknown whether version A or B works better. Suppose you're building an app that focuses on fractions and you want to find out if one visual representation engages students more than another. Randomly assigning students within a classroom to see either version A or B will produce better information than letting the students decide on their own which one to view.
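For readers who want to see the mechanics, the within-classroom assignment described above can be sketched in a few lines of Python. This is a toy illustration with made-up student labels and engagement scores, not any real product's pilot data:

```python
# Toy sketch of within-classroom random assignment to version A or B.
# All names and numbers are illustrative, not from any actual study.
import random
import statistics

random.seed(42)  # fixed seed so the assignment is reproducible

# A hypothetical roster of 30 students.
students = [f"student_{i}" for i in range(30)]

# Shuffle the roster, then split it in half: first half sees version A,
# second half sees version B. Neither students nor teachers choose.
random.shuffle(students)
half = len(students) // 2
group_a = students[:half]
group_b = students[half:]

# Pretend engagement scores (0-10) collected after the pilot.
engagement = {s: random.uniform(0, 10) for s in students}

# With random assignment, a simple difference in group means is a fair
# comparison, because nothing about the students determined their group.
mean_a = statistics.mean(engagement[s] for s in group_a)
mean_b = statistics.mean(engagement[s] for s in group_b)

print(f"Version A mean engagement: {mean_a:.2f}")
print(f"Version B mean engagement: {mean_b:.2f}")
```

The key property is in the shuffle: because group membership is decided by chance rather than by student preference, any systematic difference between the groups' outcomes can be attributed to the version they saw.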

Fourth, RCTs eliminate selection bias. Many districts and schools pilot programs and products with volunteers before deciding whether to scale them more broadly. RCTs provide a more realistic picture of implementation because the group testing the product is assigned randomly rather than because its members have already expressed interest in it.

It is undeniable that RCTs require some time to complete, but that is the cost associated with rigor, and developers, educators and students all deserve to benefit from the most rigorous research methods available. The ed tech cemetery is rife with products that couldn't deliver on their promises. The nation's children and educators deserve educational tools backed by the strongest evidence possible to support learning and academic success. RCTs provide that evidence better than any other method does.

Opinion: Developing Ed Tech With a Focus on Students, Teachers and the Research
/article/developing-ed-tech-with-a-focus-on-students-teachers-and-the-research/
Sun, 20 Oct 2024 17:01:00 +0000

I have never been so aware of the lack of evidence-based tools and resources to serve student needs as when I worked in the central office of a New Orleans school district following Hurricane Katrina in 2005. I looked at many curricular and professional learning options, but high-quality ones were extremely limited. We were at the mercy of the market, which at the time offered little to no support to serve the range of needs of students experiencing poverty.

In today’s educational landscape, research and development is essential to driving progress and innovation in pre-K-12 teaching and learning. Yet, the dominant approach continues to revolve around market-driven solutions that are developed without evidence or teacher input. Unlike industries such as health care and energy, where there is seamless integration between scientific discovery and practical application, education often lacks a formal process that bases investments in tools and solutions on scientific investigation and proof of effectiveness. This fragmentation in R&D leads to inefficiencies, wasted funds on solutions that do not serve most students and missed opportunities for innovation that can improve learning gains.

In health care, the efficacy of a new treatment or medication must be rigorously demonstrated before it can reach real-world practice. In education, research should similarly align with the practical needs of students and educators to ensure that solutions work and genuinely help all those they are intended to impact. This trajectory from initial scientific breakthrough to classroom application demands rigorous measurement, evaluation and feedback. It requires a commitment to exploring, making mistakes and always improving without sacrificing a dedication to positive change. And it requires meaningful and frequent input from the people who will use these innovative solutions, starting at the beginning.


As chief learning officer at the , I have seen first-hand the real impact that systemic change and student-centered, research-based design of educational technology can make. Understanding the experiences of students, caregivers and educators is essential to staying learner-focused from the outset, especially because caregivers hold dreams for their children's futures and teachers know what young people's academic needs are. Involving researchers, developers and policymakers from the beginning makes it possible to identify the right opportunities and help design solutions that align with the needs of all students. Then, when a promising innovation takes shape, it's essential to bring in industry experts and investors who can provide the financial backing for an expansion beyond a single classroom or school.

The need for innovative solutions to help students recover from pandemic learning disruptions has never been more urgent. The nation needs all hands on deck: educator and community engagement, high ethical standards for ed tech and, importantly, federal investment in education R&D. Here are four key criteria that teachers, administrators and those purchasing education technology and curriculum for their districts should look for:

  • A strong focus on both scientific evidence and real-world experience is essential in building new solutions for the future. Educators should talk with colleagues about what has worked for them in supporting student learning and improving classroom practice, and engage expertise from many sources: fellow teachers, administrators, industry experts, researchers and policymakers.
  • Look for products that were created using expert insights from educators, researchers and product developers, based on how students learn best. These should include diverse perspectives to ensure the technology is accessible to students who are disproportionately impacted by poverty. Ask questions about where the product was piloted, in what types of schools, in what geographic locations, with which demographic and socioeconomic groups, and look for evidence of positive impact.
  • Does the product foster inclusivity, equity and transparency? Does it serve the best interests of all the students involved while treating each as an individual? Does it ensure data privacy, is it accessible and does it offer diverse content? Will it work in rural areas and in large, diverse cities? These are key ethical questions to consider.
  • Teachers lack the bandwidth to spend their valuable time navigating how to move in and out of each ed tech tool. New products need to work in concert with one another interoperably and mesh seamlessly with teacher workflows. They should not just be created inside a lab, but developed based on what works inside the classroom, with a lot of input from the people who will use them.

These four elements can change the way education solutions are developed, enabling faster, bigger breakthroughs that can benefit pre-K-12 teaching, learning and assessment systems.

Q&A: Katy Knight's Quest to Fund Ed Tech's 'Deeply Unsexy Things'
/article/the-74-interview-katy-knights-quest-to-fund-ed-techs-deeply-unsexy-things/
Wed, 16 Oct 2024 12:30:00 +0000

Over the past year and a half, Katy Knight has been on a quiet quest to uncover good education-related tech tools, often powered by artificial intelligence. With access to a bank account nearing half a billion dollars, she's got money to spend if she finds something she likes.

But she'll readily tell you, "There's just not a lot of stuff that's worth funding."

Knight is president and executive director of the Siegel Family Endowment, created by computer scientist David Siegel, a co-founder of the embattled, $60 billion quantitative trading firm Two Sigma. A former Google and Two Sigma employee herself, Knight sees her role as helping to bring evidence-backed tools to market, tools "that we can learn something from."


That has led her to underwrite small, often experimental undertakings such as , which works with students and teachers to promote , focusing on student needs and inputs. For instance, if students want to improve the quality of school lunches, instead of asking nutritionists or school staff to design menus, a school would turn to kids to study the problem and suggest solutions. 

She also supports , an innovative high school network in Pennsylvania, and the , a nonprofit that promotes instruction paced by students, relying on mastery rather than seat time.

Knight has espoused an approach that she calls "inquiry-driven philanthropy," searching for schools and startups doing important work, and treating grantmaking "as almost field experimentation" alongside more traditional research she funds. "Everything has an orientation toward, 'What can we learn from this, success or failure, to give back to the field?'"

She has also said educators and policymakers are missing something in the conversation about classroom technology, reducing it to an "all or nothing" question. "We either have to say 'No tech' or 'Very low tech: lock away the phones, keep the kids disconnected, ban ChatGPT, etc.,' or it's 'We're all in. Every kid gets an iPad. They're going to learn on technology all day.'"

Accordingly, she has many thoughts on AI, the current panic about phones in schools, and how she separates good ed tech from bad.

This interview has been edited for length and clarity.

The 74: You've said your goal is to fund "deeply unsexy things" in ed tech. As someone who gets email pitches every morning about deeply sexy things that I'm very skeptical about, that was a breath of fresh air. What are "deeply unsexy things"? Why is that important?

Katy Knight: Philanthropy can be very much like the private markets and everything else, consumed by . We are just as fallible and just as susceptible to chasing the Next Big Idea, the next sexy thing. And I think that’s fine in some respects. Philanthropy should be risk capital, which means sometimes there’s going to be a sexy thing that will impact the social sector 鈥 and we should fund it.

But more often than not, change is happening on the back end. It's not always something new, and it's not always using the latest and greatest technology. Sometimes we're talking about the reality of the digital divide in a place where people want to be talking about generative AI, and that's not capturing attention. So it's even more important that we, as a philanthropy with a bully pulpit, think about which layers of the bureaucracy we can tackle to achieve systems change. Even though they're unsexy from a news perspective or a razzle-dazzle perspective, I think they are actually impactful and interesting.

Let's talk about some of the things you're funding, starting with Quill, the nonprofit that offers free AI-powered writing, reading comprehension and language skills lessons. What's your thinking there?

Quill is sexy, in that they've got this front-facing technology. Everyone wants to talk about consumer-facing tools. What's less sexy, I think, is that we're not talking about how it's the latest ChatGPT model. This is about years and years of actual teacher feedback. It's about training something really specific. It's relatively niche. And those are the kind of AI applications that I think actually have the highest potential: applying a powerful technology to something niche should have outsized impacts. That kind of thing makes sense to me. I think there's a lot of opportunities for us to think about, "O.K., if we weren't just chasing the best, coolest image-generating technology, what might we be doing to actually serve student need and teacher need?" It starts from asking questions about what matters, what the actual challenges are, and then you get to something that's useful, even if it's not as shiny as some of the other ed tech startup things that are coming across your inbox.

You're also funding Quill and others to develop a "Responsible AI Playbook." Say more about that.

Even though the social sector is smaller than the private markets in terms of investment in new ed tech tools, if we have even a small chorus of people thinking about responsible AI and pushing back against this overarching narrative that we just have to let it run amok, that’s net beneficial to the field.

Talk about the small chorus. Who are the other singers? 

The big one is the . The other network we've been involved with is the . (based in Zurich, Switzerland) helped found this group of funders, developers and researchers globally who are now thinking together about responsible development, specifically through the lens of "How do we create real-world environments for developers to test their tools and hear feedback from teachers and young people more directly," rather than just building things that sound like they'll capture a lot of market share.

Can you say more about the trialing network?

We are funding some of the U.S. work, particularly through our partners at and . Leanlab has been crucial because what they do really at their core is very much aligned with this vision of having real live environments where there’s some co-creation of these tools. We’re funding that work through them. They’ve had two global meetings that I participated in. 

Leanlab Executive Director Katie Boody Adorno has built a very cool, small, nimble organization that's focused particularly on the notion of the co-design of ed tech tools. They work with startups that are really genuine about wanting to design for impact, not just for investors. And they create relationships with schools to have teachers be paid for their participation and to have teachers actually be testers and provide feedback directly to the designers at these startups. I think it's just a very cool model for almost an accelerator for impact, rather than an accelerator for marketing.

Do you have thoughts on phone-free schools?

It's a simple solution to a complex problem. On the one hand, in a vacuum, I might say, "Absolutely, we need to be more distraction-free." And much like when I was in elementary school and they were taking our away, we've got to put the phones away. On the other hand, I understand the complex issues of school safety, of child care arrangements in a world where parents have to work. Thinking about what students are in school for, what we want them to be doing, how we want them to be learning, and whether or not we want them to feel so attached to these devices, is a really important conversation. But we can't divorce it from reality: We live in a really uncertain and sometimes dangerous world, and I understand the perspective of parents who might want to be able to reach their kids during the day in the event of an emergency and other things.

When I was at the last spring, somebody I was with said, “Take a good look around: Half of these guys will be gone by next year.” On the one hand, that seems like a very cynical thing to say. It also seems entirely right. Is it a good thing that companies come and go, that you’re always dealing with somebody who’s got a different vision? Is that a healthy thing for education?

In any private market solution, some cycling of companies and iteration is not a bad thing. I think there’s a mismatch between how the tech startup venture world works and how education products need to work. In the VC-backed startup world, we’re funding a bunch of things with the intention that one or two of them will have 100x, 1,000x returns, and a lot of them will go bust. Those companies are incentivized and encouraged to capture as much market share as possible to achieve that investment return. Whether they are actually impactful to students or not is almost irrelevant in that initial drive to capture market share.

That’s not to say that there shouldn’t be competition and a diverse set of tools that educators can dig into. But if they’re getting served up a shiny new presentation for a new tool that they’re being told they absolutely need every month, that sort of churn is incredibly disruptive. 

How do you separate good ed tech from bad? 

When I hear a startup say that their total addressable market is all 80 million students in the country, I know it's unlikely that product is worthwhile, because there are so few ed tech products (so few products in general) that can actually serve every single student in the country. So unless you've got a more limited perspective on what the market is, I don't think you've actually aligned what you're building with the reality of what is needed.

I was heartened to read in journalist Audrey Watters' last month that she's returning to writing about ed tech. She wrote that she's ready to "dutifully remind you that the future of human and machine learning as envisioned by Silicon Valley's libertarian elite is a pretty shitty one." Thoughts?

I love that! I mean, look: Not to zoom out too much, but I think as a society we've grown somewhat accustomed to being test subjects for tech companies across the board because everything is free. And they say, "Oh, if it's free, then you're the product." And we are. "We're releasing a new version of this tool. Your email client is going to change tomorrow." Do you have any say in it? Nope. We're very used to living in a world where we're told what to do by tech platform companies, and they will manage things just how they see fit.

That doesn’t work for education. That doesn’t work when you have no grounding in learning science, pedagogy, or even just being in a classroom. And so I think that is not just an education problem. It impacts the education sector specifically, but I do think it’s a broader societal concern. Our interaction with technology is not one where we have enough agency.

Ed Tech Startup Behind L.A. Schools' Failed $6M AI Chatbot Files for Bankruptcy
/article/allhere-ai-los-angeles-schools-tool-bankruptcy-filing/
Thu, 12 Sep 2024 10:30:00 +0000

The education technology company behind Los Angeles schools' failed $6 million foray into artificial intelligence was in a Delaware bankruptcy court Tuesday seeking relief from its creditors and permission to sell off its meager assets before shutting down entirely.

The latest chapter in AllHere's dizzying collapse revealed more information about the once-lauded company's finances and its relationship with the Los Angeles Unified School District. But the hearing failed to answer key questions about why AllHere went under after garnering $12 million in investor capital, a blizzard of positive press and a contract with the nation's second-largest school district to create "Ed," the buzzy, AI-powered chatbot.

During the hearing, held over Zoom, one of AllHere's only remaining executives, former chief technology officer Toby Jackson, struggled to explain why the company paid ousted CEO Joanna Smith-Griffin $243,000 in expenses over the past year and owed $630,000 to its largest creditor, education technology salesperson Debra Kerr.


"I don't know exactly the nature of all of [Smith-Griffin's] expenses. She was the CEO and so that is one of the outstanding questions that we also have," Jackson said when quizzed about the six-figure amount by the bankruptcy trustee. "She did do quite a bit of travel as the CEO of the company."

Similarly, Jackson said he had no invoices to substantiate the $630,000 debt to Kerr, who is a longtime associate of Los Angeles schools Superintendent Alberto Carvalho, dating back to his days leading Miami-Dade schools. Kerr's son, Richard, is a former AllHere account executive who told The 74 this week that he pitched the AllHere deal to Los Angeles school leaders.

"I'm not really sure what exactly that entails," Jackson said of Kerr's claim.

Moments later, Kerr chimed into the Zoom hearing, arguing the company owed her the money after she helped AllHere close the lucrative deal in L.A. Kerr said she was never paid her commission from the first payments that LAUSD made to AllHere under the contract. 

The district has said it paid AllHere roughly $3 million of the $6 million for the chatbot, which was taken offline shortly after AllHere announced in June that it was in financial distress and had furloughed most of its employees. 

"I never did collect any commissions, and it's in the contract, based on commission percentages that would have been made on any sales accrued," Kerr told the trustee.

Smith-Griffin, who now lives in North Carolina, was not present for the Zoom hearing and could not be reached for comment. There were indications in the hearing that her separation from AllHere was not amicable, including that the former CEO has refused to disclose the password to her $500 company-owned laptop, one of its few remaining assets. 

Court records show that Jackson, now the head restructuring officer, earned $305,000 a year in his role with the company before it shuttered, nearly three times the $105,000 paid to Smith-Griffin, a Harvard University graduate who built AllHere in 2016 with financial backing from the prestigious institution. 

Filed in mid-August, AllHere's Chapter 7 bankruptcy petition deepens doubts that it could find a new owner to take over its mission as an AI pioneer in K-12 schools. That scenario was put forth by a Los Angeles school district spokesperson earlier this year with the assertion that "Ed" could still be successfully launched as a personalized, interactive learning acceleration tool for all of the district's roughly 540,000 students and their families.

Instead, court records show AllHere's few remaining employees are preparing for "the wind down of the company," and officials acknowledged during Tuesday's proceeding that AllHere was unable to fulfill the terms of its contract with L.A. Unified.

A lawyer representing the school district was present at the hearing. In a statement Tuesday evening, a district spokesperson said LAUSD is "evaluating its next steps to pursue and protect its rights in the bankruptcy proceedings."

Los Angeles schools Superintendent Alberto Carvalho appears in a photograph with Debra Kerr, which the education technology salesperson later posted on LinkedIn. (Screenshot)

Kerr and Carvalho 

Ties between Kerr and Carvalho go back to at least 2010, when she worked for the behemoth education company . Back then, she gave Carvalho and Miami students what she to an original print of the U.S. Declaration of Independence. Ever since, Carvalho, who took over leadership in Los Angeles in 2022, has been a staple on Kerr's social media.

A LinkedIn post promoting L.A.'s chatbot noted that the tool worked in partnership with services from seven companies including , the creators of digital education program ABCmouse and where Kerr previously worked as head of sales.

Kerr didn't respond to requests for comment, but her son, Richard, who began working at AllHere in 2022, said among the school district deals he worked on for the company was the chatbot project in Los Angeles.

"We had a big deal in L.A., and the investors, I guess, didn't have patience to wait to get paid from it," he said.

Kerr said he met with education officials in Los Angeles and "did a lot of work" helping the company secure the agreement. When asked about his mother's role in closing AllHere's contract in Los Angeles, Kerr said "she had a lot to do with it," but didn't elaborate further.

A statement from the L.A. district spokesperson said that "Los Angeles Unified launched a competitive" request for proposals that received "multiple responses," which eventually led to AllHere's selection. This spring, Carvalho went on the road with Smith-Griffin to promote "Ed," billing the chatbot personified by a yellow sun as being "unprecedented in American public education."

Richard Kerr said that before he was furloughed, AllHere was a great place to work, in part because of Smith-Griffin’s leadership.

鈥淚t’s very unfortunate what happened to Joanna. I thought she was on a great path and she was doing an amazing thing,鈥 he said, adding that she made a mistake when she 鈥渂rought in the wrong investors that were pretty vindictive鈥 and decided to cut short the company without giving it a proper chance. 

AllHere’s former senior director of software engineering, who became a company whistleblower, told The 74 earlier this year that AllHere struggled to meet the terms of its contract in Los Angeles and took shortcuts that violated bedrock student privacy principles and district rules. Both the district’s independent inspector general and top administrators have launched separate investigations into what went wrong with AllHere.

Even though his mother, Debra Kerr, was on the Delaware court’s Zoom call Tuesday, Richard Kerr said he was unaware his former employer had filed for bankruptcy.

What’s left

The company’s few remaining employees and board members, including former Chicago Public Schools Chief Executive Janice Jackson, have not made themselves available for comment.

AllHere investor Andrew Parker, who was on vacation Tuesday and didn’t attend the court hearing, now serves as the company’s secretary. In addition to Janice Jackson, the others who signed AllHere’s bankruptcy petition are Andre Bennin, a managing partner with an investment firm, and education consultant Jeff Livingston.

Even though Smith-Griffin is no longer with the company, court records show she still has a significant stake, holding 81% of its common stock. Rethink Education was by far the company’s biggest outside investor.

Other top creditors, according to court records, are the law firm Gunderson Dettmer at nearly $275,000, an information technology company at $190,000 and the education consulting firm Whiteboard Advisors at $123,000.

Earlier in the summer, The 74 spoke with Gunderson Dettmer partner Jay Hachigian, who said he had only worked with AllHere early in its formation. He didn’t respond to requests for comment this week about his firm’s large outstanding balance with the company. Whiteboard Advisors spokesperson Thomas Rodgers said in an email that his firm previously worked with AllHere but its role is covered by a nondisclosure agreement.

Court records show the company earned $2.4 million in gross revenue last year but had generated much less since January, about $587,000.

At the time of bankruptcy, court records show the company had active contracts with just 10 school districts, including those in Cincinnati, Miami and Weehawken, New Jersey. Only Weehawken sought to use the chatbot platform created for LAUSD, while the rest relied on an earlier text messaging tool designed to combat chronic absenteeism. 

Despite landing millions of dollars in backing from a group of social impact investment firms, several of which cited their enthusiasm for investing in AllHere specifically because it was led by a Black woman, court records reveal the company’s coffers are nearly empty. AllHere claimed nearly $2.9 million in property and $1.75 million in liabilities. The company’s actual assets, Toby Jackson acknowledged in court, are much lower.

It claimed an “unknown” value on pending patents, which Jackson conceded Tuesday had been denied, and $2.88 million for licenses, franchises and royalties for its LAUSD contract. Other assets, including its website and chatbot source code, were also listed at a value of “unknown.”

Jackson said the $2.88 million valuation of the Los Angeles contract reflects the outstanding balance the district owes to fulfill the agreement, money he admitted AllHere would be unable to collect because it has not “held up our part of the bargain in the contract” and is closing shop.

Financial statements to the court show AllHere had $18,000 in savings and just $500 in physical assets: the value of Smith-Griffin’s work laptop, whose contents remain outside the tech company’s reach.

“We have not been able to obtain the credentials for Mrs. Smith’s laptop. We did not receive any cooperation with that,” Jackson testified Tuesday. “She has been cooperative with some other matters, but not with this one.”
