Proposal for NYC AI-Focused Public High School Sparks Pushback
Mon, 16 Mar 2026

This article was originally published in Chalkbeat.

New York City students with a passion for STEM – and an interest in artificial intelligence – may soon have a high school dedicated to training "the next generation of technology professionals."

But families in Manhattan's District 2 are pushing back against the proposal for Next Generation Technology High School, a new screened admissions high school that would take the place of the tiny, girls-only Urban Assembly School of Business for Young Women. Next Generation would be the first city public school to focus its curriculum on AI and computer science.

As details of the two proposals have emerged over the last month, so have two tensions: what should fill the space left by Business for Young Women, and how private technology companies and their artificial intelligence products could shape the curriculum at Next Generation.

Much of the opposition to Next Generation has come from families at Lower Manhattan Community School, a middle school also in the Broadway building. Parents at the school, also known as LMC, have called on the department for years to expand enrollment from grades 6-8 up to grade 12.

The Panel for Educational Policy, the board that votes on new schools and closures, is expected to consider the proposals for Next Generation and Business for Young Women at its April 29 meeting.

The Education Department released both proposals on March 6, the day after the city's eighth graders received their high school acceptance offers. If approved, Next Generation would welcome its first class of ninth graders in the fall. (The plan to close Business for Young Women in June is not contingent on Next Generation's approval.)

Despite not yet having the green light, Next Generation has already held three virtual open houses. Its website states the school is "set to open" in fall 2026, noting that applications would open March 19.

Parents ask: 'Why this school and why here?'

Manhattan High Schools Superintendent Gary Beidleman introduced the idea for Next Generation Technology High School at a Feb. 25 District 2 meeting.

Panel for Educational Policy members and families of the three co-located schools at 26 Broadway – in addition to LMC and Business for Young Women, Richard R. Green High School of Teaching shares the building – said that meeting was the first time the district school community had been notified of the proposed STEM- and technology-focused screened high school.

At the Feb. 25 announcement, Beidleman said Next Generation grew out of an experience he had in summer 2024, and that Google and OpenAI are part of the planning team for the school. One of the school's goals, he said, is to "expand pathways connected to high-growth technology careers" and provide advanced STEM and technology programming for NYC students. Next Generation also plans to offer a summer internship program with Carnegie Mellon University.

Caleb Haraguchi-Combs, founding principal and project director of Next Generation High School, said in an information session that the school would utilize Google coursework. How much of this AI-powered, AI-focused Google coursework would comprise the curriculum is still in flux, according to the proposal.

The school's academic description includes language similar or identical to that found on the Google Skills website: Next Generation's "special access to technology industry mentors," "technology certifications," and "curriculum that adapts to the dynamic changes in the technology field" are offerings advertised on the homepage of the Google Skills site.

Officials and families question new school proposal process

Community members and Panel for Educational Policy members have raised questions about the fast-moving proposal process, pointing to uncertainty around admissions for the coming school year.

A letter to the Panel for Educational Policy said the proposal seemingly came out of nowhere and that families were not provided adequate engagement opportunities before its release. Panel Chair Greg Faulkner said he has received hundreds of similar letters from parents since the community learned of the incoming proposal in late February.

High school offers were released March 5, ahead of the panel's vote and months before the proposed school would open. It remains unclear how the Education Department would handle screening requirements – such as interviews or assessments – after the main admissions cycle has concluded. The Office of District Planning did not respond to questions about how enrollment would work for this fall.

A petition in support of the school, created by Next Generation's founding principal and program director on March 8, had under 100 signatures at the time of publishing.

A public hearing is scheduled for April 14, two weeks before the panel鈥檚 vote.

"I would love more transparency around why the department chooses certain schools to go in certain places," said Sarah Calderon, a parent at Lower Manhattan Community School. "When we asked the superintendent, 'Why this school and why here?' he said he had no data on district demand."

Beidleman told parents at the Feb. 25 District 2 meeting that expanding Lower Manhattan Community "was not an idea that was on the table."

The Education Department receives many proposals each year, including some from outside New York City, said Sean Rux of the Office of New School Development.

"This was the proposal that spoke to us," Rux said.

Families push to expand Lower Manhattan Community School

The plan to close the underenrolled Business for Young Women school has been percolating for a few years – with just 91 students this year, it's the smallest district high school in the city, Education Department officials said.

Families at Lower Manhattan Community School say they have pushed for years to expand into a 6-12 model, and would like to move into the space used by Business for Young Women, if it closes.

"A proposal to expand LMC could potentially open up sixth grade admissions to applicants citywide, but we have not been given the opportunity to even submit a proposal," said Anne Hager, a parent of a sixth grader at Lower Manhattan Community School.

At a PTA meeting with Education Department staff on Wednesday, LMC's Student Leadership Team presented its case to expand the school instead of opening Next Generation.

A new 6-12 school would eliminate the need for LMC students to go through a second, onerous application process, something that would especially benefit students with disabilities, they said. The presentation also cited Department of Education data from 2024 showing that 6-12 schools have nearly three times the demand of their 6-8 middle school counterparts.


The department's proposal focuses largely on space at the Broadway campus, estimating that Next Generation would serve roughly 450 students by its fourth year. All three schools can comfortably co-locate, according to the proposal, though its capacity calculations do not allow for significant expansion of either Richard R. Green High School or LMC.

Debate over AI timing and oversight

Next Generation's proposal arrives amid ongoing debate over artificial intelligence in schools.

The school initially marketed itself in information sessions and on social media as an "AI school," though DOE officials later clarified that students would learn about artificial intelligence rather than be taught by it.

"Students need to be creators, not consumers, of technology," Beidleman said at the Feb. 25 meeting. "Lessons learned from the past show us that new tech in place creates an opportunity."

Some parents have argued that broad use of an AI platform in public schools should not be allowed before comprehensive guidelines have been released by the city.

Greg Faulkner, who chairs the Panel for Educational Policy, said he first learned of the proposal last month. Since then, the panel has received hundreds of letters from parents opposing the plan and raising concerns about the lack of community engagement so far.

"I have two major hesitations with this: We don't know what kind of AI involvement there will be. The development team has not provided a playbook for how that will look," Faulkner said. "And in reading the response letters from District 2 parents, I see that proper engagement and process was not done."

At a District 2 town hall on March 5, Chancellor Kamar Samuels said the Education Department expects to release AI guidance in the coming weeks and will provide a 45-day window for community feedback once it's published.

Five Community Education Councils have passed resolutions calling for a two-year moratorium on artificial intelligence use in schools. But calls for broad AI guidelines implemented at the city level are nothing new; the city's rollout of an AI-powered reading program drew scrutiny in 2024 after former Comptroller Brad Lander called for a citywide playbook.

"I think the question of teacher capacity and teacher shortages, the research on kids and AI, is still nascent, and the DOE's lack of its own AI policy leads me to question the timing of any AI school," said Calderon, the parent at Lower Manhattan Community.

Chalkbeat is a nonprofit news site covering educational change in public schools. This story was originally published by Chalkbeat. Sign up for their newsletters.

Exclusive: New Google Partnership a 'Sizable Investment' in AI for Teachers
Mon, 23 Feb 2026

A top professional organization for teachers has inked a three-year deal with Google to offer AI training to "all six million K-12 teachers and higher education faculty" in the U.S., an audacious undertaking by the tech giant that could reach millions of students and dwarf previous tech forays into education.

"While Google's been offering educational products for 20 years, this is a different moment for us," said Chris Phillips, Google's vice president and general manager of education.




He called the effort the largest for Google in two decades of working with teachers and students. Phillips didn't immediately offer a price tag, but said it's "a sizable investment."


The training, offered through the ed tech-focused group ISTE+ASCD, will include hands-on experience with Google's Gemini and NotebookLM tools, offering certificates and digital badges.

"We have just heard so much feedback from teachers that are just saying, 'We are not prepared,'" said Richard Culatta, ISTE+ASCD's CEO. "'We don't have the training, we don't have the background that we need for the realities of teaching in an AI world, both teaching in the classroom and also, secondarily, but equally as important, preparing students for the world that they're going to be in.'"

It's the latest in a series of large-scale teacher training initiatives over the past few months. In July, the American Federation of Teachers, the nation's second-largest teachers union, announced its own $23 million training academy, partnering with Microsoft, OpenAI, and Anthropic to train up to 400,000 educators.

At the time, AFT President Randi Weingarten said the academy was a way to ensure that teachers, not technology, remain in control of the classroom.

But AFT’s partnership with OpenAI and Anthropic drew sharp criticism from educators and researchers, who questioned whether tech companies with products to sell and market share to protect are the right architects for teacher training. Education technology critic Audrey Watters called AFT’s academy 鈥渁 gigantic public experiment that no one has asked for,鈥 while ed tech analyst Alex Sarlin said tech companies were in a 鈥渓and-grab moment.鈥 

Microsoft has also launched its own community-based platform, Microsoft Elevate for Educators, offering free courses, live training sessions and credentials. 

Google itself in 2024 committed $25 million through its philanthropic arm to several nonprofits, including ISTE+ASCD, 4-H, and aiEDU, with particular attention to reaching underserved communities. Its goal at the time was to reach more than half a million K-12 and college students, as well as educators.

ISTE+ASCD – the group is a combination of two organizations that merged in 2023 – was the beneficiary of $10 million of the $25 million, saying it would collaborate with several other groups, including the National Education Association and the Computer Science Teachers Association.

Though Google has its own AI platform, Culatta insisted that the work won't be about pushing specific tools, saying that kids need enduring AI skills as the tools change.


In 2023, ISTE+ASCD introduced its own AI chatbot built on educator-focused content and trained solely on materials developed or approved by the organization. The chatbot tapped into curated databases in a bid to give teachers routine access to high-quality research.

In some ways, efforts like those of AFT and others reflect a lack of leadership at the federal level. The Trump administration, through an executive order, has backed efforts to expand AI in schools, but last spring it also eliminated the Office of Educational Technology, which had long focused on expanding access to technology.

Culatta, who ran the office under President Obama, said it's important that organizations like ISTE+ASCD "step up when there are key needs that may not be filled at the federal level. And we just want to make sure that, regardless of where we would like some things to happen, at this point we just have to do all-hands-on-deck and make sure we're supporting kids and teachers."

'Massive undertaking' or waste of time?

The sheer scale of Monday's announcement underscores how urgently educators see the need to learn about AI: RAND Corp. last spring found that the share of school districts training teachers on AI doubled from 2023 to 2024, from 23% to 48%. Researchers predicted that as many as three-fourths of districts would be in the AI training business by the end of 2025.

Robin Lake, director of the Center on Reinventing Public Education at Arizona State University, said the new partnership is "a massive undertaking that is urgently needed right now. I hope it includes a research component so we can learn from it because much more is needed."

Google's Phillips said the company has "multiple arms of research happening all around the world" and "will start to produce some of those and share them publicly where we're doing studies" in classrooms.

鈥淲e’ll see how the results land, but ultimately we want to improve learning outcomes,鈥 he said. 鈥淲e want to help change. We want to bend the curves on proficiency.鈥


Lake, who has long urged schools to take AI readiness seriously, said school principals, district leaders and teachers-in-training "also need to be AI literate, as do students and families. We can't rely only on private companies with an interest in AI products to fund and lead AI readiness."

Others were more sharply critical of the new partnership.

Justin Reich, an associate professor of digital media at MIT and host of the podcast TeachLab, said industry-sponsored professional development is, at its core, a "customer acquisition" campaign. Since ISTE+ASCD is historically both a membership-driven teacher organization and an industry trade association, he asked, "How can it be an honest broker to those two constituencies, while also launching an enormous initiative that privileges the products of one particular vendor?"

Google’s past educator certification programs, he said, 鈥渇ocused more on tool use and adoption than on learning,鈥 with no substantive evidence that improved student outcomes followed.

Phillips said Google's research is ongoing, but noted that its app allows students to self-pace lessons. "Where they struggle, they can dive deeper and learn more and get more up-to-date," he said. Among several unpublished findings, Phillips said, is one showing that students spend more time on topics they're struggling with and end up learning those topics more deeply.

Culatta admitted that Google would of course like to see its products in the hands of teachers. But he said he and his colleagues "want to make sure that if there are products going to schools – and they already are – that they're being used in ways that are really impactful."

He added, "If it was going to just be, 'Here's how to use Gemini,' Google actually doesn't need us. We are coming in because Google is looking for somebody who can say, 'What are really the best practices for learning with AI, not necessarily learning about AI?'"

Google's Phillips said teachers and students "can choose other products in the market and so forth, but this program does come with using our products so that we can help teachers really get started, get going."

He noted a "super-generous free tier" to make the tools, and the training to use them, widely accessible. "But schools, districts, teachers themselves have choice, and I think that's perfectly fine, but we want to play a role with not just providing tools, giving people access, but actually helping them apply it and use it" to jumpstart "safe, appropriate use of AI."


MIT's Reich said his deeper concern is what he called the near-total absence of evidence underlying AI professional development, whether to teach educators how to use AI in their classrooms or simply to teach them how AI and large language models work.

"Literally no one on the planet understands how [AI] works," he said. "The best computer scientists in the world cannot explain why LLMs generate plausible-sounding text in a convincing theoretical framework."

Reich recounted asking engineers at a Google DeepMind event in November whether they knew how to train junior engineers to use AI tools effectively in their work. "Every single person I talked to said, 'No,'" he said. "If Google doesn't know how to effectively use AI to write code, what is this business about teaching people AI literacy? We just don't know."

Benjamin Riley, a well-known AI skeptic and think tank founder, was more blunt, casting the Google partnership as part of an ongoing process making ISTE+ASCD a "shill" for Big Tech.

"I admit I'm fascinated to see the major Big Tech companies competing so vigorously to control 'the education market,'" Riley said. "OpenAI is giving away their premium model to teachers (until they won't), and now Google is doing whatever this is."


In the past, Riley has questioned whether teaching skills such as "AI literacy" and "AI readiness" is effective, even as many others warn that they'll be essential.

"I guess I'd credit their clairvoyance a tad more if ISTE+ASCD had not claimed, as recently as just a few years ago, that 'the future' would also demand that everyone . Oops!"

Riley, who also founded a cognitive science advocacy and research group, predicted that much of the training will end up wasting teachers' time, Google's money and ISTE+ASCD's relevance.

"Human beings have evolved to learn from each other in the context of our relationships. This is the superpower of our species, and the kids who've grown up in the past 20 years are increasingly disgusted by what tech has done to them personally, and society more broadly. They are not happy about the world we've given them, and their voices are growing ever louder."

Culatta, for his part, said AI "is not going away. Does learning happen with people connected with each other? Sure. It's not the only way learning happens, but it's a very important way. And we actually think AI can help make those human-to-human learning experiences much better."

AI Trailblazer Google Doesn't Want Schools to 'Bypass the Human'
Mon, 02 Feb 2026

In 1999, the Indian computer scientist and educational theorist Sugata Mitra created a small, if audacious, learning experiment: He and colleagues at the National Institute of Information Technology cut a hole in a street-level wall of their New Delhi office building and mounted an Internet-connected personal computer in it, usable by anyone who passed by. No instructions, no suggestions, no lesson plans. Just access.

Within hours, Mitra would later write, children from a nearby slum appeared "and glued themselves to the computer." They learned how to use the mouse, download games and music, play videos and surf the Web, all by teaching themselves.

The experiment in what Mitra called "minimally invasive education" was widely publicized. It became a touchstone in the ed tech world, evidence that children simply need access to tools to be successful.

Dr Sugata Mitra in front of his ‘hole in the wall’ experiment.

But don't mention Mitra too enthusiastically to Ben Gomes, the computer scientist who co-leads Google's education efforts. While the "hole in the wall" experiment is a hopeful, charming story, he'd say, it's missing a key element: teachers.


"We are paying attention to pedagogy, and we're working with the teachers," he said. "We're not saying we just want a thousand flowers to bloom randomly."

As AI becomes more ubiquitous in schools, Gomes maintains that Google has a duty to train teachers not just in how to use its products, but also in how to move students from taking shortcuts to using AI for deeper, often independent learning.

That strategy could dull longstanding complaints that ed tech more broadly is focused on replacing teachers with tech tools that don't work.

鈥淚t’s a belief backed by science, to a large extent, that people are fundamental in the learning process,鈥 Gomes said, 鈥渢hat people learn from other people, and people learn because of other people.鈥

Children certainly can and do learn independently, but deep conceptual understanding and literacy require guidance – especially now, nearly three decades after Mitra's hole in the wall, with many developers looking for ways to replace teachers with AI.

"Teachers are critical in this process," Gomes said. "We don't want to bypass the human."

AI as 'thought partner'

In a recent paper, Gomes and a handful of colleagues explored how AI could reverse declining global learning, largely through supporting teachers and turbocharging personalization. In mid-January, Google said it was expanding its efforts on AI in the classroom, offering its AI-driven Gemini app to more educators and students for free, making additional tools available and partnering with Khan Academy to power a writing coach tool.

The search giant has put a former NASA trainer in charge of much of the effort. Julia Wilkowski, a neuroscientist, has also taught sixth-grade math and science. She began her career at an outdoor environmental school, where she recalled hiking trips in which she'd ask students to figure out the velocity of a stream using only an orange, a length of string and a stopwatch.

Wilkowski now spends "pretty much 100% of my time" focused on ensuring that Google's AI for students rests on sound learning science.

In interviews over the past few weeks, Gomes and Wilkowski spoke openly about their work, in several instances admitting that much of it amounts to helping teachers find ways to get students to stop outsourcing their thinking.

"Teachers have the opportunity to teach their students how to use these tools ethically and effectively that don't bypass those critical thinking skills," said Wilkowski.

As an example, she said, she has worked with English teachers to help them instruct students on how to use AI as "a thought partner" in essay writing, not as the writer itself.

These teachers, she said, have succeeded by breaking down essay writing into its component parts and openly discussing its goals. They use AI to help students brainstorm essay topics, refine thesis statements, generate first drafts and offer feedback on them, giving students "guidance and guardrails" without allowing them to turn in AI-written essays.

The work, stretching back a year and a half, "has really informed my optimism about how AI can be used successfully," she said.

Guided learning

Both Wilkowski and Gomes spoke often of "guided learning," saying students learn best when they move beyond simple answers to develop their own ideas and think critically. To get them to do so, teachers must guide them with carefully designed questions.


Perhaps unsurprisingly, Google has a tool for that: a section of Gemini that acts much as a private tutor or guide, offering students a taste of "productive struggle" that engages but also challenges them without offering answers (at least not immediately). Rather, it steers them to the answer through a series of questions.

Gomes said the principle is working its way into most of Google's AI products, including a newer one that uses the technology to help students learn topics in interactive, more appealing ways most textbooks can't: as a text with quizzes, a narrated slideshow, an audio lesson and a "mind map" that lays out related ideas in connected graphics.

At its root, Gomes said, the dilemma over AI and cheating stems from motivation. "If I look back at my own childhood, there are certainly cases where I was just interested in getting something done for tomorrow," he said. "And there are other cases where I was curious and I wanted to read more."

The ratio between how much time students spend in one state vs. the other varies, he said, "but getting more people into the state where they are motivated, I think, is the goal."

But Amanda Bickerstaff, co-founder and CEO of AI for Education, a training and policy organization, said the reasons students turn to AI are "far more complicated than lack of motivation."

Students are dealing with "perfectionism, high-stakes assessments that prioritize grades, skill and language gaps," among other dilemmas. "Framing this primarily as a motivation issue oversimplifies what's actually happening in classrooms."

She said Google's shift toward Socratic reasoning "sounds promising, but there's a fundamental problem: There's no published research showing that GenAI chatbots have the pedagogical content knowledge to be effective Socratic tutors."

The chatbots are "sycophantic by nature," Bickerstaff said, offering answers and completing tasks even when not explicitly asked to. "That's the opposite of productive struggle."

And most young people, she said, don't have sufficient AI literacy to use these tools strategically. "Without that foundation, chatbots become [a shortcut] for schoolwork rather than a learning tool. You can't solve that problem through interface design alone."

More, better feedback

For her part, Wilkowski said much of the struggle over AI comes down to feedback: How much should students get, how often, and what should it look like?

Wilkowski said her daughter is in high school and was required to write an essay for a final exam in December. When Wilkowski spoke to 社区黑料 in early January, she said the essay still hadn't been graded.

"I would rather have AI-generated feedback," she said. "Give the first draft, and then the teacher [can] review it, of course, before giving it to the students."


More broadly, she said, AI could soon change how students are assessed altogether, helping teachers move away from tools such as multiple-choice tests, whose problems are well-known in the testing world: They're easy to create, administer and grade, and they're reliable. But they also allow students to guess rather than show understanding, and they encourage rote memorization rather than deeper engagement with material.

Multiple-choice tests also can't evaluate higher-order thinking skills, creativity, student writing or the ability to construct arguments. If AI can make essays, long-form questions or even projects easier to grade, wouldn't that put the multiple-choice test out of business?

鈥淟et’s say you’re in physics class and you’re studying acceleration-versus-time graphs and you ride your bike home,鈥 Wilkowski said. 鈥淎n AI tool might pop up and say, 鈥楬ey, here’s your acceleration-versus-time graph of your bike ride home. What did you notice about your velocity? How did it change as you changed acceleration? Was there a hill that you had to overcome?鈥欌 

More relevant assignments and assessments, she said, could get students to think more critically, incorporating school into their real life in deeper ways. "It goes back to the heart of what excited me as a teacher: those excited, hands-on lessons. I'm seeing a way that … AI can facilitate those in the future."

AI for Education's Bickerstaff said it's encouraging to see Google working to create more "fit-for-purpose tools" for student use.

"The education sector desperately needs companies to move beyond general-purpose chatbots and build tools that actually support cognitive work rather than replace it," she said. "But there's still a lot of work to do – and a lot of research that needs to happen – before we can know if these tools are effective learning guides."

AI Tutors, With a Little Human Help, Offer 'Reliable' Instruction, Study Finds
Wed, 03 Dec 2025

An AI-powered tutor, paired with a human helper and individual-level data on a student's proficiency, can outperform a human alone, with near-flawless results, a new study suggests.

The results could open a new front in the evolving discussion over how to use AI in schools – and how closely humans must watch it when it's interacting with kids.




In a study involving 165 British secondary school students, ages 13-15, the ed-tech startup Eedi put a small group of expert human tutors in charge of a large language model, or LLM, from Google. As it tutored students on math problems via Eedi's platform, the model drafted replies when students needed help. Before the messages went out, the human tutors got a chance to revise each one to the point where they'd feel comfortable sending it themselves.

Students didn't know whether they were talking to a human or a chatbot, but they had longer conversations, on average, with the "supervised" AI/human combination than with a human tutor alone, said Bibi Groot, Eedi's chief impact officer.

In the end, students using the supervised AI tutor performed slightly better than those who chatted online via text with human tutors – they were able to solve new kinds of problems on subsequent topics successfully 66.2% of the time, compared to 60.7% with human tutors.

The AI, researchers concluded, was "a reliable source" of instruction. Human tutors approved about three out of four drafted messages with few to no edits.

Students who got both human and AI tutoring were able to correct misconceptions and offer correct answers over 90% of the time, compared to just 65% of the time when they got a "static, pre-written" response to their questions.

And the AI only "hallucinated," or offered factual errors, about 0.1% of the time – in 3,617 messages, that amounted to just five hallucinations. It didn't produce any messages that gave the tutors pause over safety.

The results suggest that "pedagogically fine-tuned" AI could play a role in delivering effective, individualized tutoring at scale, researchers said.

The key to the AI's success, said Groot, was that researchers gave it access to detailed, "extremely personalized" information about what topics students had covered over the previous 20 weeks. That included the topics they'd struggled with and those they'd mastered.

"We know what topics they're covering in the next 20 weeks – we know the curriculum. We know the other students in the classroom. We know whether they're putting effort into their questions. We know whether they're watching videos or not – we know so much about the student without passing any personally identifiable information to the AI."

Bibi Groot

That guided the AI's strategy about whether students needed an extra push or just more support – something an "out-of-the-box, vanilla LLM" can't do, she said.
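The kind of de-identified context Groot describes – topic history and effort signals, but no name, email or other personally identifiable information – might be assembled into prompt text along these lines. The field names and helper below are illustrative assumptions, not Eedi's actual schema:

```python
def build_context(record):
    # Flatten a de-identified learning history into prompt text for the LLM.
    return (
        f"Student {record['pseudonym']}.\n"
        f"Mastered: {', '.join(record['mastered'])}.\n"
        f"Struggling with: {', '.join(record['struggling'])}.\n"
        f"Upcoming curriculum: {', '.join(record['upcoming'])}.\n"
        f"Watches hint videos: {record['watches_videos']}."
    )

record = {
    "pseudonym": "S-1042",               # stable opaque ID; no real name or email
    "mastered": ["fractions", "ratio"],
    "struggling": ["negative numbers"],
    "upcoming": ["linear equations"],
    "watches_videos": True,
}

prompt_context = build_context(record)
```

Only the pseudonym and the learning signals reach the model, which is how a system like this could stay personalized without passing PII to the AI.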

鈥淭hey don’t know anything about what the teacher is teaching in the classroom,鈥 Groot said. 鈥淭hey don’t know what misconceptions or what topics the students are struggling with and what they’ve already mastered, so they’re not able to dynamically change how they address the topic, as a human tutor would.鈥

Human tutors, she said, generally have "a really good sense of where the student struggles, because they have some sort of ongoing relation with a student most of the time. An LLM tutor generally doesn't."

All the same, even master tutors typically don't go into a session knowing a student's comprehensive history in a course, including their misconceptions about the material. "All of that is too much information for a human tutor to read up on and deal with while they're having one conversation" with a student, Groot said.

And they're under pressure to respond quickly "so that the student is not left waiting. And that's quite an intensive experience for tutors that leads to a bit of cognitive overload," she said. "The AI doesn't suffer from that. It needs less than a millisecond to read all of those contexts and come up with that first question."

Even with their personal connection to students, human tutors can't be available 24/7. Groot said Eedi employs about 25 tutors across several time zones who are available to students from 9 a.m. to 10 p.m. every day, but to give students broader access would require hiring "an army of tutors," she said.

The new findings could encourage schools to use AI as a kind of "front line" tutor, with humans intervening when a student is "derailing the conversation, or they have such a persistent misconception that the AI can't deal with it," said Groot. "We think that would be an interesting way to collaborate between the AI and the human, because there is still a really important role for a human tutor. But our human tutors just cannot have conversations with thousands of students at once."

The new study, published last week on Eedi's site and scheduled to appear in a peer-reviewed journal next year, differed in one important way from recent studies that looked at AI tutoring. In October 2024, researchers examined AI-assisted human tutoring, in which tutors primarily drove the conversation. But in that case, the AI acted as a kind of assistant, providing suggestions behind the scenes. In the Eedi study, it was the other way around, with the AI driving the conversation and humans overseeing it.

Robin Lake, director of the Center on Reinventing Public Education at Arizona State University, said the study is important in and of itself, but also in the context of broader findings elsewhere suggesting that, with proper training and guidance, "AI can be an incredibly powerful tool – and certainly has a potential to take tutoring to scale in ways that we've never seen before."

Under controlled circumstances, she said, it's also "outperforming humans – that's really important."

AI can be an incredibly powerful tool – and certainly has a potential to take tutoring to scale in ways that we've never seen before.

Robin Lake, Center on Reinventing Public Education

Lake noted a study from Harvard researchers that examined results from 194 undergraduates in a large physics class. They presented identical material in class and via an AI tutor and found that students learned "significantly more in less time" using the tutor. They also felt more engaged and motivated about the material.

Liz Cohen, vice president of policy for 50CAN and author of a recent book, said the study provides "valuable evidence" about new kinds of tutoring.

But one of its limitations, she said, is that it relied on 13-to-15-year-olds. "So immediately I have a lot of questions about if the findings are applicable for younger students, especially using a chat-based model," which may not be a good one for such students.

I still mostly think that entirely AI tutoring programs are biased towards students who want to do the work or are interested in learning.

Liz Cohen, 50CAN

She also noted that there are many questions around student persistence with AI tutors, including what happens when students get frustrated or aren't sufficiently engaged in the work.

"I still mostly think that entirely AI tutoring programs are biased towards students who want to do the work or are interested in learning," Cohen said, "and it's pretty easy to see that students who aren't bought in or are frustrated are going to give up more readily with an AI tutor."

She noted that her 12-year-old daughter has experienced problems persisting in an AI-powered math tutoring program. "She gets frustrated if she can't get the answer and then she doesn't want to do it anymore, so I think we need to figure out that piece of it."

Will New AI Academy Help Teachers or Just Improve Tech's Bottom Line? /article/will-new-ai-academy-help-teachers-or-just-improve-techs-bottom-line/ Mon, 04 Aug 2025 10:30:00 +0000 /?post_type=article&p=1018966 Washington, D.C.

Mariely Sanchez spent the last school year using generative artificial intelligence nearly every day in her classroom.

The Miami fourth-grade teacher began each morning by asking a chatbot – teachers in Miami-Dade have access not only to ChatGPT, but to Google's Gemini and Microsoft's Copilot – to comb through Florida state standards and create reading passages for students. She'd also ask the AI to produce multiple-choice and short-response quizzes to test how well students understood the reading.


The assignments, she said, weren't easy for students. She built them by using "difficult standards that students need more practice with" and prompting the AI to create materials.

Sanchez is spending her summer break learning more about AI, including its ethics, and helping colleagues do the same, warning:

We know it's not going to go away – it's here to stay, but we want to make sure we use it the right way.

Mariely Sanchez, fourth-grade teacher

That effort got a big boost last month, when the American Federation of Teachers announced that it would open an AI training center for educators in New York City, with $23 million in funding from OpenAI, Anthropic and Microsoft, three of the leading players in the generative AI marketplace.

AFT says it'll open the National Academy for AI Instruction in Manhattan this fall, offering hands-on workshops for teachers. Over five years, it said, the academy will train 400,000 educators, or one in 10 U.S. teachers, effectively reaching the more than 7.2 million students they teach.

When she announced the academy in early July, AFT President Randi Weingarten said teachers face "huge challenges," including navigating AI wisely, ethically and safely. "The question was whether we would be chasing it – or whether we would be trying to harness it."

'It's the Wild West'

AFT, the nation's second-largest teachers' union, envisions the academy working much like those that train carpenters, electricians and construction workers, "where the companies, where the corporations actually come to the union to create the kind of standards that are needed" for success, Weingarten said.

Microsoft, for example, has said it plans to give more than $4 billion in cash and technology services to train millions of people to use AI, underwriting efforts at schools, community colleges, technical colleges and nonprofits. The tech giant already runs an AI training effort for members of the larger AFL-CIO labor union, of which AFT is a member. And it's creating a new training program to help 20 million people earn certificates in AI.

Rob Weil, AFT's director of research, policy and field programs, said the new academy will bring high-quality training to a profession that so far has seen uneven opportunity for it.

鈥淚t’s the Wild West,鈥 he said in an interview during a training session at the union鈥檚 annual conference in July. 鈥淚t’s all over the place. You have some school districts that are out front, and they’re doing a lot of pretty good work.鈥 But others are banning AI or simply ignoring it, he said, leaving teachers to fend for themselves at a time when students need them perhaps more than ever.

"We have to make our instruction better. We have to be better on engagement. We have a crisis of engagement in our schools, and these tools can help with that."

AFT's move has been met with equal parts cautious optimism and weary skepticism.

Writing in her newsletter, ed-tech critic and AI skeptic Audrey Watters called AFT's partnership with the tech companies "a gigantic public experiment that no one has asked for."

Unions, she wrote, "should be one of the ways in which workers resist, rather than acquiesce to, the tech industry's vision of the future." By joining forces with big tech, she said, AFT is implicitly endorsing its products. "Teaching teachers how to use a suite of Microsoft tools does not help students as much as it helps Microsoft. Teaching teachers how to use a suite of Microsoft tools is not so much an 'academy' as a storefront."

Benjamin Riley, who has also written skeptically about generative AI in education, said observers should "100% worry" that the new partnerships represent a play for market share.

鈥淚t’s very obvious from a product standpoint that they see education as one of, if not the primary, place to go with their product,鈥 said Riley. 鈥淎nd the fact that AFT is willing to say, ‘Cool, let’s get some of that money and we’ll build a training center to help teachers use it,’ I can see why OpenAI would jump all over that.鈥

But he questioned whether AI training is what AFT members really want. He suggested instead that the union should recommit to helping teachers more deeply understand how learning works. "They haven't been opposed to it," he said, noting that the union has long run a column on the science of learning in the magazine it mails to members. "But in reality it just hasn't been a priority. Improving pedagogy hasn't really been, to my eyes, a union priority for a long time."

Riley, who in 2024 founded a think tank to explore AI issues, said an organization like AFT should ideally be thinking about whether embracing AI will lead to better outcomes for children – or whether it could "potentially erode and devalue the work of human teaching" while opening up schools as customers for AI companies.

Representatives of OpenAI and Anthropic did not immediately respond to requests for comment, but in an email, Microsoft's Naria Santa Lucia said, "This isn't about Microsoft's technology; our focus is on making AI broadly accessible, so everyone has a fair shot at the future. If we collectively get this right, AI becomes a bridge to opportunity – not a barrier."

During the academy's unveiling, Chris Lehane, OpenAI's chief global affairs officer, said AI technology "is coming – it is going to drive productivity gains. Can we ensure that those productivity gains are democratized so as many people as possible participate in them? And there is no better place to begin that work than in the classroom."

OpenAI has noted that many of its users are students. In February, it said that a large share of college-aged young adults in the U.S. use ChatGPT, with one in four of their queries related to learning and school work.

While a few observers said the tech giants are making a play for market share among the nation’s K-12 students, they noted that the companies are also filling an important role. 

鈥淚t’s welcome news that technology companies are bidding against each other 鈥 to outdo each other 鈥 to invest in public education,鈥 said Zarek Drozda, executive director of , a coalition of groups advancing data science education. 鈥淚 think that’s exciting at a time when federal investment in education is uncertain. Seeing industry step up is quite meaningful.鈥

But he said he's concerned that the training might stop short after teaching teachers – and by extension students – simply how to use AI. "Training needs to go beyond use," he said. "If we want to train a generation of students to be AI-ready, internationally competitive, they have to understand how these tools work under the hood, when and why the tool might be wrong, and how they can customize LLMs [large language models] or other models for their own pursuits, versus simply taking what's given."

He's also concerned that the AFT has laid out a vision spanning just five years. "We want there to be a deep investment in upskilling teachers for the skills that they will need to adapt to, not just AI, but what is the AI model five years from now?" he said. "What is the next emerging technology that the field should be ready to adapt to?"

More than just a commitment to training, Drozda said, the union and its partners should commit to a long-term sustainability plan for teacher training to attract new, young career professionals to the field.

Ami Turner Del Aguila (left, standing) coaches Melina Espiritu-Azocar (center) and Monique Boone during a recent AI training sponsored by the American Federation of Teachers. Both former teachers, Espiritu-Azocar and Boone now lead local AFT chapters in Texas. (Greg Toppo)

Alex Kotran, founder and CEO of an AI education nonprofit, agreed that investing in teacher training is worthwhile. "That's a very big rock that needs to be moved." But the reported $23 million commitment from the three tech giants "is a bit of a drop in the bucket" considering their valuations, "symbolic at best."

That said, AFT's involvement could make the training more palatable for many school district leaders, he noted, since one of the uncertainties in training efforts typically is whether unions will allow members to attend under contract rules. By taking the lead in developing the training academy, "the unions have planted a flag and said, 'PD [professional development] is important.'"

All the same, tech companies are in the business of selling their products, making them imperfect messengers for AI literacy, he said. "They're deeply incentivized on one side, and it isn't necessarily for the benefit of students."

Other industry watchers fear the partnership could be viewed as a high-profile bid for market share at a critical time in the AI industry's history.

"This is a land-grab moment," said Alex Sarlin, co-host of an ed-tech podcast. "I mean, this technology is only three years old. There are already three or four major players in it, if you don't count China, and they all want to be the one left standing."

For its part, Google has said its suite of Gemini educational AI tools would be available for free to all educators with Google Workspace for Education accounts.

While it was the only major player not included in the AFT announcement, Sarlin said Google is, in some ways, "playing the incumbent in this because in K-12, they're already there." Given the dominance of Chromebook laptops, the Google Classroom management tool and its other programs, the search giant is "embedded in K-12," he said. "OpenAI and Anthropic, they're basically consumer products that are being used by teachers."

'Oh yeah, what could go wrong?'

Matt Miller, an Indiana high school Spanish teacher, educational consultant and author of books for teachers, said his colleagues are hungry for high-quality, classroom-tested training, but that what they often get from AI companies is over-the-top talk about "how much the world is going to change and how we're revolutionizing education," with promises to help teachers work more efficiently.

Trainings typically skim over the fact that most students are simply using generative AI for "cognitive offloading," Miller said, avoiding critical thinking and skill development "and letting AI do it for them." Many teachers, meanwhile, are searching for ways to "AI-proof" their classrooms.

The sessions typically all end the same way, he said: "It all sort of funnels back to their product."

Miller, whose latest book was published in 2023, said the AFT/OpenAI/Anthropic partnership "scares the crap out of me."

"Whenever you get that marriage between an organization and big companies, we just keep asking ourselves, 'Oh, yeah, what could go wrong?'"

Money means influence, Miller said, so will the curriculum be "tool-agnostic? Is it going to be about the technology? Is it going to be about pedagogy? Or is it going to be a customized tutorial of how you can use our tool to do X, Y and Z?"

AFT's Weil said those concerns are understandable but short-sighted. AI developers, he said, "don't get to engage with us if you're not going to be agnostic about the tools." The academy's directors talk openly to the developers "about how we have to have a practical, real relationship. This can't be about product selling."

More broadly, the partnerships are a way for the union to exert influence over how AI operates in schools and classrooms.

The only way we have a profession is if we control the profession.

Rob Weil, AFT鈥檚 director of research, policy and field programs

During the academy's unveiling, Weingarten said its lessons will be "as open-source as possible," not just for the union's 1.8 million members but more broadly through its free platform.

For his part, Weil said AI is "not going to go away. Nobody's going to put AI back in the bottle. It's here. The young people, for them to be successful in their jobs in the future, are going to have to know how to effectively and efficiently and safely use these tools. So why wouldn't the education system help with that process?"

That's likely the message that union leaders have been getting from members, said Sarlin, the podcast co-host. "There was probably a moment a couple years ago where they were sort of teetering, where they could have gone anti-AI," he said. "But I think at this point that's not where the puck is headed."

University of Nebraska-Google Career Certificates Partnership Opens This Week /article/university-of-nebraska-google-career-certificates-partnership-opens-this-week/ Tue, 18 Jun 2024 16:30:00 +0000 /?post_type=article&p=728646 This article was originally published in the Nebraska Examiner.

Enrollment opens this week for the University of Nebraska's new partnership to offer Google Career Certificates in a variety of fields.

Beginning Wednesday, June 19, NU students, alumni and Nebraskans at large can begin to register for a variety of self-paced, noncredit courses. Interim NU President Chris Kabourek said that since announcing the partnership in April, with little marketing, more than 1,000 people had already pre-registered.

Melissa Lee, NU's chief communication officer, said 1,247 people had registered as of Friday. Of those registrants, 20% are current NU students and 40% are alumni, meaning hundreds of Nebraskans who might have no connections to NU are interested in more education.


"I just think it solidifies what we thought, that Nebraskans are yearning for more skill sets and more education," Kabourek told the Nebraska Examiner.

An email sent the week of June 10 to pre-registrants from Ana Lopez Shalla, lead for NU's microcredentials, encouraged them to complete the minicourses. She wrote that registrants were "helping to drive impact not only in your own career, but in our regional workforce, too."

The Google Career Certificates will be offered in three cycles in the next year, with 2,500 seats available for each session. They begin in August, December and April. Enrollment will be open through July 31; courses in the first session will begin the next day.

In April, NU announced a special first-year rate of $20 per enrollment.

Kabourek said at the time that the partnership is designed for opportunities, not revenue, and that funds would be used to cover costs and any associated technological needs.

The following certificates will be offered:

  • Cybersecurity
  • IT support
  • Data analytics
  • Digital marketing and e-commerce
  • Project management
  • User experience (UX) design
  • IT automation with Python
  • Advanced data analytics
  • Business intelligence

Kabourek, who will return to his sole role as NU's chief financial officer come July 1, said one of his priorities as interim president has been to help the university reconnect with Nebraskans, which will include getting out to visit high schools in the fall.

As a rural Nebraskan from David City, Kabourek said, he knows every Nebraskan can find a place within NU.

"We never want your ability to go get your education or develop your skill sets or enhance your resume to be limited by your family situation or your location," Kabourek said.

Nebraska Examiner is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Nebraska Examiner maintains editorial independence. Contact Editor Cate Folsom with questions: info@nebraskaexaminer.com.

NU, Google to Offer Career Certificates to Students, Alumni and All Nebraskans /article/nu-google-to-offer-career-certificates-to-students-alumni-and-all-nebraskans/ Thu, 18 Apr 2024 15:30:00 +0000 /?post_type=article&p=725540 This article was originally published in the Nebraska Examiner.

LINCOLN – The University of Nebraska and Google are entering a new partnership designed to further Nebraskans' education and support state workforce needs.

Interim NU President Chris Kabourek announced Tuesday that the university will soon offer Google Career Certificates in a variety of fields. Enrollment is open now on a first-come, first-served basis and will begin with the 2024-25 academic year. Three cycles will be offered – in August, December and April – with 2,500 seats available in each.

Kabourek said "it's a win" when more education is brought directly to Nebraskans and students.


"As a native of rural Nebraska myself, I believe strongly that every Nebraskan should have access to quality, affordable educational opportunities no matter where they live or what their personal circumstances are," Kabourek said in a statement.

The goal of the partnership is to provide opportunity, not make money, Kabourek added in a text. NU will retain all revenue raised through enrollment in the certificates, which will cover administrative costs and any associated technological needs.

Learn at their own pace

Google experts teach the programs, which are vetted by leading employers. NU students, alumni and Nebraska residents can get a special first-year rate of $20 per enrollment.

Students learn at their own pace over three to six months of part-time study in multiple courses:

  • Cybersecurity
  • IT support
  • Data analytics
  • Digital marketing and e-commerce
  • Project management
  • User experience (UX) design

Advanced certifications are also available, tailored for learners with multiple years of experience or as a next step after completing an entry-level certificate:

  • IT automation with Python
  • Advanced data analytics
  • Business intelligence

U.S. Rep. Mike Flood, R-Neb., endorsed the partnership as providing affordable access to education and as "yet another pathway for Nebraskans to pursue their dreams and expand their career horizons." He said he looks forward to seeing the positive impact it will have.

"Developing Nebraskans to take the jobs of the future is one of the cornerstones of growing Nebraska's economy," Flood said in a statement.

A 2023 report from the American Association of Colleges and Universities found that employers are generally in strong support of these "microcredentials." In the report, two-thirds said they would prefer college graduates with microcredentials for entry-level positions.

More than 250,000 people in the United States have earned a Google certificate, 75% of whom had a positive career impact, such as a new job, promotion or raise, according to Google.

"We're committed to investing in Nebraskans to ensure that they have the tech and other job-ready skills to enter the workforce and reach their full economic potential," said Lisa Gevelber, founder of Grow with Google.

More postsecondary credentials

Kabourek said the new partnership advances a 2022 legislative goal, which NU supported, to increase the percentage of Nebraskans with postsecondary credentials by 2030 to 70%.

State Sen. Lynne Walz of Fremont, who was then chair of the Legislature's Education Committee, shepherded the 2022 legislation through the Legislature.

Tim Jares, dean of the University of Nebraska at Kearney's College of Business and Technology, described the new partnership as "terrific" and said it adds to the work faculty are doing to help students and alumni "amplify their marketability."

"From our perspective, the more opportunities for education we provide, the better," Jares said. "I'm proud that the University of Nebraska is playing a leadership role in creating access for Nebraskans and growing a skilled workforce for our state."

Other leading U.S. institutions already offer career certificates, including Syracuse University, the University of Texas system and two fellow Big Ten members, the University of California-Los Angeles and Rutgers.


A Cautionary AI Tale: Why IBM's Dazzling Watson Supercomputer Made a Lousy Tutor /article/a-cautionary-ai-tale-why-ibms-dazzling-watson-supercomputer-made-a-lousy-tutor/ Tue, 09 Apr 2024 13:30:00 +0000 /?post_type=article&p=724698

With a new race underway to create the next teaching chatbot, IBM's abandoned 5-year, $100M ed push offers lessons about AI's promise and its limits.

In the annals of artificial intelligence, Feb. 16, 2011, was a watershed moment.

That day, IBM's Watson supercomputer finished off a three-game shellacking of Jeopardy! champions Ken Jennings and Brad Rutter. Trailing by over $30,000, Jennings, now the show's host, wrote out his Final Jeopardy answer in mock resignation: "I, for one, welcome our computer overlords."

A lark to some, the experience galvanized Satya Nitta, a longtime computer researcher at IBM's Watson Research Center in Yorktown Heights, New York. Tasked with figuring out how to apply the supercomputer's powers to education, he soon envisioned tackling ed tech's most sought-after challenge: the world's first tutoring system driven by artificial intelligence. It would offer truly personalized instruction to any child with a laptop – no human required.


"I felt that they're ready to do something very grand in the space," he said in an interview.

Nitta persuaded his bosses to throw more than $100 million at the effort, bringing together 130 technologists, including 30 to 40 Ph.D.s, across research labs on four continents. 

But by 2017, the tutoring moonshot was essentially dead, and Nitta had concluded that effective, long-term, one-on-one tutoring is "a terrible use of AI – and that remains today."

For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn't engage or motivate kids, inspire them to reach new heights or even keep them focused on the material – all qualities of the best mentors.

It's a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. "There are some things AI is actually very good for," Nitta said, "but it's not great as a replacement for humans."

His five-year journey to essentially a dead-end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.

Some of the leading lights of ed tech are trying to pick up where Watson left off, offering AI tools that promise to help teach students. Sal Khan, founder of Khan Academy, last year said AI has the potential to bring "probably the biggest positive transformation" that education has ever seen. He wants to give "every student on the planet an artificially intelligent but amazing personal tutor."

A 25-year journey

To be sure, research on high-dosage, one-on-one, in-person tutoring is robust: It's among the most effective interventions available, offering significant improvement in students' academic performance, particularly in subjects like math, reading and writing.

But traditional tutoring is also "breathtakingly expensive and hard to scale," said Paige Johnson, a vice president of education at Microsoft. One school district in West Texas, for example, recently spent millions of dollars in federal pandemic relief funds to tutor 6,000 students. The expense, Johnson said, puts it out of reach for most parents and school districts.

We missed something important. At the heart of education, at the heart of any learning, is engagement.

Satya Nitta, IBM Research's former global head of AI solutions for learning

For IBM, the opportunity to rebalance the equation in kids' favor was hard to resist.

The Watson lab is legendary in the computer science field, with Nobel laureates and six Turing Award winners among its ranks. It's home to countless innovations, such as barcodes and the magnetic stripes on credit cards. It's also where, in 1997, Deep Blue beat Garry Kasparov, essentially inventing the notion that AI could "think" like a person.

Chess enthusiasts watch World Chess champion Garry Kasparov on a television monitor as he holds his head in his hands at the start of the sixth and final match May 11, 1997 against IBM’s Deep Blue computer in New York. Kasparov lost this match in just 19 moves. (Stan Honda/Getty)

The heady atmosphere, Nitta recalled, inspired "a very deep responsibility to do something significant and not something trivial."

Within a few years of Watson's victory, Nitta, who had arrived in 2000 as a chip technologist, rose to become IBM Research's global head of AI solutions for learning. For the Watson project, he said, "I was just given a very open-ended responsibility: Take Watson and do something with it in education."

Nitta spent a year simply reading up on how learning works. He studied cognitive science, neuroscience and the decades-long history of "intelligent tutoring systems" in academia. Foremost on his reading list was the research of Stanford neuroscientist Vinod Menon, who'd put elementary schoolers through a 12-week math tutoring session, collecting before-and-after scans of their brains using an MRI. Tutoring, he found, produced nothing less than an increase in neural connectivity.

Nitta returned to his bosses with the idea of an AI-powered cognitive tutor. 鈥淭here’s something I can do here that’s very compelling,鈥 he recalled saying, 鈥渢hat can broadly transform learning itself. But it’s a 25-year journey. It’s not a two-, three-, four-year journey.鈥

IBM drafted two of the highest-profile partners possible in education: the children's media powerhouse Sesame Workshop and Pearson, the international publisher.

One product envisioned was a voice-activated Elmo doll that would serve as a kind of digital tutoring companion, interacting fully with children. Through brief conversations, it would assess their skills and provide spoken responses to help kids advance.

One proposed application of IBM's planned Watson tutoring app was to create a voice-activated Elmo doll that would be an interactive digital companion. (Getty)

Meanwhile, Pearson promised that it could soon allow college students to "dialogue with Watson in real time."

Nitta's team began designing lessons and putting them in front of students - both in classrooms and in the lab. To nurture a back-and-forth between student and machine, they didn't simply present kids with multiple-choice questions; instead, they asked them to write responses in their own words.

It didn鈥檛 go well.

Some students engaged with the chatbot, Nitta said. "Other students were just saying, 'IDK' [I don't know]. So they simply weren't responding." Even those who did began giving shorter and shorter answers.

Nitta and his team concluded that a cold reality lay at the heart of the problem: For all its power, Watson was not very engaging. Perhaps as a result, it also showed "little to no discernible impact" on learning. It wasn't just dull; it was ineffective.

Satya Nitta (left) and part of his team at IBM's Watson Research Center, which spent five years trying to create an AI-powered interactive tutor using the Watson supercomputer.

"Human conversation is very rich," he said. "In the back and forth between two people, I'm watching the evolution of your own worldview." The tutor influences the student - and vice versa. "There's this very shared understanding of the evolution of discourse that's very profound, actually. I just don't know how you can do that with a soulless bot. And I'm a guy who works in AI."

When students' usage time dropped, "we had to be very honest about that," Nitta said. "And so we basically started saying, 'OK, I don't think this is actually correct. I don't think this idea - that an intelligent tutoring system will tutor all kids, everywhere, all the time - is correct.'"

'We missed something important'

IBM soon switched gears, debuting another crowd-pleasing Watson variation - this time, a touching throwback: It engaged in live debate. In a televised demonstration in 2019, it went up against debate champ Harish Natarajan on the topic "Should we subsidize preschools?" Among its arguments for funding, the supercomputer offered, without a whiff of irony, that good preschools can prevent "future crime." Its current iteration focuses on helping businesses build AI applications like "intelligent customer care."

Nitta left IBM, eventually taking several colleagues with him to create a startup. It uses voice-activated AI to safely help teachers with workaday tasks such as updating digital gradebooks, opening PowerPoint presentations and emailing students and parents.

Thirteen years after Watson's stratospheric Jeopardy! victory and more than one year into the Age of ChatGPT, Nitta's expectations about AI couldn't be more down-to-earth: His AI powers what's basically "a carefully designed assistant" that fits into the flow of a teacher's day.

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents "a profound misunderstanding of what AI is actually capable of."

Nitta, who still holds deep respect for the Watson lab, admits, "We missed something important. At the heart of education, at the heart of any learning, is engagement. And that's kind of the Holy Grail."

These notions aren't news to those who do tutoring for a living. Varsity Tutors, which offers live and online tutoring in 500 school districts, relies on AI to power a lesson plan creator that helps personalize instruction. But when it comes to the actual tutoring, humans deliver it, said Anthony Salcito, chief institution officer at Nerdy, which operates Varsity Tutors.

鈥漈he AI isn’t far enough along yet to do things like facial recognition and understanding of student focus,鈥 said Salcito, who spent 15 years at Microsoft, most of them as vice president of worldwide education. 鈥淥ne of the things that we hear from teachers is that the students love their tutors. I’m not sure we’re at a point where students are going to love an AI agent.鈥

Students love their tutors. I'm not sure we're at a point where students are going to love an AI agent.

Anthony Salcito, Nerdy

The No. 1 factor in a student's tutoring success is showing up consistently, research suggests. As smart and efficient as an AI chatbot might be, it's an open question whether most students, especially struggling ones, would show up for an inanimate agent or develop a sense of respect for its time.

When Salcito thinks about what AI bots now do in education, he's not impressed. Most, he said, "aren't going far enough to really rethink how learning can take place." They end up simply as fast, spiffed-up search engines.

In most cases, he said, the power of one-on-one, in-person tutoring emerges as students begin to develop more honesty about their abilities, advocate for themselves and, in a word, demand more of school. "In the classroom, a student may say they understand a problem. But they come clean to the tutor, where they admit, 'Hey, I need help.'"

Cognitive science suggests that for students who aren't motivated or who are uncertain about a topic, only direct, personal engagement will help. That requires a focused, caring human, watching carefully, asking tons of questions and reading students' cues.

Jeremy Roschelle, a learning scientist and an executive director of Digital Promise, a federally funded research center, said usage of most ed tech products tends to drop off. "Kids get a little bored with it. It's not unique to tutors. There's a newness factor for students. They want the next new thing."

There's a newness factor for students. They want the next new thing.

Jeremy Roschelle, Digital Promise

Even now, Nitta points out, research shows that big commercial AI applications don't seem to hold users' attention as well as top entertainment and social media sites like YouTube, Instagram and TikTok. One analysis dubbed the user engagement of sites like ChatGPT "lackluster," finding that the proportion of monthly active users who engage with them on a given day was only about 14%, suggesting that such sites aren't very "sticky" for most users.

For social media sites, by contrast, it's between 60% and 65%.

One notable AI exception: Character.AI, an app that allows users to create companions of their own among figures from history and fiction and chat with the likes of Socrates and Bart Simpson. It has a stickiness score of 41%.
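The "stickiness" figure used in these comparisons is simply the ratio of daily to monthly active users (DAU/MAU): the share of a site's monthly users who show up on any given day. A minimal sketch, with illustrative numbers rather than figures from the report:

```python
def stickiness(daily_active: int, monthly_active: int) -> float:
    """DAU/MAU ratio: the fraction of monthly users active on a given day."""
    if monthly_active <= 0:
        raise ValueError("monthly_active must be positive")
    return daily_active / monthly_active

# Illustrative: 14 million daily users out of 100 million monthly -> 14%
print(f"{stickiness(14_000_000, 100_000_000):.0%}")
```

By this measure, a site where most monthly users return every day (social media's 60-65%) is far stickier than one most users touch only occasionally.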

As startups offer "your child's superhuman tutor," starting at $29 per month, and Khan Academy publicly tests its popular Khanmigo AI tool, Nitta maintains that there's little evidence from learning science that, absent a strong outside motivation, people will spend enough time with a chatbot to master a topic.

"We are a very deeply social species," said Nitta, "and we learn from each other."

IBM declined to comment on its work in AI and education, as did Sesame Workshop. A Pearson spokesman said that since last fall it has been beta-testing AI study tools keyed to its e-textbooks, among other efforts, with plans this spring to expand the number of titles covered.

Getting 'unstuck'

IBM's experiences notwithstanding, the search for an AI tutor has continued apace, this time with more players than just a legacy research lab in suburban New York. Using the latest affordances of so-called large language models, or LLMs, technologists at Khan Academy believe they are finally taking the first halting steps toward an effective AI tutor.

Kristen DiCerbo remembers the moment her mind began to change about AI. 

It was September 2022, and she'd been at Khan Academy for only a year and a half when she and founder Sal Khan got access to a beta version of ChatGPT. OpenAI, ChatGPT's creator, had asked Microsoft co-founder Bill Gates for more funding, but he told them not to come back until the chatbot could pass an Advanced Placement biology exam.

Khan Academy founder Sal Khan has said AI has the potential to bring "probably the biggest positive transformation" that education has ever seen. He wants to give every student an "artificially intelligent but amazing personal tutor." (Getty)

So OpenAI queried Khan for sample AP biology questions. He and DiCerbo said they'd help in exchange for a peek at the bot - and a chance to work with the startup. They were among the first people outside of OpenAI to get their hands on GPT-4, the LLM that powers the upgraded version of ChatGPT. They were able to test out the AI and, in the process, become amateur AI prompt engineers before anyone had even heard of the term.

Like many users typing in queries in those first heady days, the pair initially just marveled at the sophistication of the tool and its ability to return what felt, for all the world, like personalized answers. With DiCerbo working from her home in Phoenix and Khan from the nonprofit's Silicon Valley office, they traded messages via Slack.

Kristen DiCerbo introduces users to Khanmigo in a Khan Academy promotional video. (YouTube)

"We spent a couple of days just going back and forth, Sal and I, going, 'Oh my gosh, look what we did! Oh my gosh, look what it's saying - this is crazy!'" she told an audience during a recent talk at the University of Notre Dame.

She recounted asking the AI to help write a mystery story in which shoes go missing in an apartment complex. In the back of her mind, DiCerbo said, she planned to make a dog the shoe thief, but didn't reveal that to ChatGPT. "I started writing it, and it did the reveal," she recalled. "It knew that I was thinking it was going to be a dog that did this, from just the little clues I was planting along the way."

More tellingly, it seemed to do something Watson never could: have engaging conversations with students.

DiCerbo recounted talking to a high school student they were working with who told them about an interaction she'd had with ChatGPT around The Great Gatsby. She asked it about F. Scott Fitzgerald's famous green light, which scholars have long interpreted as symbolizing Jay Gatsby's out-of-reach hopes and dreams.

"It comes back to her and asks, 'Do you have hopes and dreams just out of reach?'" DiCerbo recalled. "It had this whole conversation" with the student.

The pair soon tore up their 2023 plans for Khan Academy. 

It was a stunning turn of events for DiCerbo, a Ph.D. educational psychologist and former senior Pearson research scientist who had spent more than a year on the failed Watson project. In 2016, Pearson announced that Watson would soon be able to chat with college students in real time to guide them in their studies. But it was DiCerbo's teammates, about 20 colleagues, who had to actually train the supercomputer on thousands of student-generated answers to questions from textbooks - and tempt instructors to rate those answers.

Like Nitta, DiCerbo recalled that at first things went well. They found a natural science textbook with a large user base and set Watson to work. "You would ask it a couple of questions and it would seem like it was doing what we wanted to," answering student questions via text.

But invariably, if a student's question strayed from what the computer expected, she said, "it wouldn't know how to answer that. It had no ability to freeform-answer questions, or it would do so in ways that didn't make any sense."

After more than a year of labor, she realized, "I had never seen the 'OK, this is going to work' version" of the hoped-for tutor. "I was always at the 'OK, I hope the next version's better.'"

But when she got a taste of ChatGPT, DiCerbo immediately saw that, even in beta form, the new bot was different. Using software that quickly predicted the most likely next word in any conversation, ChatGPT was able to engage with its human counterpart in what seemed like a personal way.

Since its debut in March 2023, Khanmigo has turned heads with what many users say is a helpful, easy-to-use, natural-language interface, though a few users have pointed out that it sometimes makes mistakes.

Surprisingly, DiCerbo doesn't consider the popular chatbot a full-time tutor. As sophisticated as AI might now be in motivating students to, for instance, try again when they make a mistake, "It's not a human," she said. "It's also not their friend."

(AI's) not a human. It's also not their friend.

Kristen DiCerbo, Khan Academy

Khan Academy's research shows their tool is effective with as little as 30 minutes of practice and feedback per week. But even as many startups promise the equivalent of a one-on-one human tutor, DiCerbo cautions that 30 minutes is not going to produce miracles. Khanmigo, she said, "is not a solution that's going to replace a human in your life. It's a tool in your toolbox that can help you get unstuck."

'A couple of million years of human evolution'

For his part, Nitta says that for all the progress in AI, he's not persuaded that we're any closer to a real, live tutor that would offer long-term help to most students. If anything, Khanmigo and probabilistic tools like it may prove to be effective "homework helpers." But that's where he draws the line.

"I have no problem calling it that, but don't call it a tutor," he said. "You're trying to endow it with human-like capabilities when there are none."

Unlike humans, who will typically do their best to respond genuinely to a question, the way AI bots work - digesting pre-existing texts and other information to come up with responses that seem human - is akin to a "statistical illusion," writes one Harvard Business School professor. "They've just been well-trained by humans to respond to humans."

Researcher Sidney Pressey's 1928 Testing Machine, one of a series of so-called "teaching machines" that he and others believed would advance education through automation.

Largely because of this, Nitta said, there's little evidence that a chatbot will continuously engage people as a good human tutor would.

What would change his mind? Several years of research by an independent third party showing that tools like Khanmigo actually make a difference on a large scale - something that doesn't exist yet.

DiCerbo also maintains her hard-won skepticism. She knows all about the halting early decades of teaching technology a century ago, when experimental, punch-card-operated "teaching machines" guided students through rudimentary multiple-choice lessons, often with simple rewards at the end.

In her talks, DiCerbo urges caution about AI revolutionizing education. As much as anyone, she is aware of the expensive failures that have come before. 

Two women stand beside open drawers of computer punch card filing cabinets. (American Stock/Getty Images)

In her recent talk at Notre Dame, she did her best to manage expectations of the new AI, which seems so limitless. In one-to-one teaching, she said, there's an element of humanity "that we have not been able to - and probably should not try to - replicate in artificial intelligence." In that respect, she's in agreement with Nitta: Human relationships are key to learning. In the talk, she noted that students who have a person in school who cares about their learning have higher graduation rates.

But still.

ChatGPT now has 100 million weekly users, according to OpenAI. That record-fast uptake makes her think "there's something interesting and sticky about this for people that we haven't seen in other places."

Being able to engineer prompts in plain English opens the door for more people, not just engineers, to create tools quickly and iterate on what works, she said. That democratization could mean the difference between another failed undertaking and agile tools that actually deliver at least a version of Watson's promise.

An early prototype of IBM's Watson supercomputer in Yorktown Heights, New York. In 2011, the system was the size of a master bedroom. (Wikimedia Commons)

Seven years after he left IBM to start his new endeavor, Nitta is philosophical about the effort. He takes virtually full responsibility for the failure of the Watson moonshot. In retrospect, even his 25-year timeline for success may have been naive.

鈥淲hat I didn’t appreciate is, I actually was stepping into a couple of million years of human evolution,鈥 he said. 鈥淭hat’s the thing I didn’t appreciate at the time, which I do in the fullness of time: Mistakes happen at various levels, but this was an important one.鈥

Exclusive: For Busy Teachers, AI Could Crack Open the Dense World of Ed Research

Wed, 06 Sep 2023

As students across the U.S. enter their first full school year with access to powerful AI tools like ChatGPT and Bard, many educators remain skeptical of their usefulness - and preoccupied with their potential to enable cheating.

But this fall, a few educators are quietly charting a different course they believe could change everything: At least two groups are pushing to create new AI chatbots that would offer teachers unlimited access to sometimes confusing and often paywalled peer-reviewed research on the topics that most bedevil them. 

Their aspiration is to offer new tools that are more focused and helpful than wide-ranging ones like ChatGPT, which tends to stumble over research questions with competing findings. And like many kids faced with questions they can’t answer, it has a frustrating tendency to make things up.




Tapping into curated research bases and filtering out lousy results would also make the bots more reliable: If all goes according to plan, they'd cite their sources.
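The approach both groups describe - answer only from a vetted corpus, cite every source, and admit when the corpus comes up empty - resembles what engineers call retrieval-augmented generation. A minimal sketch, with hypothetical journal entries and naive keyword scoring standing in for a real search index and language model:

```python
# Hypothetical vetted corpus; a real system would index peer-reviewed articles.
VETTED_CORPUS = [
    {"source": "Journal A (2021)",
     "text": "systematic phonics instruction improves early reading outcomes"},
    {"source": "Journal B (2019)",
     "text": "matching teaching to learning styles shows no measurable benefit"},
]

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    """Rank vetted passages by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc["text"].split())), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def answer(query: str) -> str:
    """Answer only from retrieved passages, citing each; otherwise decline."""
    hits = retrieve(query, VETTED_CORPUS)
    if not hits:
        return "Not enough vetted research to answer reliably."
    citations = "; ".join(doc["source"] for doc in hits)
    return f"{hits[0]['text'].capitalize()}. [{citations}]"
```

A production version would swap the keyword overlap for semantic search and hand the retrieved passages to an LLM for phrasing, but the citation and "I don't know" behavior live in exactly this control flow.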

The result, supporters say, could revolutionize education. If their work takes hold, millions of teachers for the first time could routinely access high-quality research and make it part of their everyday workflow. Such tools could also help stamp out adherence to stubborn but ill-supported fads in areas from "learning styles" to reading instruction.

So far, the two groups are each feeling their way around the vast undertaking, with slightly different approaches.

In June, the International Society for Technology in Education introduced a chatbot built on content vetted by ISTE and the Association for Supervision and Curriculum Development. (The two groups merged in 2022.) ISTE has made it available in beta to selected users. All of the chatbot's content is educator-focused, and it's trained solely on materials developed or approved by the two organizations.

Richard Culatta

Now its creators say that within about six months, they expect that the tool will also be able to scour outside, peer-reviewed education research and return "pretty understandable, pretty meaningful results" from vetted journals, said Richard Culatta, ISTE's CEO.

鈥淭here’s this big gap between what we know in the research and what happens in practice,鈥 he said. One reason: Most research is published in a format that 鈥渋s just totally inaccessible to teachers.鈥

Case in point: A set of studies by the Jefferson Education Exchange, a nonprofit supported by the University of Virginia's Curry School of Education, found that while educators prefer research they can act on - and that's presented in a way that applies to their work - only about 16% of teachers actually use research to inform instruction.

So he and others are building a digital tool, "purpose-built for educators by educators," that can translate research into practice, using "very practical language that teachers understand."

For instance, a teacher could ask the chatbot, "What does the research say about creating a healthy school culture?" or "What's the evidence for teaching phonics to developing readers?" One could also ask it to suggest activities that are appropriate for middle school students learning about digital citizenship.

Joseph South, ISTE's chief learning officer, said teachers want the latest research, but are up against formidable obstacles. "They have to find the article in the journal that happens to relate to the thing that they want to do," he said. "They have to somehow understand academic-speak. They have to have the time to read this, and they have to translate it into something useful."

While ChatGPT can comb through journals it has access to, translate and summarize the research, he said, it's not reliable. The typical chatbot - and thus the typical end user - doesn't know whether the results are from a credible, peer-reviewed journal or not, and it may not necessarily care.

Joseph South

"We do, though," he said. "So we can do that filtering and let the AI do its magic."

As with its beta version, the new chatbot will also cite the sources used to generate each response. And it'll let users know when it simply doesn't have enough information to return a reliable response.

Developers are still in the early stages of deciding what academic journals to include. For now, they're experimenting with a handful of key research articles, but will expand the chatbot's range if initial prototypes prove helpful to educators.

Culatta and South, both veterans of the U.S. Department of Education, have spent years working on the research-to-practice problem, offering, in effect, translation services for research findings. "We've spent so much work trying to figure out how to do it and it's just never really worked," he said. "It's just always been a struggle. And we actually think that this could be the first for-real, sustainable, scalable approach to taking research and getting it into language that actually could be used by teachers."

Daniel Willingham

Daniel Willingham, a professor of psychology at the University of Virginia and a well-known translator of education research, said his limited experience with ChatGPT has shown that when asked about a subject where there's general consensus, such as "What is the effect of sleep on memory?", it produces helpful results. But it isn't very good at synthesizing conflicting findings.

It's also inconsistent in its willingness to reveal, in Willingham's words, that "'I really don't know anything about that.' And so it, you know, just makes something up."

A paid ChatGPT subscriber, Willingham said he gets "really useful" results only about 20% of the time. "But it requires plenty of verification on my part. And this is all within my area of expertise, so it's not very hard for me to verify."

Tapping 'What Works'

ISTE isn't the only organization pushing to make education research more widely accessible via chatbot. The Learning Agency, a Washington, D.C.-based consulting firm, is also testing a prototype of a bot designed to offer answers to education research queries.

Unlike ISTE's, the agency's tool taps an already existing, if finite, resource: the U.S. Department of Education's What Works Clearinghouse, or more specifically its Doing What Works Library, a curated collection of materials developed by the department's Institute of Education Sciences.

"We were inspired to basically create a special version of ChatGPT that was exposed to more high-quality educational data and research evidence on what works," said Perpetual Baffour, the group's research director.

In a sense, she said, much of the work had already been done, since the library, though limited, exists to translate research findings into more digestible forms for educators. The result is a prototype that offers what Baffour calls "evidence-based education advice" on topics from adolescent literacy to dropout prevention and school turnarounds.

Perpetual Baffour

Baffour noted that the app development tool allowed the agency to create a "simple but robust" prototype within a day.

At the moment, the version they're testing is "very basic," Baffour said. The agency is still considering what it'll look like in the future.

"It only knows as much as the Doing What Works Library tells it," she said. So queries about topics that are in the library return rich results. But asking it about topics that aren't can be problematic.

For example, ask it about myths around the aforementioned learning styles and it defaults to a more general knowledge base scraped from Wikipedia articles, transcripts of recorded conversations and materials from "different corners of the Internet," Baffour said. "And as you can imagine, those sources might not have the most up-to-date and accurate information about education."

Indeed, a query about the topic returns this: "The myth concerning learning styles is that there is one specific style that works best for everyone. This is not true, as different people learn in different ways and have different preferences for how they learn. Additionally, there is no evidence that suggests that focusing on one particular learning style is more effective than focusing on multiple styles."

Not exactly accurate or helpful.

In the first place, the widely believed "myth" holds that people with different learning styles learn best when their preferred mode of learning is indulged - not that one style works best for everyone. At a more basic level, while many people may express preferences for ways to take in new information and study - receiving instruction verbally, for example, instead of via pictures - scientists have yet to find good evidence that material tuned to these preferences improves learning.

Unfortunately, at the moment the agency's bot doesn't confess whether it knows a lot or a little about a topic. Baffour said they want to change that soon. For now, however, that's just an aspiration.

鈥淚 think you’re more likely to get a confident chatbot producing inaccurate information than you are to get a self-aware chatbot admitting its false and incomplete knowledge,鈥 she said. 

Willingham, the UVA researcher, said a useful education-focused chatbot would not just have to incorporate reliable findings, but put them in context. For example, an answer to a query about the evidence for phonics instruction would properly note that, while the record is fairly strong, a lot of mediocre research and "hyperbolic claims" made in support of alternative methods serve to cloud the overall picture - a delicate but accurate detail.

"How is an aggregator going to negotiate that?" he said.

Asked if he thought a chatbot might soon replace him, Willingham, the author of books and articles that translate learning science into plain English, said he wouldn't make any predictions.

"I was never much of a futurist, but I hocked my crystal ball 15 years ago," he said.

Teen Mental Health Crisis Pushes More School Districts to Sue Social Media Giants

Fri, 31 Mar 2023

The teen mental health crisis has so taxed and alarmed school districts across the country that many are entering legal battles against the social media giants they say have helped cause it, including TikTok, Snap, Meta, YouTube and Google.

At least eleven school districts, one county, and one California county system that oversees 23 smaller districts have filed suits this year, representing roughly 469,000 students. 

Two others in Arizona are considering their own complaints, one superintendent told The 74. Eleven other districts have voted to pursue similar litigation. Many others across the country are on the verge of doing the same, according to a lawyer representing a New Jersey district.




"Schools, states, and Americans across the country are rightly pushing back against Big Tech putting profits over kids' safety online," Sen. Richard Blumenthal, co-sponsor of the bipartisan Kids Online Safety Act, told The 74. "These efforts, proliferated by harrowing stories from families amid a worsening youth mental health crisis, underscore the urgency for Congress to act."

Algorithms and platform design have "exploited the vulnerable brains of youth, hooking tens of millions of students across the country into positive feedback loops of excessive use and abuse of Defendants' social media platforms," Seattle Public Schools claimed in the first suit, filed this January.

Districts in Washington, Oregon, Arizona, New Jersey and elsewhere say tech companies intentionally designed their platforms to hook young users, exacerbating depression, anxiety, tech addiction and self-harm, and straining learning and district finances.

But the legal fight, whether tried or settled, will not be easy, outside counsel and at least one district leader said. 

鈥淲e don’t think that this is a slam dunk case. We think it’s going to be an uphill battle. But our board and I believe that this is in the best interest of our students to do this,鈥 said Andi Fourlis, superintendent of Arizona鈥檚 largest district, Mesa Public Schools. 鈥淚t鈥檚 about making the case that we need to do better for our kids.鈥 

Just how badly Mesa's teens are hurting is laid out in detail in court filings: More than a third are chronically absent, 3,500 more were involved in disciplinary incidents in 2021-22 than in 2019-20 and the district has seen a "surge" in suicidal ideation and anxiety.

Buried in the 111-page lawsuit, a high school senior's video essay illustrates the painful impacts of social media addiction: risky or self-destructive behavior, disconnection from friends.

Simultaneously, lawmakers are proposing bills to make platforms safer. Senate hearings are underway, featuring parents whose children died by suicide. TikTok's CEO testified this month to address concerns about exposure to harmful content. President Joe Biden flagged the issue in his last State of the Union address.

Both legislative and legal efforts are after similar goals: changing the algorithms and product design believed to be hurting kids. Through lawsuits, districts also seek financial compensation for the increased mental health services and training they've been forced to establish.

"The harms caused by social media companies have impacted the districts' ability to carry out their core mission of providing education. The expenditures are not sustainable and divert resources from classroom instruction and other programs," said Michael Innes, partner with Carella Byrne, Cecchi, Olstein, Brody & Agnello, a firm representing New Jersey schools.

Previous complaints against opioid and e-cigarette companies, which leveled public nuisance and negligence claims as districts' social media filings do, resulted in multimillion-dollar settlements.

But some legal experts say there's a key distinction in this case: Big Tech companies aren't the ones producing content on these platforms; individuals are. And the companies have some hefty legal protections.

"School districts are not in the business of suing people - the threshold for initiating litigation is very high," said Dean Kawamoto, a lawyer for Keller Rohrback, the Seattle-based firm representing four districts, and thousands of others in Juul litigation.

"I do think it says something that you've got a group of schools that have filed now, and I think more are going to join them," Kawamoto added.

Some outside counsel are skeptical.

"I think there are questions about whether the litigation system is even a coherent way to go about this," First Amendment scholar and Harvard Law professor Rebecca Tushnet told The 74. "It's very hard to use individual litigation to get systemic change, excepting in particular circumstances."

The exceptions, she added, have clear visions and specific outcomes, like requiring a doctor on-call for safer prison conditions. Those kinds of metrics are difficult to name when it comes to algorithms and mental health. 

What precedent (or lack thereof) tells us

Social media companies鈥 lawyers are likely to assert free speech protections early and often, including in initial motions to dismiss.

“The conventional wisdom is that if motions to dismiss are denied in cases like this, [companies] are much more likely to settle. Reality is actually a little more mixed,” Tushnet said, adding that when the claims go after business models, companies fight harder.

An added challenge is proving causal harm — that social media companies have caused student depression, anxiety, eating disorders or self-harm. The link is one that neuroscientists and researchers are still working to establish, though experts say there’s an urgent need for more research.

“This is a watershed moment where schools can really roll up their sleeves and do something because — not that they haven’t been in the past — but because it’s so obvious. It’s right in front of them. It’s impacting students’ education,” said Jerry Barone, chief clinical officer at Effective School Solutions, which brings mental health care to schools.

About 13.5% of teen girls say Instagram makes thoughts of suicide worse; 17% of teen girls say it makes eating disorders worse, according to Meta’s leaked internal research, first revealed by The Wall Street Journal.

Even if districts are able to provide proof, they may never see a judgment.

Public nuisance claims in tobacco and opioid mass torts were more successful in “inducing settlements, rather than in courthouse outcomes,” according to Robert Rabin, tort expert and professor at Stanford University.

While he’s not “dismissive” of districts’ efforts, “the precedents don’t supply clear-cut support for the claims here.”

The interim

As lawyers work out the details, students hang in the balance. Some are skeptical the suits will amount to anything at all, at least in their adolescence.

“Why do you guys waste so much time on these useless things that you know get nowhere, when you can do it with things that you know will get somewhere?” said Angela Ituarte, a sophomore at a Seattle high school.

Many young people interviewed by 社区黑料 described their social media use as a double-edged sword: affirming, a place where they learned about mental health or found community, particularly for queer students of color; and simultaneously dangerous, a place where they connected with adults when they were 14 and saw dangerous diets promoted.

Social media, Ituarte said, makes it seem as if self-harm and disordered eating “are the solution to everything. And it’s hard to get that out of those algorithms — even if you block the accounts or say you’re not interested it still keeps popping up. Usually it’s when things are bad, too.”

In a late February letter to senators, Meta touted a promising initiative to nudge teens who dwell on one topic for extended periods toward different content. Only 1 in 5 teens actually moved to a new topic during a weeklong trial.

To curb cyberbullying, users now get warnings for potentially offensive comments. People only edit or delete their message 50% of the time, according to the company鈥檚 responses to Senate inquiries. 

Meta, YouTube and Google did not respond to requests for comment. TikTok told 社区黑料 it cannot comment on ongoing litigation. The company has just started requiring users who say they are under 18 to enter a password after scrolling for an hour.

In a statement to 社区黑料, Snap said it is “constantly evaluating how we continue to make our platform safer.” Snap has partnered with mental health organizations to launch an in-app support system for users who may be experiencing a crisis, and acknowledged that the work may never be done.

The process has only just begun. If the suits move to trial, some districts will be chosen as bellwethers to represent the many plaintiffs, tasked with regularly contributing to a lengthy trial. 

Still, there’s no doubt in Fourlis’s mind.

“Sometimes you have to be the first to step forward to take a bold leap so that others can follow,” she said. “Being the superintendent of the largest school district in Arizona, what we do often sets precedents, and I have to be very strategic about that responsibility.”

Disclosure: Campbell Brown, Meta’s vice president of media partnerships, is a co-founder and member of the board of directors of 社区黑料. She played no role in the editing of this article.
