Organizations in at least 28 states and the District of Columbia have released guidelines for the use of AI in K–12 schools.
According to an analysis by AI for Education, an advocacy group that trains teachers in AI literacy, more than half of states have issued school guidance that defines artificial intelligence, sets out best practices for using AI systems, and more.
According to Amanda Bickerstaff, CEO and co-founder of AI for Education, teachers and students need substantial state-level support to navigate the rapidly evolving technology, even as the Trump administration has sought to loosen federal and state AI regulations in an effort to spur innovation.
"Academic integrity is what most people think of when it comes to the use of AI in schools," she added. "Providing basic safety standards around responsible use, and giving people the chance to learn what is appropriate, are two of the main concerns we've seen and the main drivers of the push for AI guidance at the state and district levels."
North Carolina, one of the first states to release AI guidelines for schools last year, aimed to research and define generative artificial intelligence for potential classroom use. The guidance also includes resources for teachers and students who want to learn how to engage effectively with AI models.
Beyond classroom instruction, some states emphasize ethical considerations for certain AI models. After issuing its original framework in January, Georgia released additional guidance in June detailing ethical questions for educators to weigh before adopting the technology.
Maine, Missouri, Nevada, and New Mexico also released AI guidelines for schools this year.
According to Maddy Dwyer, a policy analyst on the Equity in Civic Technology team at the Center for Democracy & Technology, a nonprofit dedicated to advancing civil rights in the digital age, states are stepping up to fill a critical gap left by the absence of federal legislation.
In a recent blog post, Dwyer noted that while most state AI guidance for schools focuses on potential benefits, risks, and the need for human oversight, many of the frameworks leave out important topics such as community engagement and deepfakes (manipulated images and videos).
"I believe it is essential that states close this gap to ensure that the deployment of AI meets children's needs and improves, rather than diminishes, their educational experiences," she said.