
September 27, 2024
John Rood

    AI Compliance Obligations for EdTech: AI Regulation in Education

    I spent over a decade in the EdTech world. One of the biggest challenges we all face selling into institutions, whether that be K-12 or higher ed, is complying with the host of information security and other compliance requirements. I’ve got bad news – that’s about to get harder.

    AI Regulation

    Of course, if you are in EdTech, you’ve been watching AI closely. What you may or may not have been watching as closely is the host of regulations that are about to fall on both vendors and deployers of AI products.
    Let’s jump in.

    Upcoming Requirements

Let’s start with how regulators are thinking about the issue. A consensus is beginning to emerge about how global regulators will approach AI. This emerging consensus has a couple of features:

• A “risk-classification” approach. These laws ban certain AI uses outright, treat some use cases as “low-risk” with light or no compliance requirements, and designate others as “high-risk,” with significant compliance obligations.
      • High-risk systems will be required to have a substantial set of AI governance and compliance documents including:
        • A “risk management” system, requiring numerous internal safeguards
        • A data policy (and, in the EU, substantial limits as to collection of personal data)
        • Impact assessments performed on their technology (in the education context, likely primarily for bias)

Is EdTech AI defined as “high-risk”? Let’s look at the laws. (Note: I’m not a lawyer, definitely not your lawyer, and am not providing legal advice.)

      Relevant Regulations

      The Colorado “AI Act” (SB24-205) says: 

“High-risk artificial intelligence system” means any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision….”

“Consequential decision” means a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, …education enrollment or an education opportunity.” [Several other categories of consequential decisions are listed.]

      The EU AI Act includes among the list of high-risk systems (Annex III):

Education and vocational training:

(a) AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels;

(b) AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels;

(c) AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels;

(d) AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions at all levels.

The way I read it, again not providing legal advice, is that quite a few common EdTech applications fall into that bucket (a rough triage sketch follows this list), including:

      • Anything touching admissions 
      • Anything touching financial aid decisions
      • Grading in some contexts
      • Standardized testing in many contexts 
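
To make those categories concrete, here is a minimal, illustrative Python sketch of how a vendor might triage its own product catalog against the Colorado and EU categories quoted above. The use-case labels and the mapping are my own assumptions for illustration, not an official taxonomy, and this is not legal advice.

```python
# Illustrative first-pass triage of EdTech AI use cases against the
# "high-risk" categories in Colorado SB24-205 and EU AI Act Annex III.
# The use-case labels and citations below are assumptions for
# illustration only; an actual classification needs legal review.

HIGH_RISK_EDTECH_USES = {
    "admissions": "EU Annex III 3(a); Colorado 'education enrollment' decision",
    "financial_aid": "Colorado 'cost or terms of ... an education opportunity'",
    "grading": "EU Annex III 3(b): evaluating learning outcomes",
    "placement_testing": "EU Annex III 3(c): assessing level of education",
    "proctoring": "EU Annex III 3(d): detecting prohibited behaviour in tests",
}

def triage(use_case: str) -> str:
    """Return a rough risk label for a single EdTech AI use case."""
    basis = HIGH_RISK_EDTECH_USES.get(use_case)
    if basis is not None:
        return f"likely HIGH-RISK ({basis}): plan a full governance program"
    return "not obviously high-risk: document the assessment anyway"

for product_use in ["admissions", "proctoring", "content_recommendation"]:
    print(f"{product_use}: {triage(product_use)}")
```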

      Geography

Either the Colorado law or the EU law is likely to cover the majority of higher education institutions; there are no limitations on, for example, the number of people affected. So in theory, if you use AI in admissions on even one student located in Colorado without complying, you have violated the law.

Now certainly, K-12 doesn’t have the same geographic reach. That said, be aware that virtually every “blue” and “purple” state is considering substantial regulation of AI systems.

      How This Will Go

I’ve been working in AI compliance for several years, which has given me time to see a complete “regulation cycle.” Here is how this went for a New York City law requiring employers to obtain independent bias audits of the AI systems they use in hiring.

• NYC passes a law requiring employers to conduct a bias audit
• Employers call their vendors; the vendors say the law doesn’t cover them for one legal reason or another
• The employers’ general counsel see that there is still risk
• Employers call their vendors and say they won’t renew their contracts until the vendor gets the audit
• The vendor gets the audit (and pays for it)

So, if you are an EdTech AI vendor and show this to your legal counsel, they may tell you that the law doesn’t specifically apply to you for one reason or another. But I can tell you from experience that, almost regardless of the actual wording of the law, if your customers (the institutions’ general counsel) believe there is even some risk, they are quite likely to require you to comply.

Note also that both Colorado and the EU place compliance obligations on both the developer and the deployer of high-risk AI, so you, the vendor, may also have a direct legal compliance obligation.

      AI compliance is at least as much of a sales and marketing requirement as it is a legal requirement. 

What should I do?

If you are an EdTech vendor, you should start preparing the set of AI governance documents now required by Colorado and the EU. This is a substantial undertaking. I liken it to implementing an IT security program like SOC 2, but harder. Plan for a 2-4 month process.
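
As a rough feel for the scope of that project, here is a minimal sketch of the required document set (from the list earlier in this post) as a simple tracking checklist. The owner and status fields are placeholders I added for illustration, not requirements from either law.

```python
from dataclasses import dataclass, field

# Minimal tracker for the governance artifacts discussed above.
# The artifact names come from the Colorado/EU requirements described
# in this post; owners and statuses are placeholder assumptions.

@dataclass
class Artifact:
    name: str
    owner: str = "unassigned"
    status: str = "not started"  # not started / drafting / approved

@dataclass
class GovernanceProgram:
    artifacts: list[Artifact] = field(default_factory=lambda: [
        Artifact("Risk management system (internal safeguards)"),
        Artifact("Data policy (incl. EU limits on personal-data collection)"),
        Artifact("Impact assessment (primarily bias, in education)"),
    ])

    def gaps(self) -> list[str]:
        """Names of artifacts not yet approved."""
        return [a.name for a in self.artifacts if a.status != "approved"]

program = GovernanceProgram()
print("Outstanding before deployment:", program.gaps())
```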

      And yes – Proceptual helps companies implement their suite of AI governance requirements.

      John Rood

      John is a sought-after expert on emerging compliance issues related to AI in hiring and HR. He has spoken at the national SHRM conference, and his writing has appeared in HR Brew, Tech Target, and other publications. Prior to Proceptual, John was founder at Next Step Test Preparation, which became a leader in the pre-medical test preparation industry before selling to private equity. He lives in the Chicago area and is a graduate of Michigan State University and the University of Chicago.
