Artificial intelligence tools are showing up in more corners of retirement plan operations: record-keeping, investment advice, participant communications. That’s creating new questions for plan sponsors and fiduciaries about how to meet their ERISA obligations while the technology keeps shifting underfoot.
A recent Bloomberg Law article examines the emerging risks, drawing on perspectives from several benefits attorneys. The consensus: the same fiduciary principles still apply, but how they play out in practice is changing fast.
“AI is clearly increasing the expectation and the ability of plan sponsors and fiduciaries to test the advice they’re getting” from the tool or from outside advisors using it, said Stephen Rosenberg, head of the Wagner Law Group’s ERISA litigation practice.
The gray areas are real. Who is liable if an AI tool produces inaccurate information? How transparent should vendors be about their use of AI? What happens when a hallucination, AI-speak for a confident but wrong answer, slips into investment research or fee comparisons?
David Levine, a principal at Groom Law Group and founder of the AI tool PlanPort, put it plainly: “When something goes wrong, who’s on the hook and who owns it? Are there biases? Are there other challenges?”
ERISA’s duty of prudence requires fiduciaries to act with the care and diligence a knowledgeable person would use to protect participants’ interests. Using AI tools may complicate that standard, attorneys say, since the logic behind AI outputs can be opaque—and fiduciaries who use the tools without proper understanding could be introducing legal risk.
Marcia Wagner, founder of The Wagner Law Group, said the guiding legal principles remain the same regardless of whether AI is involved. But acting prudently is context-specific and still requires human judgment. Mr. Rosenberg added that plan managers will need to stay vigilant about AI hallucinations: “They’ve got to figure out how they’re going to make sure that it’s right.”
On the vendor side, Melissa Ostrower, co-leader of the employee benefits practice group at Jackson Lewis, said fiduciaries will need to ask direct questions about whether service providers are using AI, how they’re using it, and who bears responsibility if something goes wrong. Service provider contracts should be updated to reflect any AI use, and fiduciaries need to consider how participant data is handled when it’s fed into these systems.
“Prudent today may not be prudent even in six months or a year because the technology is moving so quickly,” Ms. Ostrower said.
The SPARK Institute, a trade group representing the retirement plan industry, recently hosted a workshop on AI and has established a new AI governance committee to help members navigate these questions. Executive director Tim Rouse said plan sponsors have been increasingly asking how AI is being used and under what circumstances—exactly the kind of oversight fiduciaries should be exercising.
The bottom line: AI may be new, but the fiduciary expectations aren’t. Document your processes, ask your vendors the hard questions, and don’t assume the technology is right just because it spits out a credible-sounding answer.