‘Splintered’ AI regulations could harm the pursuit of advancements and the understanding of where the guardrails lie

As government regulatory agencies in North America, Europe, and Asia contend with the legal, safety, national security, and ethical questions raised by advances in artificial intelligence, business leaders say the disparate rules are creating short-term complexity.

“The big story is that everyone is pursuing AI regulations and when you have 100 standards, you have no standards,” says Danny Tobey, chair of the AI and data analytics practice at law firm DLA Piper. 

The European Union is the furthest along, having established rules that were approved by the bloc’s parliament and could take effect as soon as 2025. The 27-nation bloc is leaning on a risk-based approach that could ban the technology outright in extreme cases and would require pre-market approval for some “high-risk” AI systems. China and India are among the countries considering their own paths to regulating AI.

Executives say they expect to lean on frameworks already established for other technologies and apply them to whatever regulations emerge for AI. Juniper Networks hires people to “basically watch compliance laws around the world,” says Bob Friday, chief AI officer at the company, which sells networking and cybersecurity products in 175 countries.

Friday acknowledges that AI presents new complexities. “If you build a plane or a car, there are plenty of regulations to make sure that car or that plane is safe for the public,” says Friday. But AI has cognitive reasoning skills. “That’s not totally deterministic,” he adds. “These systems don’t have consistent behavior.” 

DLA Piper anticipates Europe will be the most stringent on AI, mirroring the region’s approach to other technologies. The firm therefore pushes clients toward general principles that can be applied broadly: keep a human in the loop, test AI both before and after launch, and offer clear explanations of the technology whenever possible.

“We’re developing baseline approaches for a lot of our clients who are multinationals because it’s too much to have multiple control systems within one company,” says Tobey.

“There’s a lot of commonalities when I think about the way we approach data use more generally,” says Elise Houlik, chief privacy officer at Intuit, the company behind TurboTax and Credit Karma. “Being very forthright with what you’re doing, giving the right notice, giving the right transparency, understanding where consumer choice should be involved. That kind of thing ports over very well into the AI space.”

Intuit routinely meets with policymakers to discuss AI, with the goal of ensuring regulatory language is clear and won’t limit innovation. The tension around AI mostly stems from consumer apprehension about the technology, Houlik says. People want to know when they are engaging with AI, whether they can opt in to or out of it, and exactly how their data is being used.

“And then the next layer down would be, ‘Okay, I’ve convinced you this is valuable, and now it’s my job to make sure it’s secure, make sure you are getting the best results possible, and make sure the right data is being pulled in and used at the right point in time,’” says Houlik.

Stateside, at least 40 states, Puerto Rico, the Virgin Islands, and Washington, D.C., introduced AI bills during the 2024 legislative session, with focuses ranging from election-related content, child pornography, and other criminal uses to healthcare decisions and the transparency of data usage.

“If I had to treat each state like a little country, that would be like 50 little countries,” says Friday, who adds that such splintered regulations could create another layer of costs for businesses to keep track of.

Regulators also have to sort out who would be liable for any infractions. Would the maker of a large language model face fines or enforcement actions if its model ran afoul of regulation in China or the U.S., or would liability fall on the company that deployed the technology? And how would such regulation affect large firms that are building their own proprietary AI models while also leaning on models created by tech giants like Microsoft or Google?

There’s also murkiness around the risk-based approach being pursued in Europe. Experts say that while low-risk AI is the easiest to agree upon, the line between high-risk and medium-risk systems will be harder to draw.

When Europe aimed regulation at social media companies like Meta, other technology providers found themselves in the crosshairs. “We had to comply with all the rules of privacy in Europe,” says Friday, even though networking companies weren’t the intended target of those regulations.

Tom Siebel, the founder and CEO of C3.ai, isn’t a fan of any of the regulatory talks coming out of the U.S. or European markets. “What they’re proposing to do is criminalize science,” says Siebel. “I don’t think they’re well considered, and I think the people writing them have no idea what they are saying.”

With millions of algorithms being published around the world, Siebel worries that regulators would be unable to keep up with the volume. Nor does he believe they could read the algorithms and determine whether they were safe, even with advance time to examine the technology.

Siebel acknowledges that governments must act on AI but advocates for legislation rather than regulation. And within the private sector, he says that ultimately the CEO should be held accountable for the safety of the AI they are creating or using.

“Do I think that we need to put rails on the use of AI?” asks Siebel. “Absolutely.”
