TechRadar
Craig Hale

'The risk for SMBs is not reckless use of AI, but invisible workflow change': Legal firms are falling behind when it comes to setting rules for AI use

  • 43% of organizations still have no plans for AI policies, report finds
  • At the moment, workers are adopting AI more quickly than companies are writing policies
  • Nexos.ai calls for SMBs to get basic policies in place – they can evolve from there

Even though 70% of legal workers are already using general-purpose AI for work, 43% of organizations say they still don't have formal AI policies in place (and no plans to create them).

New research from Nexos.ai suggests the biggest risk relating to AI tools may actually stem from a lack of visibility and governance.

And SMBs are generally the most at risk, simply because they have fewer resources – both in terms of workers and procedures.

AI is mostly going unmanaged

Nexos.ai found workers regularly pasting contracts, NDAs or legal correspondence into public chatbots to save time, putting sensitive information at risk. While enterprise-grade AI products promise maximum data security and no customer data training, public versions aren't so tight.

Data security (46%) was cited as legal teams' biggest concern, ahead of ethical issues (42%) and legal privilege (39%), yet the way workers interact with public chatbots doesn't tally with those concerns.

Nexos.ai also noted that SMBs may already have legal AI workflows in use without them being formally established or recognized. Because AI adoption happens gradually and without governance, companies are left playing catch-up, trying to govern the correct and safe use of AI after employees have already started using the tools.

"The risk for SMBs is not reckless use of AI, but invisible workflow change," Nexos.ai Head of Product Zilvinas Girenas wrote.

But it doesn't need to be difficult – the report explains that a basic AI policy doesn't need to be complex. Defining approved tools, banned use cases, and restrictions on sensitive data could suffice – or at least improve on the current state of governance.

Looking ahead, Nexos.ai suggests companies start with a simple AI policy that keeps sensitive data out of unapproved tools. Ahead of widespread adoption, the report calls for companies to approve tools before teams take them up, and even once a tool is approved, Nexos.ai still recommends human oversight before AI-generated content is used in legal applications.

"If those tools get embedded before the company has defined approved use, data boundaries, and review steps, efficiency arrives faster than governance," Girenas concluded.

