Organizations are increasingly banning generative AI tools such as ChatGPT, citing concerns over privacy, security, and reputational damage.
In a new report published by BlackBerry, 66% of the organizations it surveyed said they will prohibit the popular AI writing tool and others like it in the workplace, while 76% of IT decision-makers agreed that employers have the right to control what software workers use for their jobs.
What's more, 69% of the organizations implementing bans said the bans would be permanent or long term, such is the perceived risk the tools pose to company security and privacy.
AI conflict
However, there is also a conflict: just over half (54%) of organizations acknowledge that powerful AI like ChatGPT could boost productivity, thanks to its ability to accomplish a range of tasks much faster than a human could.
And while IT decision-makers agree with the right to ban such tools, 66% also thought that such bans amounted to "excessive control" over corporate and BYO devices.
When it came to using generative AI for cybersecurity purposes, a different picture emerged: 74% were in favor, perhaps in an effort to counter attackers' own use of AI, since anyone can access these tools, and even those without technical skills can develop and deploy malware with relative ease.
Given the advantages that AI tools like ChatGPT can confer, Shishir Singh, CTO of Cybersecurity at BlackBerry, advises a more measured approach:
“Banning Generative AI applications in the workplace can mean a wealth of potential business benefits are quashed. As platforms mature and regulations take effect, flexibility could be introduced into organizational policies. The key will be in having the right tools in place for visibility, monitoring and management of applications used in the workplace.”
No doubt companies have been spooked by stories of workers leaking sensitive data to ChatGPT - most notably employees at Samsung, who entered information pertaining to confidential meetings and technical data into the large language model. That information now resides on the servers of OpenAI, the developer of ChatGPT, and there is no way for the electronics giant to delete it.
To alleviate fears around private data being leaked, Microsoft is planning a more secure version of the GPT model, which it says will not send company data to the public-facing OpenAI servers.