What you need to know
- According to a new report by CNBC, Microsoft employees were temporarily restricted from accessing ChatGPT.
- Microsoft stated that it restricted access due to security and privacy concerns.
- A Microsoft spokesperson says the block was a mistake that occurred while the company was testing endpoint control systems for large language models (LLMs).
- Microsoft recommends using Bing Chat Enterprise and ChatGPT Enterprise instead, as both offer stronger security and privacy protections.
Per Microsoft's Work Trend Index report, 70% of surveyed employees said they were ready to adopt the technology and incorporate it into their workflows to handle mundane tasks.
It's no surprise that Microsoft employees use AI to handle some tasks within the organization, especially after the company extended its partnership with OpenAI through a multi-billion dollar investment. However, a recent CNBC report says Microsoft employees were briefly restricted from accessing ChatGPT on Thursday.
According to people familiar with the matter, Microsoft briefly restricted access to the AI-powered tool over "security and data concerns." The company issued the following statement on the issue:
"While it is true that Microsoft has invested in OpenAI, and that ChatGPT has built-in safeguards to prevent improper use, the website is nevertheless a third-party external service. That means you must exercise caution using it due to risks of privacy and security. This goes for any other external AI services, such as Midjourney or Replika, as well."
Speaking to CNBC, Microsoft indicated that the ChatGPT restriction was a mistake that occurred while the company was testing endpoint control systems for large language models.
It's evident that there's considerable concern surrounding the technology's safety and privacy. President Biden issued an Executive Order addressing many of these concerns, but there's still an urgent need for guardrails and concrete measures to help prevent generative AI from spiraling out of control.
This news comes after OpenAI confirmed a ChatGPT outage caused by a DDoS attack. The outage prevented users from fully leveraging the chatbot's capabilities, serving them error messages instead.
Microsoft is on top of the AI security situation
In June, a cybersecurity firm issued a report indicating that over 100,000 ChatGPT credentials had been traded on dark web marketplaces over the preceding 12 months. The firm added that attackers used info-stealing malware to harvest these credentials and recommended that users change their passwords regularly to keep hackers at bay.
Another report cited hackers using increasingly sophisticated techniques, including generative AI, to launch malicious attacks on unsuspecting users. With this in mind, it's not unreasonable for Microsoft to restrict the use of third-party AI-powered tools, especially over security and privacy concerns.
What are your thoughts on AI safety and privacy? Let us know in the comments.