Samsung has reportedly forbidden its staff from using well-known generative AI tools such as ChatGPT, Google Bard, and Bing, citing security concerns.
According to Bloomberg, the Korea-based corporation informed workers at one of its largest divisions about the new policy on Monday. The prohibition was reportedly implemented over concerns that data submitted to AI systems is stored on external servers, where it may end up being made public.
Samsung told personnel that interest in generative AI platforms such as ChatGPT has been growing both internally and externally. “While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about the security risks presented by generative AI.”
Generative AI gained widespread attention with the release of OpenAI’s ChatGPT in November, a chatbot with a potent AI engine that can write software, hold conversations, and compose poetry. Microsoft uses GPT-4, the technology underlying ChatGPT, to improve Bing search results, offer email-writing advice, and help create presentations.
The new policy arrives amid growing anxiety over the potential harms posed by AI. In March, hundreds of industry leaders and AI researchers signed a public statement asking prominent artificial intelligence labs to pause the development of advanced AI systems, citing “profound risks” to human society.
According to the memo, the new restriction was imposed after Samsung engineers accidentally disclosed internal source code by uploading it to ChatGPT.
“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” the memo stated. “However, we are temporarily limiting the use of generative AI until these measures are prepared.”
According to Bloomberg, the new rules prohibit the use of generative AI systems on Samsung-owned computers, tablets, and phones.
Samsung did not immediately respond to a request for comment.