As artificial intelligence (AI) sweeps through the corporate sector like a sledgehammer, companies are scrambling to set policies and processes for how the technology may be used at work. One of the most pressing issues is how to safeguard confidential corporate information and keep it from being shared with ChatGPT. Samsung recently discovered that staff members had divulged confidential corporate information to the AI chatbot on three separate occasions. To curb further leakage, Samsung tightened controls by issuing a company-wide directive capping employee ChatGPT prompts at 1,024 bytes. However, because OpenAI retains submitted prompt data, businesses may have no choice but to prohibit the use of ChatGPT outright until appropriate compliance standards are established.
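A byte cap like the one described above could be enforced at an internal gateway before a prompt ever reaches an external service. The following is a minimal sketch, assuming a simple pre-submission check; the function and constant names are illustrative, not Samsung's actual tooling:

```python
MAX_PROMPT_BYTES = 1024  # hypothetical cap mirroring the reported limit

def enforce_prompt_cap(prompt: str, limit: int = MAX_PROMPT_BYTES) -> str:
    """Reject prompts whose UTF-8 encoding exceeds the byte limit."""
    size = len(prompt.encode("utf-8"))
    if size > limit:
        raise ValueError(f"Prompt is {size} bytes; limit is {limit} bytes")
    return prompt
```

Measuring the encoded byte length (rather than the character count) matters because multi-byte characters would otherwise slip past a naive check.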
Samsung now joins a growing list of well-known companies that have already restricted employee use of ChatGPT, including Amazon, Walmart, and J.P. Morgan Chase. Beyond the per-user upload limit, Samsung is investigating who was responsible for the leaks, and to avoid further embarrassing mistakes it is considering building its own internal AI chatbot. Samsung could attempt to retrieve some of its exposed data, but that outcome is doubtful.