Data privacy is critical for Enterprise AI adoption

When using hyperscaler AI services, companies do not know how their data is being used in ongoing training and AI operations. Their data is also accessible to third-party administrators worldwide.
INADVERTENT SHARING OF CONFIDENTIAL INFORMATION

Samsung Employees Leak Sensitive Data to ChatGPT

Samsung engineers accidentally leaked confidential information, including source code and meeting recordings, to OpenAI while using the ChatGPT chatbot for work-related tasks. (Credit: Mashable.com)

Accidental Leaks
Samsung engineers shared confidential source code, requested code optimization from ChatGPT, and uploaded meeting recordings for note transcription, unintentionally exposing sensitive information
Limited Uploads Enforced
In response to the leaks, Samsung restricted ChatGPT upload capacity to 1,024 bytes per user
Internal AI Chatbot
Samsung is exploring the development of its own internal AI chatbot to mitigate future risks associated with external AI models like ChatGPT
CONCERNING TRENDS

Almost 40% of workers share sensitive information with AI tools without their employer's knowledge (study)

A recent cybersecurity report found concerning trends in how workers interact with artificial intelligence (AI). While 65% of respondents expressed concern about AI-related cybercrime, 38% admitted to sharing sensitive work information with AI tools without their employer's knowledge. This risky behavior highlights the lack of training on safe AI use: over half (52%) of employed participants reported receiving no training in this area.

Study size: over 7,000 individuals polled across the United States, UK, Canada, Germany, Australia, India, and New Zealand

PROTECT INTELLECTUAL PROPERTY

Data Privacy Benchmark Study

The annual Cisco Data Privacy Benchmark Study is one of many research-based, data-driven publications collectively known as the Cisco Cybersecurity Study Series. This double-blind study is based on a survey of over 2,600 security professionals in 12 countries around the world.

Generative AI puts AI capabilities in the hands of many more users, and 92% of organizations said they see it as a fundamentally different technology with novel challenges and concerns requiring new techniques to manage data and risk. Among the top concerns cited were that the use of GenAI could hurt the organization’s legal and intellectual property rights (69%), the information entered could be shared publicly or with competitors (68%), and that the information it returns to the user could be wrong (68%).
