Samsung engineers accidentally leaked confidential information, including source code and meeting recordings, to OpenAI while using the ChatGPT chatbot for work-related tasks. (Credit: Mashable.com)
A recent cybersecurity report found concerning trends in how workers interact with artificial intelligence (AI). While 65% of respondents expressed concern about AI-related cybercrime, 38% admitted to sharing sensitive work information with AI tools without their employer's knowledge. This risky behavior points to a lack of training on safe AI use: over half (52%) of employed participants reported receiving no training in this area.
Study size: a poll of over 7,000 individuals across the United States, the UK, Canada, Germany, Australia, India, and New Zealand
The annual Cisco Data Privacy Benchmark Study is one of many research-based, data-driven publications collectively known as the Cisco Cybersecurity Study Series. This double-blind study is based on a survey of over 2,600 security professionals in 12 countries.
Generative AI puts powerful capabilities in the hands of many more users, and 92% of organizations said they see it as a fundamentally different technology with novel challenges and concerns requiring new techniques to manage data and risk. The top concerns cited were that the use of GenAI could hurt the organization’s legal and intellectual property rights (69%), that the information entered could be shared publicly or with competitors (68%), and that the information it returns to the user could be wrong (68%).