
Posted 12th September 2024

Plugging Security Gaps with AI without Creating Additional Risk

AI has the potential to transform every aspect of business, from security to productivity. Yet companies’ headlong, unmanaged, rush to exploit innovation is creating unknown and understood risks that require urgent oversight, argues Mark Grindey, CEO, Zeus Cloud.


Business Potential

Generative AI (Gen AI) tools are fast becoming a core component of any business’s strategy – and one of the most powerful areas of deployment is IT security. Gen AI has a key role to play in addressing one of the biggest challenges within current IT security models: human error. From misconfiguration to misunderstanding, mistakes are easy to make in a complex, multi-tiered infrastructure that includes a mix of on-premises, public and private cloud deployments and multi-layered networks.

With hackers constantly looking to exploit such faults – and common attacks targeting known weaknesses – AI is fast becoming a vital tool in the security armoury, providing companies with a second line of defence by seeking out vulnerabilities. The speed with which AI can identify known vulnerabilities and highlight configuration errors is transformational, allowing companies both to plug security gaps and to prioritise areas of investment. It is also being used to highlight sensitive data within documents – such as credit card or passport numbers – that requires protection, and to provide predictive data management, helping businesses plan accurately for future data volumes.

Unmanaged Risk

With ever-expanding data sources to train the AI, the technology will only become more intuitive and more valuable. However, AI is far from perfect, and organisations’ inability to impose effective control on how and where AI is used is creating problem after problem. Running AI over internal data resources raises a raft of issues, from the quality and cleanliness of the data to the ownership of the resultant AI output. Once a commercially available AI tool such as Copilot has viewed a business’s data, it can never forget it. Since it can access sensitive corporate data from sources such as a company’s SharePoint sites, employee OneDrive storage, even Teams chats, commercially sensitive information can be inadvertently lost because those using AI do not understand the risk.

Indeed, research company Gartner has urged caution, stating that “using Copilot for Microsoft 365 exposes the risks of sensitive data and content exposure internally and externally, because it supports easy, natural-language access to unprotected content. Internal exposure of insufficiently protected sensitive information is a serious and realistic threat.”

Changes are required – first to companies’ data management strategies and second to the regulatory framework surrounding AI. Any business using AI needs far more clarity regarding data exposure: can data be segregated to protect business interests without undermining the value of using AI, or inadvertently undermining the quality of output by providing insufficiently broad information? Once used, who has access to those findings? How can such insight be retained internally to ensure confidentiality?

Regulatory Future

Business leaders across the globe are calling for AI regulation, but as yet there is no consensus as to how that can be achieved or who should be in charge. Is this a government role? If each government takes a different approach, the legal implications and potential costs could become a deterrent to innovation.

Or should the approach used to safeguard the Internet be extended to AI, with key policy and technical models administered by the Internet Corporation for Assigned Names and Numbers (ICANN)? Do we need AI licences that require AI-certified individuals to be in place before a business can run any AI tool across its data? Or simply different licensing models for AI tools that clarify data ownership, for example by running a tool within its own tenant within a client account to reduce the risk of data leakage? The latter would certainly be a good interim measure but, whatever regulatory approach is adopted, it must be led by security engineers – impartial individuals who understand the risks and who are not influenced by potential monetary gain, such as those who have committed to the Open Source model.

There are many options – and changes will likely result in a drop in income for AI providers. But given the explosion in AI usage, it is time to bite the bullet and accept that getting the right solution can be uncomfortable. It’s imperative to quickly determine the most efficient approach that is best for both the industry and for businesses, an approach that accelerates innovation while also protecting commercially sensitive information.

Categories: Business Advice, Technology

