Why Future-proofing Cybersecurity Regulatory Frameworks Is Essential
Learn why and how to design adaptable regulatory frameworks in the age of AI.
November 12, 2024
By Ghazi Ben Amor, Zama
Last month, the UK government introduced new regulatory measures to enhance cybersecurity in AI models and software. Designed to fortify digital systems against hacking and sabotage, these changes are intended to foster confidence in the use of AI across various industries, confidence that is very much needed.
In fact, with the cost of cybercrime already projected to reach $13.82 trillion by 2028, and likely to grow even faster as a new generation of cybercriminals gains access to increasingly sophisticated AI, trust in the technology is understandably beginning to weaken.
While these new measures represent significant progress in addressing current cybersecurity challenges, there are still questions and concerns around the future adaptability and efficacy of regulatory frameworks, particularly among the developer community.
In a recent survey of developers across the UK and the U.S., 72% said that regulations made to protect privacy are not built for the future, and 56% believe that dynamic regulatory structures, which are meant to adapt to technological advancements, could pose an actual threat. A particularly alarming aspect is the security risk associated with AI systems that require vast datasets for training, datasets that often include sensitive personal information. Changing or inconsistent regulations could create gaps in how this sensitive data is protected, increasing the risk of breaches or misuse.
As regulations evolve, ensuring the security and privacy of the personal information used in AI training looks set to become increasingly difficult, which could lead to severe consequences for both individuals and organizations.
The same survey revealed that 30% of developers believe regulators lack the skills needed to understand the technology they're tasked with regulating.
How to Design Adaptable and Effective Regulatory Frameworks
With skills and knowledge in question, alongside rapidly advancing AI and cybersecurity threats, what exactly should regulators keep in mind when creating regulatory frameworks that are both adaptable and effective?
It's my view that, firstly, regulators should know all the options on the table when it comes to privacy-enhancing technologies (PETs). While some PETs are already being used to minimize the risk of data breaches, others are evolving as I write, with immense potential for securing sensitive data and protecting privacy. Knowing the advantages and limitations of each helps create a flexible approach to adoption, rather than a single policy that tries to cover everything at once. For example:
Authentication technologies: Multifactor authentication (MFA), commonly integrated by developers into authentication systems to provide an additional layer of security, is used in applications ranging from online banking to enterprise software. Biometric authentication is another advanced and secure method in use today, relying on unique physical traits such as fingerprints or facial recognition. Looking ahead, the adoption of federated identity mechanisms such as FIDO (Fast Identity Online) or OpenID Connect holds promise. These mechanisms not only enhance security but also streamline user authentication across platforms, offering a unified and secure approach to identity management.
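To make the MFA layer concrete, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238), the mechanism behind most authenticator apps. The shared secret below is illustrative; production systems should use a vetted library and store secrets securely.

```python
# Minimal TOTP (RFC 6238) sketch: server and authenticator app derive
# the same short-lived code from a shared secret. Illustrative only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # moving time factor
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    # Constant-time comparison avoids leaking code digits via timing.
    return hmac.compare_digest(totp(secret_b32), submitted)

secret = "JBSWY3DPEHPK3PXP"           # hypothetical shared secret (base32)
print(totp(secret))                   # six-digit code, rotates every 30 seconds
```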
End-to-End Encryption (E2EE): This technology provides robust security by ensuring data is encrypted from sender to recipient, preventing unauthorized access even by service providers. However, implementing E2EE can be complex and resource-intensive, often requiring significant computational power and sophisticated key management. And because E2EE prevents service providers from accessing the data, it can also hinder their ability to assist with data recovery or to comply with legal requests for information, for example during a criminal investigation.
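As a minimal sketch of the E2EE model, assuming the PyNaCl library (Python bindings for libsodium): each party generates keys on their own device, and only public keys are exchanged, so the relaying service never sees plaintext.

```python
# E2EE sketch with PyNaCl (pip install pynacl). Only public keys leave
# each device; a server relaying `sealed` cannot read the message.
from nacl.public import Box, PrivateKey

alice_sk = PrivateKey.generate()      # generated on Alice's device
bob_sk = PrivateKey.generate()        # generated on Bob's device

# Alice encrypts with her private key and Bob's public key.
sealed = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon")

# Only Bob, holding his private key, can decrypt.
assert Box(bob_sk, alice_sk.public_key).decrypt(sealed) == b"meet at noon"
```

The same asymmetry is what complicates lawful access: without a party's private key, not even the service operator can recover the plaintext.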
Fully Homomorphic Encryption (FHE): Although FHE is still in its early stages, it has made significant advances in recent years. A type of encryption that supports processing data without decrypting it, FHE is a natural fit for AI, as it allows organizations to wield the power of the technology without compromising users' privacy. For example, financial institutions can use FHE to confidentially train fraud detection AI models across banks without exposing any personal data, and healthcare providers can perform predictive diagnostics without exposing their patients' private information.
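Production FHE schemes support arbitrary computation on ciphertexts; as a self-contained illustration of the core idea, here is a toy version of the Paillier cryptosystem, which is homomorphic for addition only. The tiny hard-coded primes are for demonstration and offer no real security.

```python
# Toy Paillier cryptosystem: multiplying two ciphertexts yields a
# ciphertext of the SUM of the plaintexts, so a server can add
# encrypted values without ever seeing them. Demo-sized keys only.
import math
import random

p, q = 499, 547                      # toy primes; real keys use ~2048-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)         # private key
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:       # randomness must be coprime to n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

a, b = encrypt(20), encrypt(22)
assert decrypt((a * b) % n2) == 42   # the sum was computed on ciphertexts
```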
Multi-party Computation (MPC): This technology complements FHE by giving an end user the ability to decrypt encrypted data once it has been verified that they have the right to access it. MPC allows a quorum of designated entities to run a collaborative protocol that reaches consensus on access control before re-encrypting the data from the protocol's public encryption key to the end user's public encryption key, thus granting the end user access to the clear data. Each entity in the quorum holds only a piece of the protocol's private decryption key and is therefore unable to decrypt any data on its own; the clear data is never made available to anyone but the end user.
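A building block behind such quorum-based access control is threshold secret sharing: each entity holds one share of the private key, and only a sufficient subset can act. Below is a minimal sketch of Shamir's scheme with illustrative parameters; note that real MPC protocols compute partial decryptions from the shares rather than ever reconstructing the key in one place, as this toy example does.

```python
# Shamir secret sharing: split a key into n shares so that any t of
# them reconstruct it, while fewer than t reveal nothing about it.
import random

PRIME = 2 ** 127 - 1                 # field modulus (a Mersenne prime)

def split(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    # Random degree t-1 polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789                      # stand-in for a private decryption key
shares = split(key, n=5, t=3)        # a 3-of-5 quorum
assert reconstruct(shares[:3]) == key
assert reconstruct(shares[:2] + shares[4:]) == key
```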
The Importance of Working Together, with Ongoing Evaluation and Adaptation
Once regulators have a good and current understanding of PETs (and there are many more than those listed above), the next step is for policymakers to ensure regulations don't stifle technological advancement while still protecting against cyberthreats.
To craft nuanced and effective privacy policies that evolve alongside technological advancements, it's key to remember that policymakers don't operate in a vacuum, and no one expects them to carry this responsibility alone. Instead, they should work alongside the creators of the technology, who in turn should design their tech with existing frameworks in mind rather than expecting new ones to adapt.
Incorporating continuous learning within regulatory organizations is also crucial, as is allowing employees to attend industry events and conferences to stay up to speed on the latest developments and meet with experts. Where possible, regulators should build collaborations with industry, for example by inviting representatives of tech companies to give internal seminars or demonstrations.
It's my strong belief that all of the above should be factored in as we integrate increasingly complex systems like AI, IoT, and advanced data analytics into our daily lives, and as the potential for cyberthreats grows.
By future-proofing regulations, we can ensure that we're not constantly playing catch-up with cybercriminals but are instead proactively protecting our digital infrastructure. By adopting a dynamic and adaptive regulatory framework, we can better safeguard sensitive data, protect user privacy, and maintain public trust in digital technologies.
About the author:
Ghazi Ben Amor is vice president of Corporate Development at Zama. He has worked in cybersecurity for more than 20 years in roles spanning engineering, strategy, investment, and finance. At Zama, he heads partnership development with a focus on cloud providers, hardware accelerators, and financial institutions.