What the Recent AI Regulations Mean for Your Business

November 6, 2023 | 6 min read


AI has everyone talking, even the government. Here is what you need to know about embracing cutting-edge technology while keeping your business safe.

President Biden issued a landmark Executive Order on Artificial Intelligence (AI) in late October. The order is intended to ensure that America is at the forefront of embracing what AI has to offer while also managing the inherent risks of the rapidly advancing technology. It sets standards for AI safety and security, protects Americans’ privacy, and promotes innovation and competition by encouraging tech companies to develop tools to ensure their AI systems are safe, secure, and trustworthy.

Why It Matters

A White House fact sheet on the order outlines the steps companies and the government will be directed to take to encourage responsible AI, including requiring developers to share internal testing information for transparency and to build in safeguards protecting Americans from algorithmic discrimination and privacy harms. There are also cybersecurity concerns related to the vast amounts of data collected to train and inform AI systems.

“Fundamental changes are coming to all industries, including technology and cybersecurity, due to the increasing use and promise of AI,” said Milan Patel, global head of managed security services (MSS) at BlueVoyant. “The most advanced companies have been and will continue to use AI to support their mission. In cybersecurity, industry-leading vendors, such as BlueVoyant, are using AI and machine learning to help clients find threats faster, backed by human-led expertise. At the same time, as defenders use AI more, so will the bad actors. The defenders must stay on guard for new AI-fueled attacks, and even use AI to fight these new AI attacks.”

Why We Are Starting to See Government Regulations

With the introduction of ChatGPT in November 2022, talk of AI was suddenly everywhere. Along with the promise of increased efficiency and business transformation came concerns about ethics, safety, and security that could accompany the use of AI. These concerns led governments to recognize the need to step in and produce guidelines and regulations around the development and testing of AI tools.

Tasked with promoting technological innovation and seizing the opportunity of AI while also mitigating risk by ensuring the technology is developed ethically and securely, the Executive Order is a strong next step: it follows the earlier release of the Blueprint for an AI Bill of Rights and builds upon the commitments the Administration received from leading AI companies back in July.

An Executive Order is not a permanent law, but it is a step in the right direction toward providing standards around generative AI. It comes as Congress holds forums with AI experts to inform future legislation, and as Vice President Harris prepares to participate in a U.K. AI summit later this week.

“AI offers much promise when it comes to efficiency, but, like any new technology, must be monitored to avoid abuse,” said Austin Berglas, global head of professional services. “We welcome efforts to provide standards around the use of AI to help ensure it fulfills its promise without becoming a source for data leaks or an aid to cyber criminals. Given BlueVoyant’s long history and deep expertise in AI, we are excited to be helping clients safely implement AI into their enterprise with proper protocols to avoid abuse.”

Global Implications and Regulations

Just as cyber crime crosses borders, so too does AI. The Biden Administration says it is working with governments around the globe on AI governance. From the White House announcement: "The Administration has already consulted widely on AI governance frameworks over the past several months—engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. The actions taken today support and complement Japan’s leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations."

BlueVoyant is closely following these developments around the world. As a global company with regional staff in all major geographies, we are ready to act as a strategic advisor, helping companies build and implement secure AI strategies and 24x7 monitoring programs for the AI infrastructure within their business.

“Global collaboration is needed when it comes to AI and security, as the best defense against abuse is collaboration between the public and private sectors,” said Robert Hannigan, head of international business for Europe and the Middle East. “It is best to bake security into AI now as enterprises are implementing it more broadly. If enterprises wait, they may struggle to prevent vulnerabilities and attacks.”

What Does the Growth of AI Mean for Private Businesses?

AI brings the ability to create efficiencies by automating tasks and supporting business decision-making. AI’s ability to distill mountains of complex data down to actionable insights means that businesses will be able to use AI automation to make quicker decisions, improve business operations, and enhance the customer experience.

The downside is security risk. AI systems rely on vast amounts of data, often including sensitive and personal information. Businesses must ensure that proper security measures are in place to reduce the risk of unauthorized access or data breaches. Many AI services use company and potentially customer data to improve the experience. This is data cyber criminals would like to access, so companies must conduct risk analyses and maintain strong cyber hygiene.

In addition, AI must be closely monitored to ensure nefarious actors do not change its parameters, which could compromise security. That is why AI needs to be backed by human-led expertise to ensure it properly fulfills its role.

The Importance of Building an AI Strategy with Security in Mind

Every organization is different. Where and how to use AI within your business are questions an in-house security team needs to address based on your security needs, AI skills, cybersecurity expertise, and depth of knowledge of your security stack and platform. Your security team needs to ensure that:

  • Data protection mechanisms, such as encryption, access controls, and secure data storage, are implemented and hardened.
  • Regular security assessments and audits are conducted to identify and rectify vulnerabilities in AI systems, reducing risk.
  • Regular red team testing is conducted to fortify the AI model against attacks.
  • Strong model training and anomaly detection are in place to thwart and/or detect suspicious behavior.
  • Behavioral analytics are implemented to detect any suspicious activities from misuse of credentials internally.
  • Continuous monitoring is in place to detect and mitigate unauthorized AI model tampering.
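To make the last point concrete, the sketch below shows one simple way continuous monitoring for model tampering can work: record a cryptographic fingerprint of the deployed model artifact at release time, then periodically re-check it. This is a minimal, hypothetical illustration (the function names, file paths, and check cadence are our assumptions), not a description of any particular product or the only viable approach.

```python
import hashlib


def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a model artifact on disk."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large model files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def check_integrity(path: str, trusted_digest: str) -> bool:
    """Compare the artifact's current digest against a trusted baseline.

    A mismatch means the file changed since the baseline was recorded --
    for example, unauthorized tampering with model weights.
    """
    return fingerprint(path) == trusted_digest
```

In practice, the trusted baseline digest would live in a secure store separate from the model host, and a mismatch would raise an alert for the security team to investigate rather than simply return False.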

In addition to the above, companies need to ensure that their AI models and activities comply with the new laws and regulations being developed globally. BlueVoyant is prepared to work with your legal and compliance teams to conduct security assessments, establish governance frameworks, and manage and monitor your internal and external environment to protect your data and mitigate legal and regulatory risks. With the right security measures and a proactive approach to risk management, AI can deliver on its promise of increased efficiency and faster business decision-making without exponentially increasing risks to your business operations or reputation.

If you are interested in learning more about BlueVoyant's offerings, please contact us or your BlueVoyant representative.