The Growing Need for AI Governance

Teresa Wingfield

August 14, 2023

It may not seem like it, but Artificial Intelligence (AI) has been around for a long time. The earliest successful AI program was written in 1951 by Christopher Strachey, who later became director of the Programming Research Group at the University of Oxford.

ChatGPT is a chatbot that uses AI to perform a variety of tasks, including engaging in conversation, generating text, and writing code. The chatbot had more than 100 million active users in January 2023, just two months after it launched, making it the fastest-growing consumer application in history. As AI becomes more readily available to the masses, there is growing concern about unethical applications of the technology.

The Need for AI Governance

The purpose of AI governance is to address issues concerning the ethical use of AI, including transparency, bias, privacy, accountability, and safety. The goal is to use AI in ways that maximize benefits while minimizing harm and social injustice. AI governance will also play a critical role in helping organizations comply with regulations and laws. Here's a summary of what to expect in the United States, Europe, and Canada.

While there’s no comprehensive federal legislation on AI in the United States, municipal and state laws are zeroing in on how AI systems process personal data. For example, employers in New York City can’t use automated employment decision tools that rely on AI to screen job candidates unless the tools are audited for bias. Companies also must notify candidates if a tool is used in the hiring decision process. Colorado now requires insurance companies to prove that their AI algorithms and predicate models do not result in unfair discrimination in automated underwriting. The U.S. Chamber of Commerce provides a state-by-state artificial intelligence legislation tracker.

In Europe, the General Data Protection Regulation (GDPR) mandates that organizations not use algorithmic systems to make significant decisions affecting legal rights without human supervision, and that individuals are entitled to meaningful information about the logic an algorithmic system uses. The European Union proposed the Artificial Intelligence Act (AIA) on April 21, 2021, and the European Parliament approved the proposal in 2023. The Act creates three categories of AI systems (limited risk, high risk, and unacceptable risk) and sets different requirements for AI systems depending on which risk category they belong to.

In Canada, the proposed Artificial Intelligence and Data Act (AIDA) is designed to help ensure that AI systems are safe and non-discriminatory. The Act would hold businesses accountable for how they develop and use AI.

What Your AI Governance Strategy Should Cover

Increasingly, your organization will be under pressure from customers to demonstrate that it uses AI responsibly. Your efforts will need to increase and mature to accommodate AI regulations that are poised to expand in the not-too-distant future. It’s wise to prepare now. Here are a few steps to help you get ready.

  • Assign responsibility for AI governance. This could be your Chief Privacy Officer or a dedicated AI Governance Officer.
  • Understand AI use across your organization, including third-party solutions.
  • Evaluate how your organization’s use of AI affects people, including its impact on employment, privacy, and racial equity, among other concerns.
  • Define and document prohibited use cases.
  • Develop codes of conduct and ethical guidelines for data engineers, data scientists, data analysts, and front-line workers.
  • Make sure your company is complying with AI regulations wherever it conducts business.
  • Audit AI algorithms and models for bias, especially when they are used in areas that could result in racial and economic inequities (a minimal example of such a check appears after this list).
  • Create key performance indicators (KPIs) to measure success, focusing on metrics that matter for bias, discrimination, fairness, and explainability.
  • Continuously monitor progress and take corrective actions as required.
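To make the bias-audit and KPI items above a little more concrete, here is a minimal Python sketch that computes two commonly used group-fairness measures from a model's predictions: the demographic parity difference and the disparate impact ratio. The column names, the toy data, and the 80% rule-of-thumb threshold are illustrative assumptions for this sketch, not part of any specific product, law, or standard.

```python
# Minimal sketch of a group-fairness check on a binary classifier's output.
# Column names ("group", "predicted_positive") and the 80% disparate-impact
# threshold are illustrative assumptions, not a prescribed standard.
import pandas as pd

def fairness_report(df: pd.DataFrame, group_col: str = "group",
                    pred_col: str = "predicted_positive") -> dict:
    """Compare positive-prediction (selection) rates across groups."""
    rates = df.groupby(group_col)[pred_col].mean()      # selection rate per group
    parity_difference = rates.max() - rates.min()       # demographic parity difference
    impact_ratio = rates.min() / rates.max()            # disparate impact ratio
    return {
        "selection_rates": rates.to_dict(),
        "demographic_parity_difference": parity_difference,
        "disparate_impact_ratio": impact_ratio,
        "passes_80_percent_rule": impact_ratio >= 0.8,   # rule of thumb, not a legal test
    }

if __name__ == "__main__":
    # Toy data standing in for an audited model's decisions on two applicant groups.
    sample = pd.DataFrame({
        "group": ["A"] * 100 + ["B"] * 100,
        "predicted_positive": [1] * 60 + [0] * 40 + [1] * 45 + [0] * 55,
    })
    print(fairness_report(sample))
```

Measures like these can feed directly into the KPIs mentioned above: a governance team would typically track them per model and per release, and trigger a deeper review whenever a chosen threshold is crossed.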

About Actian

Customers trust Actian for their AI needs because we provide more than just a platform. We help organizations make confident, data-driven decisions for their mission-critical needs while accelerating business growth. Using the Actian Data Platform, companies can easily connect, manage, and analyze their data.

About Teresa Wingfield

Teresa Wingfield is Director of Product Marketing at Actian where she is responsible for communicating the unique value that the Actian Data Platform delivers, including proven data integration, data management and data analytics. She brings a 20-year track record of increasing revenue and awareness for analytics, security, and cloud solutions. Prior to Actian, Teresa managed product marketing at industry-leading companies such as Cisco, McAfee, and VMware.