Feeling behind on AI? An AI Governance Team puts you a step ahead

ChatGPT has triggered a gold rush of companies planning some implementation of a large language model (LLM) to transform their business. The driver of many of these initiatives is “We have to (fill in the blanks) with this, or we’ll be left behind.”

Sound familiar? Consider this:

  • At the VentureBeat Transform conference a few years ago, Chris Chapo, SVP of data and analytics at Gap, observed that about 87% of AI projects never make it into production. That’s a huge hit to the bottom line, both in outright expense and opportunity cost.
  • In October 2020, Gartner reported that only 53% of projects make it from prototype to production. Getting an AI project from proof of concept to prototype is a challenge, which may explain the difference between Chris Chapo’s estimate and Gartner’s.
  • A 2022 KDnuggets poll revealed that the majority of data scientists say that only 0 to 20% of models are ever deployed. In other words, most report that 80% or more of their models never reach deployment.

There’s real value being created with generative AI, but too often it’s never released; it just sits on a shelf. So how does a company use AI to get ahead of the competition without getting tripped up by a failed implementation?

In my experience, the main drivers of failure are a lack of coordination and collaboration among executive management, product development, engineering, and data science, and, critically, a lack of access to the necessary data. All of these problems can be mitigated by forming an AI Governance Team and involving it at the ideation stage and at project milestones.

An AI Governance Team keeps initiatives on track

An AI Governance Team is a group of stakeholders drawn from across the company’s departments, with a mandate to vet projects after they advance beyond the ideation phase but before they are greenlighted for investment in a prototype, and to reexamine them at specific milestones to confirm they are worthy of continued investment.

In my experience, the team must include representatives from product, data science, engineering, legal, security, finance, and, of course, marketing and sales. Bring in any one of them after work starts on a project, and there’s a good chance the result will be the project’s delay or cancellation.

Product & Design: In most organizations I’ve worked with, this group “owns” the roadmap. They typically have overall responsibility for refining the use case and connecting the effort to real value. A product manager will likely have the responsibility of monitoring progress from ideation through deployment.

Data Science: This seems obvious, but I’ve seen instances where the data science team was told what to produce only after the project had been put on the roadmap. In the most egregious case, creating the proposed AI model was simply impossible with the available data, in-house or public. After multiple rounds of “do it” and “we can’t,” the proposal was dropped. Two valuable members of the data science team resigned in the process.

Engineering: I can’t overemphasize the need to carefully examine the engineering requirements before greenlighting an AI project. Almost certainly, the most expensive components of getting a project from ideation to deployment will be meeting the requirements for accessing and preparing data, integrating the models with existing applications, and monitoring and managing model performance.
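To make that last cost concrete, here’s a minimal sketch, in Python, of the kind of post-deployment monitoring engineering will have to build and run. All of the names and thresholds are hypothetical, and a real system adds persistence, alerting, and input-drift checks on top:

```python
# Minimal sketch of post-deployment model monitoring.
# Hypothetical names and thresholds, for illustration only.
from collections import deque


class ModelMonitor:
    """Tracks rolling accuracy of a deployed model against a baseline."""

    def __init__(self, baseline_accuracy: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy      # accuracy at sign-off
        self.tolerance = tolerance             # allowed degradation
        self.outcomes = deque(maxlen=window)   # rolling correctness flags

    def record(self, prediction, actual) -> None:
        """Log one prediction/ground-truth pair as it arrives."""
        self.outcomes.append(prediction == actual)

    def rolling_accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_attention(self) -> bool:
        """True when live accuracy drifts below the agreed threshold."""
        return self.rolling_accuracy() < self.baseline - self.tolerance


if __name__ == "__main__":
    monitor = ModelMonitor(baseline_accuracy=0.90)
    # In production these would come from the live prediction stream.
    for pred, actual in [(1, 1), (0, 1), (1, 1), (0, 0), (1, 0)]:
        monitor.record(pred, actual)
    print(f"rolling accuracy: {monitor.rolling_accuracy():.2f}")
    print(f"needs attention:  {monitor.needs_attention()}")
```

Plumbing like this, not the model itself, is where much of the engineering budget tends to go.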

Legal: At the very least, I recommend including Legal during the initial evaluation phase and as components like ChatGPT or other third-party models are being considered. Some food for thought is this extract from the OpenAI Terms of Use: “You will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of the Services, including your Content, products or services you develop or offer in connection with the Services, and your breach of these Terms or violation of applicable law.”

Security: Their signoff is a prerequisite to deployment. Even the CEO is unlikely to (or simply can’t) override it. 

Finance: AI projects tend to be expensive. Often, once you factor in the engineering costs identified during the Governance Team review, a project is far more costly than when the idea was first proposed. Someone from finance can help model the impact on the bottom line.
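As an illustration, a finance partner might start from a back-of-the-envelope model like the one below. Every figure is a placeholder assumption, not a benchmark:

```python
# Back-of-the-envelope AI project cost model -- every figure here is a
# placeholder assumption for illustration, not a benchmark.

def total_first_year_cost(model_dev: float, data_engineering: float,
                          integration: float, monitoring_per_month: float,
                          overrun_factor: float = 1.3) -> float:
    """One-time build costs plus a year of operations, padded for overruns."""
    build = model_dev + data_engineering + integration
    operations = monitoring_per_month * 12
    return (build + operations) * overrun_factor


# The cost as pitched: model development only.
pitched = total_first_year_cost(model_dev=150_000, data_engineering=0,
                                integration=0, monitoring_per_month=0,
                                overrun_factor=1.0)

# The cost after the Governance Team review surfaces the engineering work.
reviewed = total_first_year_cost(model_dev=150_000, data_engineering=200_000,
                                 integration=120_000, monitoring_per_month=10_000)

print(f"as pitched:   ${pitched:,.0f}")    # $150,000
print(f"after review: ${reviewed:,.0f}")   # $767,000
```

The point isn’t the invented numbers; it’s that the business case should be built on the reviewed figure, not the pitched one.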

Marketing and Sales: After finance has tallied up the anticipated costs and, hopefully, factored in what I’ve found to be inevitable cost overruns, you should have a reasonable estimate of what you’ll need to charge. Will your customers or prospects buy it?

I think a good analogy here is a steel plate with rivets that connects two beams in a long span. If any of the rivets are missing, the span is weakened. Too many missing rivets, and the span can fail catastrophically. Apply this analogy to the formation of the AI Governance Team: each of the team members above is a “rivet” in the span that connects AI ideation to deployment.

Team Moderator

It’s also important, when creating the Team, to add a moderator to the mix for the first few meetings. This participant should have the experience necessary to help members from disparate disciplines understand each other and their roles in the Team. The ideal candidate will have skills and expertise in business/product, engineering, and data science.

This individual may exist in your organization. If not, consider engaging a third party to provide that leadership and objectivity. 

While ChatGPT and other LLMs are the shiny new AI objects, and the subject of overwhelming hype, they are not the only AI game in town. There are many other ways AI tools can be, and are being, used to contribute to corporate success. That makes an AI Governance Team the ideal way to avoid stagnation, the high cost of failed projects, and the legal and security risks that come with poorly deployed AI projects.
