Insight with Foresight Webinar: How do I integrate AI into my business?

In this webinar, John Jarosz chats with data scientist, journalist and member of the Sightglass Network, Gary Biggs, about how to integrate AI into your business practices.

In our Insight with Foresight session, we explored how companies can move beyond AI experimentation and begin embedding it meaningfully into their business. The key: start with business priorities, not the technology itself. We outlined a staged approach—identifying clear, ROI-driven use cases, assessing data maturity, piloting in focused areas, and scaling what works.

True integration requires more than tools. It calls for executive sponsorship, cross-functional collaboration between product, data, and business teams, and a foundation of strong data governance. Just as importantly, it demands a culture shift: one that supports disciplined experimentation and ongoing learning.

AI is not a one-time initiative. It’s a strategic capability that, when aligned to business goals and customer needs, can transform how companies deliver value. At Sightglass, we help organizations bridge vision and execution—ensuring that AI investments create lasting, measurable impact.

 

John: Hi Gary, good morning!

Gary: Good morning! Thanks for having me.

John: Thanks for joining us for Sightglass’s first Insight webinar. It’s great to have you.

Gary: Thank you, it’s great to be here. And it’s really exciting to get to work with you again.

John: Same here. It was such a pleasure working with you at UKG and getting to know you in the San Francisco office. It’s rare to meet a data scientist who has deep expertise not only in AI but also in business management.

Gary: That’s kind of you to say.

John: Honestly, my favorite job title of yours might still be “Chief Wizard.”

Gary: [laughs] That was a good one. I’ve tried to retire that title, but it keeps coming back.

John: Well, you bring a kind of curiosity and experimentation to your work that really fits the name. Before we dive in, do you want to share a little bit about your background for those who don’t know you?

Gary: Sure. I’ve worked in data science for about 15 years, across a range of industries—from HR tech to fintech to healthcare. Most recently, I’ve been leading AI and data efforts at startups, helping them scale responsibly while still pushing the envelope with what AI can do.

John: That’s great. One of the things we were excited to talk with you about today is that balance—how to explore and experiment with AI without getting caught up in the hype. Because right now, it feels like every company is trying to do “something” with AI, but not all of them are asking whether it’s the right thing.

Gary: Exactly. I’ve seen a lot of organizations jump into AI projects without fully understanding the problem they’re trying to solve. There’s this pressure to have an AI story, even if it doesn’t align with their product or customer needs.

John: Right, and it can actually slow things down—or worse, distract teams from more meaningful work.

Gary: Totally. One thing I always tell teams is to start with their core data questions. What do you wish you could know or predict about your customers? What decisions would be better if they were more data-driven? If you can answer those questions, then you can figure out whether AI is the right tool.

John: I love that. It’s very aligned with the way we think at Sightglass—starting with the problem, not the technology. Do you have an example of a team doing this well?

Gary: Sure. One company I worked with was in the healthcare space. They were getting pressure from the board to “add AI,” but they took a step back and looked at their patient data first. They realized they had a major gap in understanding patient no-shows. So instead of building a big flashy AI feature, they started with a basic prediction model—nothing fancy—that helped clinics staff more effectively. That small improvement saved them hundreds of thousands of dollars a year.
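[Editor's note: To make Gary's point concrete, here is a minimal sketch of the kind of "nothing fancy" no-show model he describes—illustrative only, not the model his client built. It assumes a hypothetical appointments file with made-up column names such as lead_time_days and prior_no_shows.]

```python
# Illustrative sketch: a basic patient no-show prediction model.
# File name and column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Historical appointment records, one row per appointment.
appointments = pd.read_csv("appointments.csv")

features = ["lead_time_days", "prior_no_shows", "appointment_hour", "patient_age"]
X = appointments[features]
y = appointments["no_show"]  # 1 if the patient missed the appointment

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# A plain logistic regression is often enough to inform clinic staffing.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Check how well predicted risk ranks the actual no-shows.
probabilities = model.predict_proba(X_test)[:, 1]
print(f"ROC AUC: {roc_auc_score(y_test, probabilities):.2f}")
```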

John: That’s a great example. It’s not about doing the most complex thing—it’s about doing the most useful thing.

Gary: Exactly. And honestly, sometimes the most useful thing isn’t AI at all. It might be a dashboard or a new workflow or a simple rules engine. The key is not to lose sight of the outcome you’re trying to drive.

John: That’s such good advice. We’ll come back to this, but I want to ask you a little more about the hype. We’re seeing terms like “gen AI” thrown around a lot—how do you cut through that noise?

Gary: It’s tough. The language is evolving so quickly, and marketers are racing to label everything as generative AI. I think the most important thing is to stay focused on fundamentals. What’s the model actually doing? Is it generating content? Making predictions? Classifying things? If you understand that, the labels matter less.

John: I like that. It’s a bit like going back to first principles.

Gary: Exactly. And that’s why I think teams need strong product managers in the loop. PMs can help ground the work in customer value. They’re often the ones asking, “So what?”—and that’s the question everyone should be asking right now.

John: Yes! That “so what?” is so important. We talk a lot about that when teams come to us wanting to “explore AI.” We’ll say, “Great—what happens if this works?” And if they don’t have a good answer, it’s probably not worth building yet.

Gary: That’s a great litmus test. You can get a lot of clarity just by pushing on outcomes.

John: Okay, last question before we open it up—what’s one thing you wish more teams knew about working with AI?

Gary: I’d say: don’t underestimate the importance of data quality. Everyone wants to play with models, but if your data is messy or incomplete, the model’s not going to help you. Cleaning and structuring your data might not feel like innovation, but it’s often the highest-leverage thing you can do.
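[Editor's note: As a small illustration of the data clean-up work Gary is pointing to, here is a hedged sketch of basic quality checks with pandas. The file and columns are hypothetical; the point is that this unglamorous step usually comes before any modeling.]

```python
# Illustrative data-quality checks before any modeling (hypothetical dataset).
import pandas as pd

df = pd.read_csv("customer_events.csv")

# Surface the issues that quietly undermine a model later.
print(df.isna().mean().sort_values(ascending=False))              # share of missing values per column
print(df.duplicated(subset=["customer_id", "event_time"]).sum())  # duplicate events
print(df["event_time"].min(), df["event_time"].max())             # date-range sanity check

# Basic fixes: drop exact duplicates, normalize an inconsistent categorical field.
df = df.drop_duplicates()
df["channel"] = df["channel"].str.strip().str.lower()
```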

John: That’s a great note to end on. Thank you so much for joining us, Gary—this was awesome.

Gary: Thanks, John. It’s been a pleasure.
