Artificial intelligence (AI) isn’t slowing down, and for many financial institutions, that’s exactly the challenge. The pressure to act is real, but so is the risk of getting it wrong. Many financial institutions have started piloting AI solutions at various levels of their organizations, while others have taken a more conservative approach, preferring to wait and see how their peers’ results play out. Regardless, to move forward, leaders need more than ideas. They need a clear, defensible way to operationalize AI — a strategy that balances innovation with control.
We sat down with Kim Reed, Senior Director of Risk at Alkami, to unpack what she’s seeing across the industry and how banks and credit unions can take a structured, confident approach to AI in banking.
Molly Irelan: Kim, you talk to a lot of banking leaders who are either evaluating Alkami or are actively navigating their AI journey. When financial institutions ask you about AI, what are they really trying to solve?
Kim Reed: When financial institutions ask about AI, it usually comes down to three core pressures, and they tend to show up all at once. The first is how to better serve account holders. Teams are looking for ways to make interactions more relevant, immediate, and helpful. That’s where conversations around virtual assistants, personalization, always-on fraud detection, and proactive engagement come in. There’s a clear desire to compete on experience.
The second is capacity. Most financial institutions aren’t sitting on excess resources. They’re trying to do more with the teams they have. So the question becomes: how do we reduce time spent on repetitive or manual work and shift that effort toward higher-value activities? AI starts to unlock that, especially in areas like customer support, back-office processing, and internal workflows.
The third, and often most important, is confidence. Leaders are asking: How do we do this in a way that’s safe? Defensible? Aligned with expectations from regulators, our board, and our own policies? This goes beyond what AI can do and starts to lay the foundation for how it will be used across the organization.
Molly Irelan: Where does the hype around AI fall short compared to what teams actually need?
Molly Irelan: What’s a common misconception about AI that slows organizations down?
Molly Irelan: Where should financial institutions actually begin if they’re just getting started with AI in banking?
Kim Reed: Start by getting both control and operational perspectives in the room. Risk, compliance, and legal are thinking about protection. Product, technology, and business teams are thinking about opportunity. You need both to map out a shared list of considerations and priorities.
From there, lean on existing frameworks. In February, the U.S. Department of the Treasury released guidance that gives a common language for risk, controls, and outcomes. Frameworks like these also create a level of defensibility if you’re ever challenged on your approach.
Finally, put a policy and responsible use guide in place to define clear expectations around data usage, access, and oversight. This doesn’t need to be overly complex, but it needs to exist and align with existing data classifications and standards. Without it, every decision becomes harder than it should be. With it, you give employees clarity about what’s approved and what’s not, while still encouraging them to be innovative in their use of AI solutions.
Molly Irelan: What does a strong AI adoption playbook look like in practice?
Kim Reed: It starts with governance. This is your foundation. Define how decisions are made, who owns what, and how risks are evaluated.
From there, every use case should tie to a specific business outcome. Are you reducing fraud losses? Improving response times? Increasing engagement? If you can’t define success, you won’t know whether AI is working or be able to measure the return on your investment.
Then comes training. This is where many teams struggle. You need trained users who understand both what AI can do and where it can fail. Without that, you introduce inconsistency and risk. Focusing on pre-determined use cases also lets you capture important lessons learned and share that knowledge across the organization as AI use expands to more teams.
Finally, plan for ongoing monitoring. AI systems change over time. New tech emerges. Data changes. Outputs evolve. You need processes in place to validate results, track performance, and adjust when needed. Like any piece of tech, this isn’t a one-time implementation; it should be an ongoing operational capability that creates efficiency gains for the organization.
Molly Irelan: How should bank and credit union teams prioritize their first AI use cases?
Kim Reed: The best starting point is where the impact is tangible, data is understood, and controls are manageable. That’s why many financial institutions begin with internal use cases — things like document review or internal communications — where the risk is lower and outcomes are easier to measure. Where teams sometimes get into trouble is jumping in too quickly without the structure to support them.
Molly Irelan: What differentiates a strong AI use case from a risky one?
Molly Irelan: How should leaders think about AI governance without slowing innovation?
Kim Reed: Start with risk-based tiering. Not every use case needs the same level of scrutiny. A low-risk productivity tool shouldn’t go through the same process as a high-risk decisioning system.
Then standardize the intake process. Create templates, questionnaires, and clear workflows so teams aren’t starting from scratch every time. This creates consistency and reduces the burden on governance teams over time.
Finally, establish clear decision rights. Teams should know what they can approve independently and what needs broader review. Similar to the risk-based tiering approach, teams should be able to tell what level of scrutiny a new AI tool warrants based on its characteristics, such as the dataset it has access to and whether it’s an open or closed-loop system.
Molly Irelan: What are the biggest compliance concerns right now when it comes to AI?
Kim Reed: The first is data usage and privacy. Financial institutions want to know exactly where their data is going and how it’s being used. Is it staying within controlled environments? Is it being used to train external models?
The second is explainability. If a regulator asks how a decision was made, can you clearly show the process, from design to deployment to monitoring?
The third is third-party risk. It’s not only about your vendors but also their dependencies on third parties. Are you aware of how AI is being used across that ecosystem? Are you familiar with how your contracts were established? Did you contract through your vendor or directly with their partners? Understanding these aspects is paramount to ensuring the tools you add to your tech stack adhere to the risk framework you’ve established.
Molly Irelan: How can financial institutions move forward with confidence while regulators are still catching up?
Molly Irelan: What does responsible AI look like in day-to-day operations?
Molly Irelan: What’s the real cost of waiting to pilot AI?
Molly Irelan: What would you say to teams that feel behind in their AI journey?
AI doesn’t create advantages on its own. Instead, it succeeds or stalls based on how well it’s operationalized. Leading financial institutions are building a repeatable way to evaluate, deploy, and manage AI — grounded in governance, aligned to real use cases, and supported by clear accountability. However, there’s one piece that underpins all of it: data. Without a strong data foundation that is unified and accessible, AI becomes inconsistent, difficult to scale, and harder to trust. With it, AI becomes something entirely different: a new mechanism in your organization’s growth engine, turning everyday interactions into insight, and insight into action.
