With the explosive growth of AI, how is the municipal market harnessing its capabilities to provide more transparency, efficiency, and growth opportunities?
Transcription:
Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.
Abhishek Lodha (00:08):
All right, everyone, we're going to kick off with the next panel. I know what you're all thinking: another AI panel. At this point, Lynn, I think we're going to talk about giving out CPE credits for AI endurance at the Bond Buyer, but hopefully, don't worry, we will try to make this worth your time. My name is Abhishek Lodha. I'm Director of FinTech Strategy at Assured Guaranty. I still don't know what FinTech strategy means, but I like it in my title. A lot of my career has been around bridging the gap between technology and public finance and how we can scale and make our business more efficient—whether it be credit research, trading, portfolio management, and now insurance. I also work closely with the MSRB, GASB, and NFMA on the implications of technology and its shifting landscape on our market. Exciting times. We already heard a bunch of interesting insights from the previous panel, so we'll try to one-up that. I'm going to kick off by saying obviously AI isn't a magic bullet that improves or fixes bad processes. Our goal today is to demystify AI, break it down, focus on real problems, and get to a practical playbook or guidance around how institutions across the spectrum can onboard AI or technology within their firms. So, why don't we kick it off with introductions. We have Matt and Anton here. Matt, why don't you go first?
Matthew Smith (01:49):
Sure. Matthew Smith. I used to trade muni bonds algorithmically in the secondary markets, and I left that job in 2022 to make it easier for other people to do that. My company, Spline Data, creates yield curves and a pricing model that tries to predict the bid and offer of individual muni bonds for different sizes. The idea behind it is people can just plug and play, build on top of that, allow us to handle the grunt work that comes with managing a pricing model, and allow them to focus on what really exaggerates their edge or their alpha in the market. More recently, we co-opted that model to try and predict where new issues come to market—predicting scales. You can go in there, tell it what your deal is, and the idea is that it's an independent second set of eyes that looks at as much data as possible to guide you in the right direction and ideally multiply the revenue you're making, either on the underwriting side or on the advisory side, and make everybody's jobs a little easier.
(03:04):
Anton?
Anton Aboukhalil (03:05):
Thanks, Abhishek. Thanks, Matt. I'm Anton Aboukhalil. I am on week nine with Assured Guaranty and quite excited to be here today. Before that, I was at TIAA-CREF, a large financial institution, driving the AI-first transformation over there. I was leading the AI Center of Excellence and guiding that overall journey. Before that, I was at Liberty Mutual where I led teams to drive forward an AI-based transformation in the insurance lifecycle. And before that, I worked with Abhishek six years ago, and before that I was at Fidelity, again in the early days of electronic trading in the muni sector. I'm just quite excited to be here today. Thank you.
Matthew Smith (03:54):
We all used to trade against each other.
Abhishek Lodha (03:59):
Our models were pitted against each other, so now we're finally here talking about them. So let's start with the topic of unpacking, demystifying AI. The term gets thrown around like this one singular magic tool, but you have AI, GenAI, Machine Learning, NLP. Just break it down for the audience. What are all the different aspects of Artificial Intelligence—key buckets? And please use plain English if you can.
Matthew Smith (04:30):
I'll start. One thing that I've said to everybody that I've talked to in the past three and a half years is that AI is a very generic term for a lot of different things, and a lot of the things you're probably already familiar with. My go-to example is that linear regression technically counts as AI, as long as you're solving it with a computer and not on paper. I focus on Machine Learning—our company focuses on Machine Learning—which is the more numbers-oriented side of AI. Machine Learning has been around since the eighties, maybe even earlier. I think a long time ago I saw a mention of a neural network as early as the fifties, but as the last panel mentioned, the cost of compute has gone down significantly. It's a lot more economical to solve these problems with computers.
(05:35):
Machine Learning has grown substantially over the last few years and it continues to do so. Machine Learning drives pretty much everything on the backend of AI as you traditionally think about it. The ChatGPTs and the Geminis of the world are all probabilistic models to some extent; it just depends on whether you're predicting words or numbers. At the end of the day, the breakdown of the problem is pretty straightforward: you define your problem, you define what success is, and then you figure out a formula to solve it. You could throw it in a computer and have it iteratively try to find the best answer for you. Simple-ish terms for Machine Learning.
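Matt's framing—define the problem, define success, then let the computer iterate toward the best answer—can be sketched in a few lines of Python. Everything below (the toy data, the learning rate, the line being fit) is purely illustrative, not anything from the panel:

```python
# Sketch of the loop Matt describes: define the problem (fit a line),
# define success (low squared error), and iterate toward it.
# All numbers here are made up for illustration.

def fit_line(xs, ys, steps=5000, lr=0.01):
    """Fit y = a*x + b by gradient descent on mean squared error."""
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of the mean squared error with respect to a and b
        grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

# Toy data lying exactly on y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
a, b = fit_line(xs, ys)  # converges close to a=2.0, b=1.0
```

The same loop, scaled up to billions of parameters and a next-word objective instead of a line, is essentially what sits behind the ChatGPTs and Geminis Matt mentions.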
Anton Aboukhalil (06:37):
I would say you touched on most aspects. I would add that the definition of AI has evolved from the 1950s into the seventies and eighties where you had these expert systems, which were bunches of if-else statements: "if this, do that; if this other condition, do that." Nowadays, we don't consider this artificial intelligence; it's more Robotic Process Automation (RPA), which still has value. Again, AI is not the solution to all problems. It's one tool out of a panoply of potential tools that can help you address a problem, and RPA could still be one tool to help solve your problems.
Matthew Smith (07:24):
What'd you say? RPA?
Anton Aboukhalil (07:25):
Robotic Process Automation.
Matthew Smith (07:28):
No acronyms on this panel.
Abhishek Lodha (07:31):
We'll try to create a glossary after this and send it out in emails. I think that's a great idea, but help me understand that better: how is Machine Learning different from LLMs that everyone talks about nowadays?
Matthew Smith (07:44):
It's not really different; it's the application. To say "Machine Learning" suggests that there's a singular model, that there's one formula dictating everything, but that's usually not the case. The way that life and technology and legal documents evolve is they get incrementally more complicated over time. The way we use Machine Learning in our complicated way is that we have multiple models. Let's say we have one model that predicts the price of a muni based off the coupon—a linear regression. Then we have another model that predicts price based off the maturity. We can combine those together in a stack of models where the combination of the two is better than any one individual model. On the GenAI side, LLMs stack models to try and predict the next word in the sequence. When you type a question into ChatGPT, it's finding the probabilistically most likely answer based on the output of a bunch of different models. There are super high-level models that supervise the rest, and super low-level models that try to predict probabilistically what comes after the word "and" or "a." What do you think, Anton?
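The stacking idea Matt describes—one model predicting price from coupon, another from maturity, and a combiner on top—can be sketched like this. The bond numbers and the combiner choice are made-up illustrations, not real muni data:

```python
# Illustrative model stack: two weak single-feature models plus a
# combiner fit on their outputs. All figures below are synthetic.

def fit_simple(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Synthetic bonds: (coupon %, maturity in years) -> price
coupons    = [2.0, 3.0, 4.0, 5.0, 3.5, 4.5]
maturities = [5.0, 10.0, 7.0, 20.0, 12.0, 15.0]
prices     = [95.0, 98.0, 104.0, 103.0, 99.0, 102.0]

# Level 1: each model sees only one feature
a1, b1 = fit_simple(coupons, prices)
a2, b2 = fit_simple(maturities, prices)
pred1 = [a1 * c + b1 for c in coupons]
pred2 = [a2 * m + b2 for m in maturities]

# Level 2: a combiner regression fit on the averaged level-1 outputs,
# which can only improve on (or match) the raw average in-sample
avg = [(p + q) / 2 for p, q in zip(pred1, pred2)]
w, c0 = fit_simple(avg, prices)
stacked = [w * x + c0 for x in avg]
```

In practice the level-2 combiner is fit on held-out data, so it learns when to trust each underlying model rather than memorizing the training set.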
Anton Aboukhalil (09:28):
There are a lot of potential applications for Machine Learning. Who here still has questions around Machine Learning and its applications? I wouldn't want to belabor the point.
Abhishek Lodha (09:47):
People are shy or they're just experts in Machine Learning.
Anton Aboukhalil (09:52):
One thing from all the media buzz is that there's Generative AI, but there's also the more traditional Machine Learning approaches; both still have their strengths and weaknesses. For pricing models or anything numerical, OpenAI will be pushing you toward ChatGPT and these Large Language Model applications, but when you test them in practice, they're not great at doing arithmetic reliably and they make a lot of mistakes. One can still rely on Machine Learning techniques that have been very solid for the last 30 years. They can still drive value from a forecasting perspective, whether it's rental forecasts for real estate teams, pricing, or building models to predict which bonds are likely to be comparables or which potential clients are likely to buy which specific bond flavors.
Abhishek Lodha (11:11):
You spoke about Machine Learning and its use case around numerical data and predictive analytics. What about GenAI? How is that different? Where are the use cases people should think about?
Anton Aboukhalil (11:24):
When you think about unstructured data, you think about documents, legal documents, and large 300-page official statements. There's a lot of benefit there to help the human analyst get the critical piece of information they need within this large document to help them make an investment decision. That has been a tremendous high-ROI opportunity where one can build tools to enable the credit analyst to sift through large documents, extract the data they need, and also have a conversation. They can use Generative AI to drill down to the answer they need. We've seen that deployed for a team of 25 muni analysts, and they've seen at least a 25 to 30% efficiency gain. The human is liberated from sifting through files; they have the information at their fingertips and can rely on their higher-order cognitive skills to make investment decisions that the computer is still not capable of.
Abhishek Lodha (12:54):
Got it. Letting research analysts be research analysts and not data wranglers. Exactly. On that same note, and obviously we saw the previous panel talk a lot about different applications of AI within their tools on the primary market and trading side of things—Matt, where else are you seeing Machine Learning, AI, or GenAI being used in our market in public finance today?
Matthew Smith (13:21):
For GenAI specifically, most of it revolves around data extraction—whether that be from source data, summarizing what you already have, or finding more clever ways to interact with software in natural language rather than having to click things. The beautiful thing about GenAI is it can interpret meaning to some extent. Prior to GenAI, the predecessor was OCR (Optical Character Recognition): a model's ability to look at a scanned official statement and pull the words out into machine-readable text. That was huge because then you could start assigning numbers or probabilities to words. But if you throw a 120-page OS into OCR and get a text block, you're still looking at a 120-page OS.
(14:31):
I've read a lot of OSs in my life—not because I wanted to, but because I had to. In the simplest implementation, one of my guys, Sav, does it all the time. If he wants to figure out what's in an OS, he'll literally just drag and drop it into ChatGPT and ask questions. That evolution of OCR has also allowed the reverse to happen. When you do that, you're extracting information out of the OS and summarizing it, but you can also extract the raw details. We use GenAI in a limited capacity, but one thing we use it for is to programmatically extract data from the OS. We drop it in and tell the LLM, "Hey, we want this information in this format," and it'll return the most likely answer. It's not perfect, but it's better than me reading and writing all this stuff down. As it exists right now, it is very useful for data extraction and summarization—really just a time-saving tool.
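The extraction pattern Matt describes—drop the document in and tell the LLM the exact format you want back—can be sketched as a prompt builder plus a response validator. The schema fields, the mocked reply, and the function names below are all hypothetical; a real system would send the prompt to an LLM API and still validate whatever comes back:

```python
import json

# Hypothetical schema: the fields we'd want pulled out of an OS.
SCHEMA = {"issuer": "string", "par_amount": "number", "dated_date": "YYYY-MM-DD"}

def build_extraction_prompt(document_text, schema):
    """Ask for exactly the fields we want, in a machine-readable format."""
    return (
        "Extract the following fields from the official statement below.\n"
        f"Respond with JSON only, matching this schema: {json.dumps(schema)}\n\n"
        f"---\n{document_text}\n---"
    )

def parse_llm_response(raw):
    """Validate the model's reply; refuse anything missing a field."""
    data = json.loads(raw)
    missing = [k for k in SCHEMA if k not in data]
    if missing:
        raise ValueError(f"LLM response missing fields: {missing}")
    return data

# Mocked round trip (a real system would call an LLM API here)
prompt = build_extraction_prompt("(120-page OS text here)", SCHEMA)
mock_reply = ('{"issuer": "Example City", "par_amount": 50000000, '
              '"dated_date": "2025-01-15"}')
record = parse_llm_response(mock_reply)
```

The validation step matters because, as Matt says, "it's not perfect": the model returns the most likely answer, so the code around it has to catch the cases where that answer is malformed.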
Abhishek Lodha (15:46):
I'm curious, what's not a good case for GenAI or Machine Learning in our market? Is there something that comes to mind?
Matthew Smith (15:56):
For GenAI in particular, it's numbers. Anton said it: GenAI is just not very good at repeatable math. It can solve the math benchmarks they trained it on, but at the end of the day, it's not good for judgment. Neither Machine Learning nor GenAI is a substitute for human judgment. All of these ML models we use for our company are totally useless if I didn't know about munis, because we'd be throwing spaghetti at the wall. There's a lot of effort that goes into extracting intuitively what is important in pricing a muni bond. Everybody could guess coupon, but how has California been trading relative to the rest of the market for the last 180 days? And how do I put that in a way that a computer can understand?
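Matt's example—"how has California been trading relative to the rest of the market for the last 180 days?"—is a feature-engineering problem: turning the question into a single number a model can consume. A minimal sketch, with made-up yields and a shortened window:

```python
# Sketch of encoding "how has CA traded vs. the market lately" as one
# feature: the trailing average of the daily yield spread between a
# CA index and a market-wide index. All yields below are fabricated.

def trailing_spread(ca_yields, market_yields, window):
    """Average (CA - market) yield spread over the trailing `window` days."""
    diffs = [c - m for c, m in zip(ca_yields, market_yields)]
    recent = diffs[-window:]
    return sum(recent) / len(recent)

ca     = [3.10, 3.12, 3.15, 3.20, 3.18]   # hypothetical CA index yields, %
market = [3.00, 3.01, 3.02, 3.05, 3.03]   # hypothetical market index yields, %
feature = trailing_spread(ca, market, window=3)
```

In a real model this would be one column among many, alongside coupon, maturity, and the rest; the point is that domain intuition decides which spreads and windows are worth computing at all.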
Abhishek Lodha (17:02):
Actually, let's unpack that a little bit more. So Anton, in your view, regarding the role of subject matter experts (SMEs) when it comes to the deployment of AI or ML technologies within firms—how do you think about the man, the machine, and the biases?
Anton Aboukhalil (17:18):
Sure. A few years ago, I was working on a muni research effort between MIT and a large insurance company. They had advanced researchers work on muni analytics for two years, then went through a multi-hour presentation on the results they'd obtained. But they had worked in a silo, without a muni SME in the partnership. As a result, close to six figures were wasted for lack of a subject matter expert.
(18:26):
Just having Machine Learning or AI work by itself without that complex and nuanced business context will lead to simply unusable results. Lesson learned: the human is critical to take a generic off-the-shelf Large Language Model and craft a product that drives a positive ROI in your specific process. You can take the off-the-shelf product, but then you need product engineering as well as human refinement to ensure that the tip of the spear is as sharp as possible.
Abhishek Lodha (19:27):
We've had conversations today with a bunch of people about this topic. A lot of firms end up taking an off-the-shelf solution and implementing it in a pilot or beta format, getting some early success, only to realize later that it doesn't have wider application. That's because they don't have a tight feedback loop with the subject matter experts. You're not going to trust your intern on every data point they collect or analysis they do, nor your junior analyst. Why would you trust AI to do that without prompting and fine-tuning?
(20:11):
Actually, that brings up—you mentioned ROI. Let's talk about the stakeholders in the room. You've got issuers, bankers, underwriters, FAs, asset managers; everyone has different use cases in their workflows. Broad strokes: how do firms think about going from pilot experiments to actually building live, ready-to-deploy code within their companies when it comes to AI or even generic technology? How do they balance resources between a startup that lacks resources and a larger firm that has significant red tape?
Matthew Smith (21:08):
I think the more resource-constrained you are, the better, honestly. If you're looking at a company with four people, how hard is it for one person to try something out and see if it adds value? I think where you run into real problems—and what happens inevitably—is that large companies often fall behind. Speaking of AI, Google's trying to listen in. What happens to large companies is they have large visions, which is not necessarily bad, but it can definitely slow down the process. Their first instinct is, "Oh, we need a buyer committee and we need to evaluate everything that's out there," and they go through all these approval processes and budget a certain amount when they don't even know what problem they're solving yet.
(22:05):
One thing we've seen from our customers is that we'll have a product champion—usually the most junior person on the desk—reach out to us on their own time. They're curious and ask for sample data. 80% of the time that bubbles up into a pet project. They build a pseudo-algo or trading strategy and show it to their boss. The boss says, "Well, actually there's some value there," and they push that forward. Eventually, the tech infrastructure needed to support buying and using data from us grows as the data becomes more integrated into their processes. If anybody's asking for my advice, it's that you should try to foster an environment where people are free and curious to try these things and that they can be stakeholders in the resulting projects. To me, it's like an anthill. All the ants go out and look for sugar, and when they find it, all the ants go in that direction.
Anton Aboukhalil (23:38):
On my side, regarding larger organizations, red tape is a challenge, but I believe starting at the top with a top-down change management strategy has worked well in the past. It's making sure that there's buy-in from the upper echelons of leadership because you're a large ship, and if the captain is not directing the ship toward a destination where AI is part of the ultimate goal, you might never get there. I would say meeting with business verticals to identify the major strategic initiatives and pain points—what are the top-of-mind business priorities? Then try to identify which ones can be solved with AI and which ones won't. Focusing on the ones that ultimately drive revenue or high ROI is important to justify the future investment.
(25:11):
It's important to have funding. Often, the CEO says, "We'd like to innovate," but not having specific AI funding will lead the effort to fizzle. In parallel, make sure you have the right governance and processes in place to mitigate the risks that AI presents. I don't know if you've heard about Samsung: when ChatGPT came out, employees copy-pasted trade secrets there, so it memorized them and made them available to the internet. Air Canada built a chatbot that started going rogue and giving significant discounts to clients; it ended up in court. Hundreds of lawyers have been fined because they used ChatGPT and it hallucinated cases that never existed. There are a lot of landmines. Finally, I would say educating and upskilling the workforce is a journey for the entire enterprise. It's important for almost everybody to be part of that journey: to be educated on what AI is, the risks associated with it, and how they can be AI-aware, AI-ready, and AI-powered.
Matthew Smith (27:52):
You hit a couple of interesting points. I'm more "bottom-up," you're more "top-down," but I think they are—to use your analogy—the bow and the stern of the same ship. The most important point you made is tech literacy in general. If the top is tech-literate, the bottom is tech-literate, and vice versa. It's really important to keep up with what's going on in the world. People are generally befuddled by Python, but at the very least it's pretty easy to learn about now. I liken it to VBA. At first, Excel was this new tech. Then VBA came along and some people figured out how to automate their spreadsheets. Then Python comes along and the VBA people move to Python. But there's still this crowd that never changes with the world in figuring out these tools. They are tools; they're not life problem solvers. The people who keep up with the best tools are going to have the biggest gains from an efficiency standpoint.
Abhishek Lodha (29:36):
I'll try to sum up some of the points. It's not AI-first; it's problem-first. You focus on articulating what the problem is, what the business needs are, and you try to quantify the ROI. But as you're doing that, you're also working under a hypothesis. Is AI the right solution or not? That is an experiment by itself. You need a top-down cultural change where people believe this is a capital investment—a business transformation that is ongoing. That downstream enables power users at the grassroots level to find a sandbox where they can experiment, be wrong, improve upon it, and figure out what works for their enterprise. That's how you start with a small budget, build your value prop, and scale it over time.
(31:11):
How do you stay tech-literate in general? You mentioned you need to keep up; we heard that in the last panel as well. The landscape changes very quickly and we all have our day jobs to do. So, how do you keep up?
Matthew Smith (31:16):
The easiest way is to find a colleague who is tech-literate and talk to them about what they're doing. Most people are in a position where there's "the tech guy." I did it in my old job—I'd give a project I didn't want to figure out to the tech guy. But it's a great learning opportunity to sit through and say, "How would you solve this? Walk me through your steps, your process." Also, reading the news and trying to understand what you're reading. If you don't understand, research it if it's interesting to you. I want to hit on problem framing if I can. I don't have a tech background. I went to school for finance at a state school and learned how to code after the fact. I'm not particularly bright; I just did it often enough to where it stuck.
(32:34):
One of the most catalytic events in my career was not really learning how to code; it was learning how to frame problems. I learned that by working with quant researchers. Everything they think about is numeric. You can be working on the most qualitative problem and they will figure out a way to break it down into numbers, probabilities, and envision an outcome. My old job instilled this into me: I think about everything in terms of expected value. Whether it's a business decision or a piece of code, you come up with a value if it succeeds and the probability that it succeeds, and multiply them together to get your expected value. That's how you quantify decision making. Once you figure out what your problem is and you can break it down, then you go searching for the answer. You might end up at GenAI, or Python, or Machine Learning. But if you don't know what problem you're solving, how is anything going to help you? It seems obvious, but a lot of people think about the solution before the actual problem.
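Matt's expected-value framing reduces to one multiplication: the value if it succeeds times the probability that it succeeds. A toy comparison, with entirely hypothetical numbers:

```python
# Expected value as Matt describes it: payoff times probability.
# The two "projects" and their numbers are hypothetical.

def expected_value(payoff_if_success, prob_success):
    return payoff_if_success * prob_success

ev_big_bet  = expected_value(1_000_000, 0.10)  # long shot: EV ~ 100,000
ev_safe_bet = expected_value(150_000, 0.90)    # near-certain: EV ~ 135,000
best = max(ev_big_bet, ev_safe_bet)            # the safer project wins on EV
```

The arithmetic is trivial; the hard part, as Matt notes, is the framing—estimating the payoff and the probability honestly for a qualitative problem in the first place.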
Anton Aboukhalil (34:04):
From a tech literacy perspective, I know a couple of CIOs who dedicate one hour every single week, despite their busy schedules, solely to upskill and learn, for example, AI. There is tons of material out there. Amazon, Google, and other large tech companies have made amazing material very accessible. There are certifications you can do. The rate of progress within AI over the past couple of years has been unmatched in the history of human technological innovation. It's difficult to follow every single thing, but if you focus on high-level core capabilities and concepts, there is plenty of online material.
Abhishek Lodha (35:30):
That's a great point. I know we have only a few minutes left. I did want to underline something that Matt said about problem framing. When I started my career in munis, I was a mechanical engineer with no finance background and didn't understand muni credit research. My first mentor told me: "Munis is like organic chemistry; there's always an exception around the corner." Now that I think about it, it's more like chaos theory. There is a pattern as long as you can define it correctly. It's not always going to work, but if it works for a significant part of your workflow, that's still efficiency and value. Let's come back to people, upskilling, and career evolution. The statement "AI is going to take away jobs"—I personally don't believe in it, but how do you guys think about it? Role shifting, skill shifting—how should people in our market think about it?
Anton Aboukhalil (36:38):
Sure. Who here has seen the movie Hidden Figures? It's a story about the early space program with NASA, and they actually had people called "computers." To calculate an orbit, you would go to a room with a team of 30 human computers and give them the problem. That was in the forties, fifties, and sixties. With the advent of electronic computers, of course, those jobs were eliminated, but the people who were the actual computers saw their jobs change. They became engineers, mathematicians, and designers. Throughout history, technological advancement has shifted the human from an operator to more of a supervisor.
(38:02):
That has also happened with autopilots. In 1947, airplanes were taking off from Canada and crossing the Atlantic with no human intervention. You can imagine pilots were panicking, thinking their jobs would be gone. But 80 years later, there are more airplanes and pilots than ever, but every single pilot knows how to use the autopilot. I see AI as an assistant that will keep enhancing our capabilities. It will take away the boring, repetitive manual processes so we can focus on the more complex parts of the job, like the takeoffs, landings, and emergencies.
Matthew Smith (39:05):
The only thing I would add is that I lose my job every day. A piece of my job goes away and I'm super thankful for it. If I have to think about sales tax in some random state one more time, I'm going to lose my mind. My career still exists because I keep pursuing new jobs with that time I save. I don't have to do sales tax anymore? Great. I can work on this new model I've been thinking about, or try to take care of the backlog. Maybe that becomes my new job. If you lose your old job, you can find a new one. It doesn't have to be a 100% about-face, just a different way of doing a job.
Abhishek Lodha (40:09):
We're almost at time. I'm going to open it up for any questions.
Audience Member 1 (40:21):
You touched on this a little bit, but I want to make it more narrow: the human element. Bringing it back to what Anton talked about with the MIT example—do you think the human element is a necessary component in AI tools for all markets, or especially for the muni space? How important is that human element in all of this?
Matthew Smith (40:53):
I would say very. My inspiration for starting this company was the corporate bond market. In the corporate bond market, there are some very accurate models. In my old job, I helped build some of those, yet we still weren't making any money. If we can predict the bid and offer, why weren't we making any money? Well, it boils down to: you might be able to perfectly predict where things trade, but you have to figure out which of those are profitable. You only have a certain amount of capital. Capital is a great analogy for either your time or your internal resources. You have to figure out where to allocate capital within this perfectly predicted universe, and you have to be able to figure out where your predictions are wrong.
Anton Aboukhalil (41:59):
It depends on the problem. In Munis, again, it's highly nuanced and very complex. There are always those exceptions Abhishek was mentioning. Having the human in the loop is critical.
Matthew Smith (42:14):
To put that in credit terms: if you can perfectly predict a downgrade, what does that mean for price? "Great, this is going to get downgraded tomorrow." Someone still has to interpret that information.
Abhishek Lodha (42:27):
To recap everything, the one takeaway I've picked up is: start small, stay curious, forget the buzzwords, and solve real problems. I'm going to end with a lightning round. One habit leaders should start tomorrow to build their companies to be more tech-ready?
Matthew Smith (42:55):
It seems like a hacky response, but I go back to tech literacy. Genuinely spend time trying to figure out how code works. You don't have to learn how to code; just start by learning how computers work. It seems trivial, but that's how you start thinking in terms of binary, logic, and how you get to the answers with the technology you have.
Anton Aboukhalil (43:23):
Tech literacy is key for leaders as they look at the horizon for major disruptive forces. We've seen it with electricity, we've seen it with the internet, and we're seeing it now with AI. In the past, there were companies that decided not to use electricity or the internet—we've seen where those companies are today, like Blockbuster. As leaders, it's important to keep an eye out on these major forces and ensure you bring your organization along that educational journey.
Matthew Smith (44:26):
And don't forget to hire—or don't discount hiring—a programmer, even if you think you don't need one. If you hire a programmer and they're bored, they will automate everything you're doing already. They'll figure it out because that's what they do. There's very little downside to hiring someone with tech literacy if that's the avenue you want to take.
Abhishek Lodha (44:45):
A lot of this also answers my second question, which was a simple way to level up as a non-technical person this year—which is tech literacy again. I'm going to stop there and not keep you from your break. Thank you so much for sticking around. I still see eyes open, so that's great. Let's caffeinate up. Thank you.
Technology Tools Amid AI Expansion
Published November 4, 2025 2:40 PM | Updated February 6, 2026 12:12 PM