Interview with Clara Durodie from Cognitive Finance.ai

On this episode of the podcast, we speak with Clara Durodie of Cognitive Finance.ai. She is the author behind “Decoding AI in Financial Services - Business Implications for Boards and Professionals”.


Google Play / Apple Podcasts / Spotify

SPEAKERS

Clara Durodie, Brent Sanders, Mark Percival

 

Brent Sanders  00:06

So, Clara, thank you so much for joining us. We've been following some of your interviews and your work online and wanted to introduce you to our audience. As we were saying right before we started, you came as a referral from Stine Rasmussen; when we talked to Stine, she said, hey, you have to talk to Clara, get her on the show. And so we've done that. So again, thank you for coming on. I want to start by giving you an opportunity to tell us a little bit about your background and where you're coming from.

 

Clara Durodie  00:36

My background is in the financial services industry. More specifically, I worked in asset and wealth management for a good number of years in the front office, and I also served on boards of directors. Then I decided in 2014 that I wanted to get more into studying artificial intelligence applications in financial services. So I resigned from my very comfortable, cushy senior executive position, from a great company with great colleagues, and I spent a year studying what is now an emerging field called neuroeconomics, the intersection of artificial intelligence and neuroscience; in my case the application of interest was financial services. The interest I had in this was very clear: we cannot deliver technology that our users, consumers and clients can trust if we do not understand how to design it in such a way that it builds trust rather than destroys it. In financial services, and I may be old school, I do believe that trust is the founding value of our industry, and if we cannot deliver trust through technology, then we are losing business. So that was my angle. I also studied intelligence, human intelligence and of course artificial intelligence, and then I founded my research, analysis and data science consultancy. Since 2017 I've been advising boards of directors and regulators; I've been "influencing the influencers," as someone put it the other day. That's what I do for a living. I work at the intersection of artificial intelligence deployment, business growth and profitability. And in order to do that, my submission, the thesis of my entire work, is that we need to deliver ethical AI, embedded from the investment stage, investment in startups, all the way through to the implementation stage in real businesses.

 

Brent Sanders  03:11

Yeah, that's great. As a piece of background, I've been dying to talk to folks about ethics as it relates to automation, and it's not too different from AI, right? It's not a different subject altogether. So I'd love to dive into that. When you're working at this strategic, leadership, board level, I'm curious: when you come into an organization, what's the biggest misconception you deal with about artificial intelligence?

 

Clara Durodie  03:44

First of all, I think there is a misconception about deploying automation, and obviously the more advanced forms of AI, in order to cut costs. I would say that's not why we do it. It's important, it's immediate, it's the lowest hanging fruit, but if it's not done for the purpose of actually building on that automation long term, then it becomes just a cost which then has to be covered somehow. So that's the biggest misconception. The other misconception I come across quite frequently is that AI and automation are going to steal our jobs: when are we going to be left without jobs? That's not true. I've been on the public speaking circuit since 2015, when I left my corporate job, and I remember it was 2016 or 2017 and I was invited by Accenture to speak to their clients. Someone put their hand up and said, we're going to be left without jobs, what are we going to do? And I said to that person, we're not going to be left without jobs, new jobs will be invented, and one of them is AI ethicist. That was around 2017, which is not a long time ago in human years, but in artificial intelligence terms it is a long time. People scratched their heads and said, what is an AI ethicist? I also said there will be jobs like algorithm forensic specialists, for instance, someone who can unpack an algorithm and understand how it works. So a lot of new jobs are coming. And here we are, three or four years later, pushed very quickly by the current pandemic into the digital space, and we're discussing AI ethics, we're discussing forensics, we're discussing regulations around AI. These things are happening very quickly.

 

Brent Sanders  05:58

Yeah, yeah. We don't need to jump into current events, but there's an EU artificial intelligence bill that's very recent. I'm curious about any initial reactions or thoughts around how that's coming together and how we're seeing regulation step into the forefront.

 

Clara Durodie  06:18

So I have an overarching theory regarding regulation, and please bear with me: I'm someone who was formed in, and has worked in, a highly regulated space, which is financial services. I have two nuggets of wisdom, I would say, about regulation. The first is that regulators and government authorities need to implement regulation because someone has abused the law, abused the lack of regulation. So everybody, the vast majority of us, has to pay for those people who have abused the system. This is the first piece of, let's say, Clara's wisdom. The second piece is that when something is regulated, if it really has to be regulated, then from a business standpoint you need to look at it as your working space. When you know what you can do, it's actually easier to build a model, to build projections, to build a business model. That's your playground, that's where you are, you can't do more than that. So it actually becomes easier, I would argue, to work in that space. And anybody with half an understanding of what automation means to our industry would have accepted and appreciated that regulation would come; it was imminent.

I started writing my book in 2018 and finally published it in November 2019, so this was before the pandemic. I wrote a chapter on AI governance, and I remember writing about guidance. At that time, two years ago, everything was about AI guidance: how can we deploy AI within this guidance? No one wanted to even touch the words regulation and AI. There's a paragraph in my book where I say guidance is here today, but regulations are imminent. So for business leaders, for anybody who runs a company, whether they build automation or buy automation software, you need to do what is basic common sense, I would say: self-regulate, in the knowledge that regulations are coming. With the clients we worked with, we started helping them build this framework of self-regulation, and we primarily looked into what is the most ethical way to design and deploy automation. How can we do this in a way that is morally upright, while we remain sound as a business and remain profitable? Because we've seen, with climate change and ESG investing, that putting money into that space is actually very profitable. It's good business. So the whole of mankind is now coming around to realize the unexpected: if we do the right thing, if we are ethical human beings in the way we engage with our clients, we build relationships not only with clients but also with our people and the companies we run.

With our employees, with our staff, we can actually create a good environment and actually be profitable. Every single client we've worked with since 2016 has had the same piece of advice from us, and they're in a good position now. Whether they're in the US selling into Europe, or in Europe selling products and technology into the UK, they're in a good place, because the only thing they need to do is tweak the framework they're working within a little bit, and they're there. But those who've been cutting corners, those who have been abusing the loopholes of an unregulated space, knowingly or unknowingly, will have to pay the price, because now they have to go back and change their technology, and their clients will have to go back and update. So why rebuild something when you can build it right from the beginning, when you know the rules of the game? Which is common sense.

Be morally upright, just do the right thing. In financial services we've been told for many years that greed is good, but actually it's not good. In my time going through my economics degree, and then finance, and then my master's and all that, I was brought up thinking Milton Friedman was right: the business of business is business. Everybody knows this, right? Fifty years on, Milton Friedman has actually been proven wrong. No, the business of business is not business; it's actually doing the right thing. And now we're talking about stakeholder capitalism, which is a fancy word for just doing the right thing from the beginning. Don't cut corners. Greed is no good.

 

Brent Sanders  12:03

Yeah, I love that. We had to touch on Gordon Gekko, and this is a wonderful response. To pick up on your answer: I feel like there's a general idea that AI is going to get wildly out of control, and there's a bit of a scaremongering tactic where, all of a sudden, your jobs are going to be taken. We use these tools, we use automation tools, we know where there are massive gaps, and there are interesting ways of retooling and retraining. I like the idea of using common sense, using your best judgment; these tools are just not yet so complicated that they're going to become sentient and create different versions of themselves. So one common misconception we hear is that automation is going to replace every employee and decimate the workforce and the economy long term. I think the reality is going to be workforce training and retraining. So I'm curious, where do you start when working with or advising a company? How are you seeing companies approach retraining? How are you seeing them look at their workforce and say, hey, we're going to introduce automation, it's going to displace some people, and we need to fill other roles? Do those roles just emerge over time? How are you seeing this unfold from your perspective?

 

Clara Durodie  13:37

So the way we do it: typically we start with a client by discussing where they want to be as a business. From a business standpoint, what do they want to achieve in terms of number of clients, geographical expansion, headcount? What business metrics do they want to achieve in order to remain profitable, or indeed grow their market share? You asked me earlier about misconceptions; this is another one about the deployment of automation, or of more advanced forms such as machine learning or deep learning approaches. They have always been applied to solve a current problem, instead of starting from a blueprint that answers, with technology, with automation, with trained staff, the question of where we want to be as a business. Business first, business objectives, and then technology comes in as a tool, and people, staff, come in as support to deliver on that strategy. What I've seen so many times, and this is an expensive mistake people make, is: let's go for the lowest hanging fruit, let's automate whatever we can so we can show the board that it actually works, and we'll figure out later what else we can automate. That's a piecemeal approach. It works and it delivers, it cuts costs, but then what do you do with the people you fire once the job has been automated? So this is the conversation we typically have with our clients: what are you going to do with these people? Most of them know your business, they've been with you for many years, they know how you operate, they know your clients, they're part of your family. That's the ethical consideration. Are you going to automate the jobs, fire them, and then later on hire new people and train them? It doesn't make sense. So an AI strategy covers the employment and retraining component as part of how to keep people going, how to keep the staff on board. That's our approach. When it comes to the training piece, the way we run it, an AI strategy has a reskilling and retraining piece for the workforce. We recommend that our clients put everybody, from the receptionist all the way to the chairman of the board, through a basic AI and automation course. Not for the purpose of learning how to code; the course is not designed to teach them to code. The purpose is literally to explain to them what AI is. For any of us, you're more likely to be afraid of something you don't know. But when you understand how that thing operates, whether it has three legs or five eyes doesn't matter; I know it has five eyes, so I'm not going to be afraid of it, because I know what to expect and I know how to handle it. And this is the piece that takes me back to my research in 2015, neuroeconomics, where we study brain activity and try to see how technology and all of this automation works on and influences our brain, and by extension our decision-making processes.
So whether we make decisions at the board level, affecting everybody, spending millions and billions on automation and digitization projects, or we help the receptionist understand what options they have going forward, retraining and reskilling, people are smart; you just open the door for them. I've seen it time and again: open the door a little bit, show them the light, and they become curious. I've seen so many cases where very junior staff, a receptionist, the mail room, back-office staff, once they saw what they could do with their careers by getting into analytics, once you shared with them the power of data and the beauty of working with data to make the business more successful, to help clients and to help themselves in the job they do, they became curious. And then there is a host of free courses online. That's all you need to do as an employer: open the door a little bit, pique their curiosity, give them the basics of what AI is, and they'll find their way. Sometimes clients work with us and we deliver in-house training for the staff, either for non-technology people or for the data scientists; that can be done, there are so many suppliers, and we're not short of online courses on data science and AI. But the core concept, and I'll wrap up after this, is that this kind of training, from the receptionist all the way to the chairman of the board, irrespective of their background, first puts one common denominator on what the organization wants to convey to everybody in the company. Second, it allows people to understand what AI is, and that does two things. The less senior people are no longer afraid they're going to lose their jobs. And for the most senior people, it helps them know how to use this tool; this technology, automation, is a business tool to grow the business, to make the right decisions, to keep the business profitable. My thesis is that they don't need to learn how to code, just as we don't need to learn how to build a car engine in order to drive a car and keep it safe on the road. Right?

 

Brent Sanders  20:39

That's a great analogy.

 

Clara Durodie  20:40

We just need to know how to drive the car. Where's the key? How do you lock the doors? How do you start the engine? How do you hit the brakes? What do you do to keep the car safe on the road? What are the rules? That's what we need to do in terms of retraining our staff and the workforce. Firing people is not the right thing, I would say. Obviously there are special cases, and I'm not a great believer in blanket generalizations, of course. But as a common-sense rule, here's an example: why fire people who've been with you for so long? It's actually cheaper to retrain them than to hire new people, because you have to go through the cost of hiring new people and the cost of inducting those people.

 

Brent Sanders  21:29

I was posed this question recently: is it ethical to release these resources back into the workforce? It was so funny how it was phrased, because it's so dehumanizing to call a person a resource, right? It was like, okay, if you phrase it like that, I kind of know what answer you want. But it sticks with me as a question, as somebody who focuses on automation: is it ethical to do? I don't know what the right answer is; it kind of depends on your value system and the society you believe in. But the easier answer, and the one where I agree with you, is what's the smarter thing for a business to do. The business is going to be much better suited, to your point, by retraining, by putting the training you've already invested in to good work, because just onboarding employees is difficult. And we'll see if it's a global trend, but at least in the US right now it is very difficult to find new employees. So one thing we look to in all of our automation work is how we make the existing people force multipliers. Everyone is coming to us asking, we just need to scale, we can't get the resources. It's not a matter of eliminating jobs; this isn't the labor arbitrage we used to run into. Maybe that's a market condition, but I really don't think so. I hope this is something that actually makes junior employees, people with less specialized training, more powerful; it's giving them power tools, essentially. So yeah, wonderful response, I think it's a great perspective.

 

Clara Durodie  23:10

Very quickly, I'd like to pick up on the point you made about treating staff as resources, referring to them as resources. If you read my book, you'll enjoy the section where I talk about applications of AI in the personnel part of the business. I make the point that I'm actually not comfortable referring to staff as human resources; I hate it. So I was so pleased that you made that comment, because the most valuable capital a business has is its human capital. With all this automation coming through and everything else, we need to understand that people are the most valuable asset we have in the business. And it's up to us as leaders, as the people who sit at the top of businesses and run them, whether we make the most of that capital or just dispose of it, only to find that in one or two years' time we need to go back and spend more money to bring in more of this human capital.

 

Mark Percival  24:28

I know, I know. When Brent refers to them in an automation project, what does he call them, "the help"?

 

Brent Sanders  24:34

I do no such thing, I do no such thing.

 

Mark Percival  24:38

One thing I think is interesting when we talk about AI: Brent and I do a lot on the automation side, and it's relatively easy to explain what we do when we build a solution that doesn't involve AI, because it's very clear cut. It goes to this inbox, it opens up this email, it follows this set process. But then there's this issue, and you touched on it briefly at the beginning, where once you involve AI it gets fuzzier. You mentioned the word forensic, and it made me think about explaining, later on, how something happened. When it's a simple automation project that doesn't involve AI, it's very easy to see where something went wrong. When it involves AI, you get into this fuzzier space of, well, we trained a model, and based on that it came up with this risk profile for a customer, or it did something that's not as easy to explain as an old process where you would put somebody on it and say, here are the rules. How do you handle that in this industry? I think that's going to continue to be an issue; I don't think it's going away as AI continues to grow.

 

Clara Durodie  25:45

So there are two components, of course. Here's something else we've been doing with our clients since 2016: we help them build an audit framework for any model they build, or, at the point of procurement, they require an audit of how that model was built and how it was trained. So the auditability piece has always been very important to us, and I've always advocated for it: you can't explain something when you don't have any trail of what happened inside it. It's the same thing when you go buying food. Say you buy some soup in a can; you read the label, because you want to see what they put in that soup. I'm simplifying, and maybe it's not the right analogy, it just came to me. But you look at the ingredients because, quite frankly, you want to know whether you're allergic to one of them, whether you like it, whether you know what's in it. It's the same principle with building models. You need to know what went into it, what's in it, how they built it. And for some models, depending on the type of outcomes or where they're applied, we actually go all the way down to the quality of the data: we look into what data they used, how they managed it, and what the data management strategy was for that company. So that's the auditability piece. The second piece is about the explainability component, because that's what your question is really about: how do you explain it? You either build that audit trail, with solid data management in-house, so you understand where you get your data, you look at it, you know what you've done, and then with a high level of comfort you can say, yes, I know why the outcome is this way, or at least I know how to go back and trace it and see what happened if there is some anomaly. Or there is a second way to deal with explainability, which is what the European regulator has decided to do, and I think rightly so: they put systems in place and required this auditability to be there. They have become, what do you have in the States, the FDA, but for AI. It's regulated, and people think these regulations are unwelcome and stifling; I've heard quite a lot of criticism addressed to Brussels over this initiative. Every time, I try to explain to people that it has to be done, because regulations exist precisely because other people have abused the system. So the second way to deal with explainability is to follow the regulator and try to honour what the spirit of the law wanted to convey. And obviously there are some technical options as well. I wrote about this at great length in my book; I spoke to a couple of companies in New York, and there are people attempting to build this explainability either into a model or as an addition to it.
So there are ways. But either way, if you explain the outcome and then you can't correct it, or you don't know where to go to correct it, it means you have failed on the auditability piece, which is why the regulator in Brussels wanted that piece, traceability and auditability of the process, to be very clear.
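To make the audit-trail and explainability ideas above a little more concrete, here is a minimal Python sketch of what a model audit record might capture (data provenance, training configuration, evaluation results) and one crude way to trace an individual outcome back to its inputs. This is purely illustrative: the class, field names, and the scikit-learn toy model are the editor's assumptions, not a framework described in the interview or prescribed by the EU proposal.

```python
"""Illustrative sketch only: recording how a model was built and trained,
plus a simple per-decision explanation for a linear model."""

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


@dataclass
class ModelAuditRecord:
    # Minimal metadata an auditor might ask for at procurement time.
    model_name: str
    model_version: str
    trained_at: str
    data_source: str          # where the training data came from
    n_training_rows: int
    features: list
    hyperparameters: dict
    evaluation: dict = field(default_factory=dict)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


# Train a toy "risk" model on synthetic data and capture the audit record.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "account_age", "late_payments"]  # illustrative
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

record = ModelAuditRecord(
    model_name="customer_risk_score",
    model_version="0.1.0",
    trained_at=datetime.now(timezone.utc).isoformat(),
    data_source="synthetic demo data (stand-in for a documented internal dataset)",
    n_training_rows=len(X_train),
    features=feature_names,
    hyperparameters=model.get_params(),
    evaluation={"test_accuracy": float(accuracy_score(y_test, model.predict(X_test)))},
)
print(record.to_json())

# A simple per-decision explanation: for a linear model, coefficient * feature
# value approximates how much each input pushed this customer's score up or down.
sample = X_test[0]
contributions = dict(zip(feature_names, (model.coef_[0] * sample).round(3)))
print("contributions to this customer's score:", contributions)
```

The point of the sketch is only that recording provenance and training context at build time is what later makes an outcome traceable, which is the order Clara argues for: auditability first, explainability on top of it.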

 

Mark Percival  30:07

Yeah, and I wonder if that's going to change. At least in the US, there are a lot of services that offer AI as a service, and that adds another obscuring layer, where I'm passing the next piece of the process off to some AI service provider, they do their part, and then they hand it back to me. I wonder if in the future we'll see something like a SOC 2, or some type of compliance, where I can ask for their auditability, their records, whether they've actually gone and done their own audits.

 

Clara Durodie  30:40

I think that's where we are going. In Europe we are already there: the regulations have been drafted, they're now in the European Parliament to be approved, and probably in a year and a half they will go live. And I think the States, too, will embrace this in one shape or another. Again, I come back to the food analogy: you can't keep a society healthy if you feed people things that no one knows what's been put into. And quite frankly, would you like to eat that? It's the same question: would you like to use a system when you don't know how it works, yet as a business you take full responsibility for the outcome and you can't even give an answer to your client? It doesn't make sense.

 

Mark Percival  31:38

Yeah. Going further on that, do you see a world where we'll see more, I don't know, external AI services that are essentially leased or provided to you, versus rolling your own in the organization?

 

Clara Durodie  31:54

Well, I think there are attempts to provide these kinds of all-encompassing solutions. But I've always said they will only be as successful as their creators' understanding of the business environment they're being deployed in.

 

Mark Percival  32:10

Yeah, that's a big piece.

 

Clara Durodie  32:12

My point is, what I'm saying now is probably a few years ahead of where we are in terms of expectations, but this is the signal, and this is the logical trend I see from a business perspective on using AI. I also work with startups that build technology for the financial services industry or for the healthcare industry, and it's very clear that you cannot automate something if you do not understand in detail how that business model operates. What is the revenue model for that business? What is the operational model? Do you understand how they make decisions at the operational level? It's more than understanding whether I have to move something from left to right; it's about the nuances, the business context in which I'm deploying this. I'll give you an example. In financial services, under this big umbrella, we have so many different business models. Investment banking, for instance, is a different business model from wealth management: they make money in a different way, they operate in a different way, they hire people in a different way, they engage with customers in a different way. That's just one example, and the same thing happens in other industries.

 

Brent Sanders  33:53

I think this is one of the things we struggle with, even just around automation, and I would include AI as part of automation when I use the word. When we pitch to, or come into, an organization that's not currently adopting these tools, we really struggle with this contextual benefit: what would you get out of it? We know there are broad benefits we can offer, and we find ways to work around it. The only way we've been able to do it is to say, hey, we're going to do a proof of concept, as many other companies might, a way for us to get intensively involved with the business, and we need at least a couple of weeks to do that. We'll do a quick three-week engagement trying to deliver something of value, because without the context of how you operate your business, talking about automation in broad strokes, sure, it can answer emails or perform certain vague tasks, but I find it very difficult to pitch an idea to somebody without really knowing the business. As you mentioned, before we get into a company we'll understand what they do and how they make money, but until we see how they do it, and in detail what the steps are, it's very hard to make a realistic pitch about what the impact of AI could be.

 

Clara Durodie  35:21

I completely agree with you, and I can see the challenges. I've seen them many times when technology companies tried to pitch some of our clients and we were invited to sit in those meetings and ask the hard questions, which we would do. It's a difficult spot to be in. When you try to sell a piece of tech across different business models, what can you really do apart from bringing into your team someone who used to work in that space? Someone with real industry experience, not, say, marketing experience at a bank; someone who actually rolled up their sleeves and made things happen to keep the business profitable, who understands cost centres, profit centres, revenue centres, because those are the pain points for any business. So that's one piece: bring in someone who understands the actual environment where you want to deploy that tech. The other thing, and I remember saying this to one of the startups I mentor: we need to sit down, and you need to learn how to ask questions. Sometimes it's not that people, or potential clients, are unwilling to give you the answers; you just need to prompt them, you need to ask the right questions. It's the art of getting as much information as possible and sometimes asking the unthinkable question, about things a practitioner takes for granted. I'll give you an example. When I used to work in the front office, in wealth management, like any practitioner you have heuristics; after years and years of doing something you do things without even thinking, you don't give something a second thought because you've done it so many times. You don't think it's an important detail to let a technology provider know. But as a technology provider, you need to know how to ask those questions in order to get those details about how people operate and do things without a second thought. It's an art. That's all I can say.

 

Brent Sanders  38:04

Yeah, absolutely.

 

Clara Durodie  38:06

It's an art. You need to know a little bit of psychology; you need to understand how to pose and formulate questions in such a way that you help people give you the information you need, so you can design what they need.

 

Brent Sanders  38:22

That makes a ton of sense. I'm curious, when you're working with a company that has maybe run a pilot or dabbled in either automation or AI, and you're laying out the plan and how long it may take: how do you frame a strategic plan about automation or AI at the board level? It's something that obviously takes time and has so many dependencies and interconnected pieces that it's hard to set expectations, and hard, as I always say, to frame it the right way. How do you navigate that?

 

Clara Durodie  39:07

Typically the first question I ask a board or a C-suite is: where do you think you'll be as a business in five years, and where would you like to be? Give me some business metrics. By how much do you want to increase your profitability? How are you going to grow: by going into different jurisdictions, scaling, attracting new clients, maybe expanding your product range? What do you want to do in terms of business growth and expansion? Do you want to protect your market share or expand it? Are you offensive or defensive in your approach? What do you want to achieve? Sometimes clients say, we're happy with two million clients, we have no plans as a board to go to the States or to scale, we are happy. Why? Because in five years we want to sell the business. If that's the strategy, that's fine. It means they have no interest in expanding, and it means you have to help them protect their market share and use technology as best they can to look after their clients. Then we drill down from that and ask, how do we do that? What kinds of technologies are available to us in order to deliver this business objective of looking after our clients? What does it mean to put everything on mobile, and what does that mean to our clients? Then we work with the marketing team, we segment the client base, and we try to understand that client base to help them deliver the best they can through technology and keep their clients happy. That's how we operate. We don't work from "what's the lowest hanging fruit?" I've heard that so many times, and when I hear it I say, please don't repeat it; it doesn't work. We don't operate like, let's just automate whatever we can so we can prove to the board that it works. We work with clients at the board level, and we tell them: do not put pressure on your team to show you that machine learning models work. Of course they will work; the people who sell them are very good at what they do. But to what extent was the objective just to automate something? What's next for you as a business? Time and again it has proven to work better when you go from the business objective down, help clients decide what they want to do, and then pick the right technology to help them deliver on those objectives.

 

Brent Sanders  42:28

Makes sense. It sounds like you take the business direction and run with it; it makes a ton of sense.

 

Clara Durodie  42:37

And Brent, quite frankly, take an unlikely scenario, just for the sake of it, and run with it. If you want to pitch a piece of tech to me as a business, and I used to be there, I'm not talking from an armchair consultant position, because I used to be in the front office signing off procurement, investment propositions and plans, I just want to know that it makes sense for me and that it's going to make money for me as a business. So when you come to me and tell me, my tech does this and that and the other, you have to put it in my context: how is it going to work for me? When you start asking the deep questions, like what's your business strategy, so we can pick the right format, shape our product, or deliver the proof of concept in a way that makes sense to you, that's the power of it, because there's no such thing as a blanket solution for everybody. Yes, of course, everybody has email, but what does that mean in a given organization? I know organizations that actively reduce email exchange and have a policy that an email should not be longer than a certain number of words unless it's absolutely urgent, and you don't reply to all unless you have to.

 

Brent Sanders  44:07

Yeah, that's great. It's funny: whenever you're pitching something, whether you're in a consulting role or selling anything, understanding what's actually important to the business is what I'm getting at. I've heard this presented in the context of sales: say you're selling machinery to a plant manager, and you say, your machines break down five times a year, we'll get it down to two. If the foreman or plant manager is incentivized so that as long as there are fewer than two breakdowns they hit their bonus, it's about understanding those mechanics of what is actually important. Is it okay to have breakdowns? Is it okay for certain metrics to be met versus others? I guess when you talk at the board level it's helpful, because you're going to get a clear direction that isn't being filtered through management and their incentive programs and things like that. Great insights.

 

Clara Durodie  45:09

But again, even at the board level, it's a skill to be able to ask the right questions, so that you help people dig into the information they need to form their own decisions. The way we work with our clients, again, is that we do not provide ready-made answers or what we think is best for them. What we do provide is frameworks and questions to help them think, because they know their business better than we do, and they know their board and its dynamics better than we do. But we give them the framework so they can ask the right questions and make the right decisions. That's one of the reasons why, at the end of each chapter in my book, I include a set of questions to ask at board level. I'm a great believer in formulating the right questions in order to get the right answers, whether to help a business make better decisions or to help a technology vendor provide the right context for a particular potential client. I hope that makes sense.

 

Brent Sanders  46:22

No, it makes perfect sense. It's great. What's the profile of a company that's typically a good fit to bring you in at the board level? Is it anybody and everybody, or is there a profile you're best suited to?

 

Clara Durodie  46:37

We typically work with financial services incumbents, the companies that have been around for a long time, but we also advise and work with fintechs, because fintechs are becoming part of financial services, and I think over time everything will become fintech as opposed to financial services. We help fintech providers not only create products that are suitable for and easy to embed with traditional industry players, but also, depending on their profile, we help some fintechs think on their feet in terms of product development strategy, go-to-market strategy, and what comes next. Because sometimes, when you think you know an industry but you're completely parallel to what a real practitioner lives and breathes every day, there's no chance of ever selling that fintech product.

 

Brent Sanders  47:44

Interesting, that's great. Clara, anything you wanted to promote while you have an audience here? Your book is definitely out there and available, right? It's available, I believe, on your website; that's where I saw it, but probably also on places like Amazon.

 

Clara Durodie  47:59

So the first edition of the book is no longer available on Amazon. We have only a few copies left, and they can only be ordered from our website, cognitivefinance.ai. More importantly, I'm not a great believer in promoting things; in that regard I'm a great believer in word of mouth, in people talking about the work I do because they enjoyed it and found value in it. But what I would like to mention here is my weekly newsletter. I write a newsletter on the business and governance of artificial intelligence in financial services, and it goes out every week. It's called Decoding AI, like the book, and it's free to sign up to the free version. It has helped a lot of people understand the industry. If you're a product provider to financial services, you need to understand the pains the industry is going through, so it's an interesting newsletter to get every week, because you'll understand how we struggle as an industry: what the challenges are in terms of strategy and governance, what kind of technology is out there, what kind of investments are being made. Those are the main pieces I talk about every week. But also, if you work in financial services and you want a good understanding of how the regulations are moving and how strategy should be considered, then my newsletter would be relevant reading for you. So whether you're trying to sell to the industry or you're an industry practitioner, for both of these tribes the newsletter would be a great addition every week.

 

Brent Sanders  49:54

Yeah, excellent. I've been following the newsletter; it's excellent, and I'm always excited when it hits the inbox. So thank you so much, Clara, for coming on and giving us your insights around AI, automation, and this whole topic. We would love to have you back on in the near future.

 

Clara Durodie  50:10

Well, it's a pleasure. Thank you very much for having me.

 

Brent Sanders  50:13

Great. Thank you very much.




