Interview with Marshall Sied / CEO of Ashling Partners

Marshall Sied is a co-founder of Ashling Partners, a professional services firm that drives efficiency gains and process improvements through Intelligent Process Automation and RPA. They work with leading intelligent process automation technologies to drive continual process improvement and better employee engagement for their clients.

We discuss Marshall's experience implementing RPA and automation for their clients.


SPEAKERS

Marshall Sied, Mark Percival, Brent Sanders


Mark Percival  00:02

Hi, welcome to another It's Automic RPA podcast. This week we're joined by Marshall Sied. It's Brent and Mark again as your hosts, and we'll just jump right into it.


Brent Sanders  00:11

Mark, do you want to start off, or should we just kind of dive into a background? I mean, Marshall, you can give us a background of how you got into the space and, you know, from the ground up, introduce yourself to the audience.


Marshall Sied  00:22

Sure, happy to run with it, Mark. Appreciate you guys having me on this podcast today; always looking forward to speaking with folks that are a part of this Intelligent Automation revolution. As Brent mentioned, this is Marshall Sied. I'm a co-founder of Ashling Partners, really a service provider in the space. So we represent many best-of-breed, what we would classify as best-of-breed, technologies in the space, kind of in that broader category we classify as hyperautomation, which is certainly something that I think we'll probably talk about today. To answer your first question, we got into the space, myself and the other co-founder, Dan Sweeney, basically through just searching for something that we thought would return value quicker, in a shorter amount of time. We came from kind of that monolithic, enterprise application ERP era. And at the end of the day, it was hard to show process improvement and return on that investment, because those projects were so dang long; people just got very fatigued by the end of them. And at the end of the day, people's work did not really change at all, right? Whoever was at the end of that invoice process, they just had a different UI that they were basically interfacing with. So when RPA came into the mainstream (it's been around for a long time in different forms, but it really came to the mainstream several years back), myself and Dan both quit what we were doing, kind of jumped right in, and founded Ashling Partners.


Brent Sanders  01:59

That's great, that's great. It's a really interesting background, hearing about ERP projects, or large-scale projects, taking too long, because that's something Mark and I have talked about constantly. And I'm curious to know what the first project was like in taking on, you know, hey, we're gonna dip a toe in automation. And as you decided, okay, we have a use case, we have a solution, how did you guys stumble upon going in this direction for your solutions?


Marshall Sied  02:35

We really started the firm with RPA kind of in mind, right, which is kind of unique. I guess a lot of what you see is large service providers and consulting firms that have started to add it, whether it's like a BPO managed service offering, or they've added it to, you know, sometimes their enterprise application offerings, right? So kind of using it as a poor man's integration between Workday and SAP, or whatever the case might be. I mean, to use the digital transformation term, we started this as digital natives in automation, because we thought this was more tangible, more concrete, and, you know, frankly, more realistic than broader digital transformations, right? Sure, we work with a lot of organizations that do broader digital transformations, and they do great work. But we thought this pillar was fantastic to be in. So we started it with RPA in mind to begin, and more holistic automation in mind, kind of from a breadth perspective, as we evolved, right? So, you know, we pretty quickly got into other technologies. And that's where we start to get into this conversation of what just six months ago was called Intelligent Automation; you know, Gartner re-coined that a few months back as hyperautomation, and what Gartner says usually becomes mainstream, so maybe we'll use that for this podcast. But, you know, pretty quickly use cases evolved. People saw the value with the happy path, and they wanted to take on more exceptions. They wanted to take on more value, more frequency, extend their processes upstream or downstream, you know, going from basically task automation to more orchestrated process automation. And just that natural inertia kind of pushed us into other emerging tech spaces like intelligent data capture and OCR, and digital process automation and kind of that iBPMS sector. We really got pushed there naturally, just based on trying to seek a business outcome and reverse-engineer a solution from there.


Brent Sanders  04:38

Interesting. That seems smart and organic. I'm curious where you guys typically play. There are a couple of phases that we usually think of when we think of automation, from uncovering opportunities to development to sort of production management. Do you guys do the full lifecycle, or do you specialize in a specific area of the solution?


Marshall Sied  05:00

That's a great question. And I'm going to use your term organic quite a bit here, Brent. So, Dan and I, the other co-founder and I, are really kind of functional process guys, right? That's always been our background. Unlike you and Mark, I'm not a software engineer by training; I certainly understand programming, architecture, and infrastructure, but our background is really process improvement. And we started the firm to be a little more agnostic, a little more advisory, setting up centers of excellence. Even on day one we were doing broader intelligent automation, just because we knew that focusing only on RPA was not going to be sustainable for a justifiable period of time. So we really started in advisory. And then after some of those engagements, setting up a roadmap, setting up governance, development standards, a framework around maintenance and support, our clients basically asked us: can't you just build this now? So once again, we kind of organically got pulled into that. So then we started bringing on, you know, our development leadership at that point. And our infrastructure leadership was close to follow, because guess what, this stuff can get messy if you don't understand access, and application release management schedules, and, you know, release change management processes at large organizations. So we just organically got pulled in more: we started in that plan and advisory space, and got pulled into the build. And, you know, now it's full-blown lifecycle, right? We certainly represent that ROC, as the industry terms it, the robotic operations center, so really that run side. And it's not your traditional production support model; from our perspective, we think that's where the innovation is going to happen, with a continuous process improvement mindset. So you have your operations data, you have your KPIs in the ROC, and you're able to infuse automation within your production support; you're able to start to actually leverage machine learning models to improve these processes; you're able to infuse process mining into that ROC. So for us, that plan, build, run is just continuous, right? It's never over; it continues. It's not a waterfall approach anymore. So to answer your question, it's full lifecycle from an automation perspective now, but it certainly did not start that way.


Brent Sanders  07:28

Great, great. That's super interesting. One of the areas that we think is most interesting is sort of the post-launch phase, right? Once the center of excellence is up, what happens after a bot has gone live, you know, 3, 6, 9 months down the road? I mean, how do you guys think about maintaining bots? And what are some tips you could offer our listeners on, you know, keeping a strong center of excellence? But beyond that, what have you seen work for making sure a bot lives on for a long period of time, right, and doesn't need to be constantly monitored? I always go back to the metric a lot of people put against a bot: how many support resources per bot? If you have any thoughts or tips around reducing that number, so you're not requiring a lot of people to manage or support a given bot, I'm curious to hear them.


Marshall Sied  08:34

Yeah, I do think this is still evolving, Brent, just to be transparent. I don't think there is a silver bullet for this yet. Because the reality is, if you're not decommissioning some scripts (scripts sometimes get interchanged with bots, but really we're talking about the scripts; the bots are the executors), then, you know, your business probably is staying static, right? I mean, systems change, interfaces change, processes change, acquisitions happen. So you're always going to have fallout. And we do want to track the bots we're decommissioning, to make sure we understand the reason code behind those decommissioned digital workers. That's going to happen, though, right? But I guess your question is how do you sustain and, you know, reduce the total effort of monitoring bots in production, right?
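A minimal sketch of the reason-code tracking Marshall describes might look like the following; the reason taxonomy and bot names below are invented for illustration, not Ashling's actual codes.

```python
# Hypothetical sketch of decommission tracking with reason codes, so a
# CoE can see WHY bots retire, not just that they did.
from collections import Counter
from dataclasses import dataclass
from datetime import date

# Invented reason taxonomy, purely illustrative.
REASONS = {"SYSTEM_CHANGE", "PROCESS_CHANGE", "ACQUISITION", "LOW_VALUE"}

@dataclass
class Decommission:
    bot_name: str
    reason: str
    retired_on: date

    def __post_init__(self):
        if self.reason not in REASONS:
            raise ValueError(f"unknown reason code: {self.reason}")

log = [
    Decommission("invoice_matcher", "SYSTEM_CHANGE", date(2020, 3, 1)),
    Decommission("po_rekeying", "PROCESS_CHANGE", date(2020, 4, 15)),
]

# The payoff: reason codes aggregate into a signal about why bots fall out.
print(Counter(d.reason for d in log))
```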


Brent Sanders  09:28

Yeah, I mean, I think there's this general assumption that bots are inherently fragile, right, versus other systems. They don't control their environment. They work against multiple other third parties or other environments. And so they're going to break. And it's like, yeah, how have you seen that unfold? I'm curious about your thoughts. It sounds like what you're saying is: be okay with decommissioning, which is, I think, a really interesting insight.


Marshall Sied  09:59

Yeah, I think that's a part of it. I think, to your point, it starts with how you build those bots, too, right? It helps if you have followed development standards that you have curated through your center of excellence and through your own internal policies, and you ensure that your development organization is adhering to those coding standards and those technical and architectural design standards. So I think that's a part of the puzzle. I think the other piece is being okay with decommissioning; just like you have attrition with your workforce, sometimes that is natural, right? That needs to happen in a business that continues to change. Same thing with your digital workers, your bots. And I think another aspect is making sure you're delineating the difference between a resource to support your current production scripts and bots, and a resource to make minor enhancements that don't justify building a new bot. The reason that's important is that minor enhancements tend to get chucked into that same general bucket. But if you're looking at this more agile, iterative approach, using continuous integration and continuous deployment pipelines, you need to think about that for minor enhancements, right? Because that will lower your total cost to support, and also to make minor enhancements. And, you know, once again, if you're not making minor enhancements, then you're probably not expanding your percentage of automation across the process, because the business will come back with additional ideas once they see how great this works. That's the user adoption, change management aspect of these emerging programs.
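Marshall's point about running minor enhancements through a CI/CD pipeline can be sketched as a simple promotion gate. Everything below is a hypothetical illustration; the pytest suite, the promote step, and the package name are assumptions, not any vendor's actual tooling.

```python
# minimal_promotion_gate.py: hypothetical CI gate for minor bot-script
# enhancements. A real pipeline would call the RPA vendor's test runner
# and deployment API; this sketch uses pytest and a logged placeholder.
import subprocess
import sys

def run_bot_tests() -> bool:
    """Run the bot's regression suite and report pass/fail."""
    result = subprocess.run(["pytest", "tests/", "-q"], capture_output=True)
    return result.returncode == 0

def promote(package: str, environment: str) -> None:
    """Placeholder for the vendor-specific deploy step, e.g. pushing a
    package to an orchestrator. Logged only, in this sketch."""
    print(f"Promoting {package} to {environment}")

if __name__ == "__main__":
    package = sys.argv[1] if len(sys.argv) > 1 else "invoice_bot"
    if run_bot_tests():
        promote(package, "production")
    else:
        print("Regression suite failed; enhancement stays in UAT.")
        sys.exit(1)
```

The design point is simply that a minor enhancement gets the same automated gate as a new bot, just a much shorter path through it, which is what keeps the support cost per bot down.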


Mark Percival  11:44

You mentioned, and I think I've seen this in the marketing material, sort of a self-sustainability on the client side, and getting to that point. Where do you see clients succeed with self-sustainability? And where do you see them sort of fail?


Marshall Sied  11:57

Yeah, I mean, great question, because there's a lot of marketing smokescreen out there. I've never seen a client successfully become fully self-sustainable, frankly, and I've never seen a client successfully outsource this entirely to an external software company or service provider. There always needs to be an outside opinion. Because, you know, you kind of need to think on the edge of the box, because that's where some of these great ideas are gonna happen. And in order to be on that edge, you need external insights and you need internal insights. So I think it's more about: how do we create more supply internally with our own resources, whether that's retraining and cross-training folks that have a certain background and pedigree in programming and change management and process discovery, process redesign, process re-engineering, or bringing some new blood into the organization? And then, while we're increasing that, how do we also increase our demand, so that we also have some type of strong external relationship with somebody who does this every single day for a living? This stuff is changing so fast, nobody can keep pace, right? We do this every single day of our lives, seven days a week, and we have trouble keeping pace with it, just because everything is changing and morphing so fast, because the market has recognized, you know, the business value here. This is a new era of automation. So to answer your question, I think it's always going to be a hybrid. Now, depending on the organization, the industry, the maturity of that organization and industry, and, you know, the reusable assets that are in the marketplace from a script and component perspective, that mix might change over time, but it's never going to be 100% insourced or 100% outsourced, in my mind.


Brent Sanders  13:41

That's a really interesting perspective. You know, one nuance we've been asking our guests about: based on their experience, there tend to be one or two models of RPA in an organization, federated versus centralized. Meaning, for example, you may see a common use case of RPA being adopted by a finance or accounting department, where the CFO has rolled out a handful of bots and IT finds out about it at the end of the process. That's something we hear about, and it's what I would call federated, right? There's a sort of bespoke process that begins within a non-technical unit of the organization and then eventually migrates to the centralized model, where an IT department sets up an automation department and, you know, creates first-class resources for automation. I'm just curious what you've seen: what's been successful, what works and what doesn't, any war stories around that mix. Really what I'm trying to get at is, for somebody in accounting, what's the best way for them to successfully get their bot through to IT, and bridge that gap between, you know, different parts of the organization that are really worried about two different things?


Marshall Sied  15:09

Yeah, yeah, it's a great question. Honestly, we've seen all shapes and sizes. And we've recommended, you know, different shapes and sizes, based on culture, based on the funding models that an organization really wants to adopt, based on use cases and the intake and prioritization of some of those use cases, because they needed to meet a certain threshold, a certain net present value, or whatever the business case process was, right? You kind of have to make it somewhat bespoke. But, you know, the recipe for success from my perspective, just to oversimplify a happy path, would be: on day one, always centralize, right? It's what happens after day one that really starts to take the form of that organization's specific DNA. When day one is successful, it's usually a mix of a finance and accounting leader plus an IT leader that have partnered, right? So they've centralized basically those two business units, and they've carved out some ways to get some, you know, big ROI, payback wins. And from there, they start to expand. So that's where we spend a lot of our time from an operating model perspective, talking about what happens after day one. Day one is pretty default; you can download a marketing slick on that, right? But what happens when this process that used to just be in accounts payable in finance starts to go into procurement, because we're extending it into a full-blown source-to-pay, procure-to-pay automation? That has a different context in regards to the conversation we have to have, from a development standards, from a SOX and compliance standards, from a communication-to-vendors standards perspective; things start to morph, right? On the other side of that spectrum, we've also seen it where IT really leads the governance, and what ends up happening is velocity sometimes suffers. Governance succeeds, right, but velocity suffers. So the payback period, the great, shiny business case that a finance leader built, kind of suffers at that point. So it's really about trying to find a healthy balance between the two. And I think another trend that's always top of mind in this industry is the whole concept of citizen development. And you really can't do that unless you have strong governance, and you have started from a centralized perspective; you can't start by federating, right? That's a recipe for disaster. It might work for a little bit, kind of in the shadows, if you will, but it's never going to be an enterprise-grade capability at that point.


Brent Sanders  17:53

Have you seen, you know, it's funny, I wrote a short blog post last week about what I believe is the fallacy of the citizen developer. And that may be born out of, you know, a lack of what I've seen. But have you seen, firsthand, organizations roll out a citizen developer program where, you know, people that otherwise wouldn't be writing code have started creating bots with some of these low-code or no-code toolkits?


Marshall Sied  18:23

Well, firstly, I should have done my homework and read that blog post, because that is a hot topic here, so I would have loved to hear your perspective. So, you know, there is a path; I will leave it at that. There is a path. But I do think that asking somebody who's never scripted or coded or had any exposure to programming in the past, you know, beyond VB scripting, is sometimes a challenge. I think you really have to think about a guided training path for somebody like that. And once again, you're gonna have fallout, right? We've done training where there's a bunch of prerequisites and, you know, a bunch of homework assignments. And some people just aren't that interested in doing it, right? They'd rather leverage automations from somebody else, maybe a quote-unquote citizen developer in their business unit, and be consumers of those automations, but wouldn't necessarily want to build those automations themselves. So I think it's about tiering professional developers, citizen developers, and overall consumers of automation, and just being realistic about those guided training paths.


Mark Percival  19:33

And you talked a little bit about that retraining aspect. How do you actually engage on that? Do you typically start with a company and say, hey, you know, here are some people we think would fit, here's sort of our guideline for who fits this role, and find those people? Or is it more kind of organic, where they come to you and say, hey, I want to develop for this, and you say, oh, well, here are sort of the best practices?


Marshall Sied  19:54

I think there's a push and a pull, Mark. I mean, I don't think there is a perfect equilibrium on that. I have seen this work successfully; I have seen citizen development programs work well. But it takes time, right? It doesn't happen on day one. So usually the CoE has some type of push campaign, you know, building awareness: automation and bots are not supposed to be scary; they're supposed to take away the work you didn't want to do, that you complained about doing on the weekend anyway, for the most part. And if you have that RPA awareness campaign, and you do your first wave of intake and automation candidate prioritization correctly, you've usually got some people that start to pull you, right? So you kind of flip the paradigm to being pulled into a lot of different business units, into a lot of different business processes. That's where you get some of those organic hallway discussions: hey, I really think this stuff is cool, I think I would like to learn more, is there a way that I can self-learn and get better? You'd better have a layered learning approach, right? You'd better have a self-service, self-paced approach. And then once they achieve that milestone, you kind of gamify it; then they might get classroom training, right? And then from there, maybe they have some technical screening problems that they have to do. And it's just as important to do that from a standards perspective as it is from a tool and technology perspective and a coding perspective. Because what you don't want is somebody that keeps trying to migrate their code into a production environment, when it's going to be used by more than just them, and every time your code reviewer in your CoE needs to kick it back to them because they didn't follow the standards, right? They didn't make it reusable, they didn't use the right naming convention, whatever the case might be. That's a bottleneck you want to get ahead of.
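The standards bottleneck Marshall mentions, code getting kicked back over naming conventions, is the kind of check a CoE can automate before a human reviewer ever sees the submission. A tiny sketch, assuming an invented department_process_vN naming convention:

```python
# Hypothetical pre-review check: flag bot/workflow names that don't
# follow the CoE convention before they reach a human code reviewer.
import re

# Invented convention for illustration: <dept>_<process>_v<number>
NAME_PATTERN = re.compile(r"^[a-z]+_[a-z_]+_v\d+$")

def check_names(names: list[str]) -> list[str]:
    """Return the names that violate the convention."""
    return [n for n in names if not NAME_PATTERN.match(n)]

submitted = ["fin_invoice_match_v2", "MyCoolBot", "hr_onboarding_v1"]
violations = check_names(submitted)
if violations:
    print("Kick back to developer:", violations)  # ['MyCoolBot']
```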


Brent Sanders  21:53

It's interesting. I think what you're saying, or at least what I'm taking away from it, is that when it's implemented successfully, it can create some inertia or momentum for automation in the organization?


Marshall Sied  22:03

There's no question. I think you just have to be realistic about it, though, right? I mean, you don't want to open it up to everybody. And then you also want to make sure you're not frustrating folks that are very interested and showing promise; maybe give them lower-complexity automation capabilities, while you keep the higher-complexity ones for the CoE and partners.


Brent Sanders  22:27

Yeah. Interesting. Great. So, Marshall, can you tell us, you know, how do you see the industry as a whole evolving? I'm curious what your thoughts are over the next year, over the next couple of years. Obviously, it's a frothy market; there are a lot of software solutions springing up and a lot of redundant solutions. I'm curious where you see differentiation happening, where you see things consolidating. Give us a sense of your perspective on the future of RPA and automation.


Marshall Sied  22:57

Yeah, sure, Brent. I guess the disclaimer is that I do not have a crystal ball, so this is just one person's input; hopefully nobody holds me to it. But, you know, I think we've already seen the convergence happen, right? I think a lot of organizations, you know, some of the largest software companies in the world, Microsoft, SAP, have recognized that they're a little late to the core, what I'll call fundamental, RPA market. And so they're playing catch-up in these spaces, right, and they obviously have huge client install bases. But some of the core RPA platforms are already years ahead from an enterprise security perspective, and from a flexibility and ease-of-use perspective on the UI and coding side. So I think you're going to continue to see some of these bigger names, at least from a marketing perspective, push what they're trying to build. And I also think it's just a convergence across what we're talking about with hyperautomation, right? It started with, you know, really business-rule-driven scripting, and that is, in essence, what fundamental RPA has done. And that's not discounting the value; the value is still critical, because a lot of what we do in our processes you can write down on a sheet of paper, right? There's no question there's a ton of value there. But you've seen some of these fundamental, core RPA vendors, software vendors, begin to expand, you know, what they are classifying as hyperautomation. So if you look at the bottlenecks that typically have hindered the deployment of bots, to simplify this, it's really process discovery, process understanding, and process redesign, and then some aspects of testing. From my perspective, the build itself, while it takes skill, you know, once you understand what you're building and you have your requirements, you can knock those out; you can pick up velocity over time. You can use reusable assets, or a code library like Azure DevOps, Git, or Bitbucket, or whatever, so you can operationalize some of the build. The bottlenecks are really about taking advantage of the ability to redesign a process to achieve a higher business outcome. And so categories like process mining and process understanding, and process modelers like Blueworks Live from IBM, and there are a lot of categories out there right now, I think are going to see the positive residual effects of the RPA craze, if you will. And one market has already seen it, right? If you look at use cases that involve semi-structured data, so what's traditionally been called optical character recognition, OCR, what we call IDC at Ashling, intelligent data capture, I mean, they have seen a renaissance. This is technology that's been there 20, 30 years; it's certainly gotten more sophisticated, using machine learning models in some scenarios for certain recognition patterns of certain documents. But over the last three, four years it's seen an explosion, right, because a lot of use cases involve RPA, but also, upstream, you need to capture data: turn an invoice document, as an example, from semi-structured into structured data, so that the bots can do what they do best and execute. So you've already seen it in the IDC area, and I think you're gonna continue to see that with process understanding, right?

So process mining, process modeling, essentially. And then, you know, I think some of these vendors will get more into the testing space, too, because testing always goes longer, on every project I've ever been on, right? It just happens; things come up. You can't think of every scenario, and then you have to be agile with your business owners and their constituents. So I think if you can do that from a single platform, there are a lot of efficiencies to be gained.
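To make the IDC example concrete: the goal is to turn a semi-structured document, like an invoice, into structured fields a bot can act on. The sketch below uses hand-written patterns purely for illustration; real intelligent data capture products use trained recognition models, and the field layout here is invented.

```python
# Hypothetical sketch: pull structured fields out of semi-structured
# invoice text so a downstream bot can act on them. Assumes OCR has
# already produced text; real IDC tools use trained ML models, not
# hand-written regexes like these.
import re
from dataclasses import dataclass

@dataclass
class Invoice:
    number: str
    vendor: str
    total: float

def extract_invoice(text: str) -> Invoice:
    """Very simplified field extraction over OCR output."""
    number = re.search(r"Invoice\s*#?\s*(\S+)", text).group(1)
    vendor = re.search(r"Vendor:\s*(.+)", text).group(1).strip()
    raw_total = re.search(r"Total:\s*\$?([\d,]+\.\d{2})", text).group(1)
    return Invoice(number=number, vendor=vendor, total=float(raw_total.replace(",", "")))

sample = """ACME Corp
Invoice # INV-4412
Vendor: ACME Corp
Total: $1,240.50"""

print(extract_invoice(sample))
# Invoice(number='INV-4412', vendor='ACME Corp', total=1240.5)
```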


Brent Sanders  26:59

It's interesting you say that, because we've talked multiple times about how hard it is to have some sort of development-test-production parity, because the data is, in many cases, near impossible to replicate across multiple tiers. And I'm curious how you've dealt with that in the past. I mean, I think our best approach and strategy is to work towards informing everybody that, unlike a normal software deployment, where you deploy and it's successful, on automation projects, because of that gap, there tends to be a lot of rolling back and redeploying. Just setting the expectation that, hey, it's totally normal for RPA to have to roll back and go forward. It's a little bit more of a dance, because it's so hard to attain that parity, and sometimes actually impossible to maintain that parity across tiers.


Marshall Sied  27:50

Yeah, you hit the nail on the head, Brent. I mean, honestly, it's an education cycle. That's what it is, because that is exactly what happens: almost every single automation has an issue when you quote-unquote go live. And, you know, if you come from the monolithic enterprise application world, like I did, that was sacrilegious. You can't have an issue when you go live; go-live weekend, everybody's, you know, on eggshells, right?


Brent Sanders  28:18

Right, it's a total failure if you have to roll back. Everyone's kind of rolling their eyes at you and saying, oh, man, I wish you would have, you know, tested that a little bit better, we wouldn't be in this situation. It's embarrassing.


Mark Percival  28:30

I think the one thing there, which you see in software as well, is that anytime you're dealing with external systems, and I'm sure you've dealt with this before, you kind of have this goal of: I'm gonna model for it. And what you end up doing is you end up building, you know, a mock, or some generation of data off an assumption of what that external system looks like. And then it works fine in the test. And then a week later, a month later, something changes on the external system and it breaks. So you get this feeling of, I've tested everything, it's perfect, and there's no way it can break. And it's a feeling of false security, because you've basically taken the external data out of the equation to the point that you're not actually testing it.
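Mark's false-security point can be shown in a few lines. In this sketch, the external call, its response shape, and the function names are all invented for illustration; the mock freezes our assumption about the external system, so the test keeps passing even after the real system changes underneath it.

```python
# Hypothetical illustration: a mock frozen to yesterday's assumptions
# keeps the test green while the real external system has moved on.
from unittest.mock import patch

def get_invoice_status(invoice_id: str) -> str:
    """Stand-in for a call to the external system (invented for this
    sketch). Imagine the live service later changes to return a dict
    like {"status": "PAID"} instead of a bare string."""
    return "PAID"

def approve_if_paid(invoice_id: str) -> bool:
    return get_invoice_status(invoice_id) == "PAID"

def test_approve_if_paid():
    # The mock encodes OUR assumption about the response shape, so this
    # test passes forever, even after the live system changes and the
    # bot starts failing in production. The external data never enters
    # the equation, which is exactly the false security Mark describes.
    with patch(f"{__name__}.get_invoice_status", return_value="PAID"):
        assert approve_if_paid("INV-1")

if __name__ == "__main__":
    test_approve_if_paid()
    print("test passed, while production may already be broken")
```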


Marshall Sied  29:09

Yeah, exactly. I've never worked with an organization where their test environment mirrors their production data set, right? You always, you know, you hope it's close. And it's nobody's fault; it doesn't make anybody a bad person if their data doesn't mirror production. But that is an expectation we actually set as a default stance. And there is not a silver bullet right now. I think there is a lot to be gained by folks that can figure this out in an iterative fashion. I mean, there's kind of a process hack, in my opinion, right? Some of this is just level-setting of expectations. But if you try to do soft go-lives, if you will, as a part of your testing cycles, and you communicate it that way to the business, I think most people are happy with that, because then they feel like it wasn't an epic failure when we do get issues. Versus, hey, we rolled this out in production, and that's, you know, another issue, right? Like, we can't roll anything out in production like that; we have to follow our promotion path. So I think it's just an education cycle, frankly, but we've started to do little process hacks like that until we can really figure out what the silver bullet is, if there is one. But I think that's an area where, you know, there will be some investigation, because that is prolonging development lifecycles, which is prolonging more bots being put into production, which is prolonging realization of business outcome benefits.
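One way to read Marshall's soft go-live in engineering terms is a shadow mode: run the bot against real production inputs, but only log what it would have done until everyone is comfortable. A minimal sketch under that interpretation; the invoice step, the flag, and the amounts are invented for illustration.

```python
# Hypothetical sketch of a "soft go-live": the bot runs against real
# production inputs, but in shadow mode it only records what it WOULD
# have done, so surprises surface without business impact.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("invoice_bot")

SHADOW_MODE = True  # flip to False once the soft go-live period ends

def post_payment(invoice_id: str, amount: float) -> None:
    """Invented stand-in for the real side effect (e.g. an ERP posting)."""
    log.info("POSTED payment of %.2f for %s", amount, invoice_id)

def process_invoice(invoice_id: str, amount: float) -> None:
    if SHADOW_MODE:
        # Communicated to the business as part of the testing cycle,
        # not a production release, so an issue here is not a rollback.
        log.info("[shadow] would post %.2f for %s", amount, invoice_id)
    else:
        post_payment(invoice_id, amount)

for inv, amt in [("INV-1", 120.00), ("INV-2", 99.95)]:
    process_invoice(inv, amt)
```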


Brent Sanders  30:38

That's great. I think that does it for this episode of the It's Automic podcast. Be sure to check us out at itsautomic.com, and we want to offer a special thanks to Marshall of Ashling Partners, which you can check out at Ashlingpartners.com. Stay tuned for more insights on the RPA world, on Apple Podcasts as well as any other podcast platform.


Mark Percival  31:05

Thanks, Marshall.


Brent Sanders  31:06

Thanks, everybody. 

