In this episode we speak to M. Scott Ford, the Chief Code Whisperer at Corgibytes, a company that focuses on mending software. He is also the host of one of our favorite podcasts, Legacy Code Rocks. This episode is all about legacy code, a topic that is really near and dear to the RPA community.
Just a note: this episode may get technical, but it has some solid lessons for anybody dealing with tech that frightens them.
Dive Into Legacy Code with M. Scott Ford
Mark Percival, Brent Sanders, M. Scott Ford
Brent Sanders 00:03
This episode, we speak with M. Scott Ford, the Chief Code Whisperer at Corgibytes, a company that focuses on mending software. He is also the host of one of our favorite podcasts, Legacy Code Rocks. This episode is all about legacy code, a topic that is really near and dear to the RPA community.
Mark Percival 00:22
Well, thanks, Scott, thanks for joining us. I just want to get a quick background from you. My background on this is that I actually know you from your excellent podcast, Legacy Code Rocks; you do a great job with that. If anybody's listening, it's a really good podcast to follow if you're in the legacy space. And if you're in RPA, you're likely in the legacy space. You've been doing that for a while now. We'd love to get your background on what first drew you to the legacy space and what keeps you there.
M. Scott Ford 00:52
Yeah, so what drew me there is, honestly, joy, which I know for some people sounds strange, that there are people out there who enjoy working on legacy stuff. But I genuinely enjoy it. I enjoy working with a giant tangled mess more than I enjoy working with a blank page. And so that's always been my career path, but it took me, I don't know, 10 or 15 years to realize that that's what I was doing. That's why I kept job hopping. I would work someplace for six to eight months, and then I would get bored. And it took me a long time to realize that the reason I got bored was because they told me to stop fixing bugs and work on features. My typical experience anytime I started a new job was: okay, you're brand new to the team, so you can learn the system by triaging the bug list and fixing bugs. And I loved that. Then there was this magical day when I was told, okay, you know enough, here's a feature. And I was like, but really, have you seen the bug list?
Mark Percival 02:00
This is, this is a rare, this is a rare skill set. I think a lot of developers kind of follow the other way, which is the brand new shiny features.
M. Scott Ford 02:08
Right, right. Yeah, it was like, the shiny features? No, I don't want that. Can I please keep fixing bugs? I got, you know, less-than-favorable performance reviews, because I was rocking the boat too much about wanting to refactor more. And yeah, it was interesting. So I would inevitably switch jobs, and I would like it for a little while, and then not. And like I said, it took me about 10 years to see that there was a pattern. And then creating my own business became a forcing function: if I like doing this kind of work, and I say this is what the business does, then clients will come to me who want that kind of work, and I won't have to go hunting for it anymore. And I won't have to be in the trenches at a company convincing people that the improvement work is worthwhile.
Mark Percival 02:58
Yeah, what does a company look like that's looking for that? Is it somebody who has a system that they've had for years, and they want to start fixing it? Because there's obviously the other piece of legacy, which is the initial developer response. Sometimes that response is: rewrite everything.
M. Scott Ford 03:15
Yes, yes. So systems that are in that kind of legacy state really vary. We've seen systems that are six months old that need the same kind of love that something 20 years old needs. How quickly a system gets there can be more a factor of the constraints placed on the individual, or individuals, who are working on it. We've noticed that age isn't a close correlator. We've worked with startups that just recently got funded by venture capital, and they made a mess to get there, and now they want to invest in cleaning that mess up instead of just doing a rewrite. That's one extreme. At the other, there's an organization with a system that's been the lifeblood of the company for 20 years. It's been the reason it's making revenue. But they've accumulated a ton of debt slowly over time, and now they're recognizing that it's slowing them down, and they've hit this inflection point where it's time to pay it off, it's time to pay it back. And so they ask for help. So it's really two extremes there.
Mark Percival 04:28
Yeah, I mean, Brent and I were just talking about this, but the definition of legacy. I think people kind of default to age, but today it's amazing how much stuff that is really not that old is still considered legacy.
M. Scott Ford 04:44
Yeah, and I think it's difficulty to work with, difficulty to reason about. Andrea, my business partner, who is also my wife, her definition is anything that doesn't have communication artifacts to back up the reason it exists. So if the knowledge of why something was built has been lost, she classifies that as legacy, which I think is a great definition. I've heard others use the definition of anything that isn't understood by the people working on it anymore. That's another good definition.
Mark Percival 05:23
That falls back to the communication side, because the other definition I've heard is anything you're afraid to touch. Which is the communication side, right? Because if you don't know where it came from or why it exists, you're afraid to touch it.
M. Scott Ford 05:36
Yeah, absolutely. And there are tons of teams that have giant messes in the corner that nobody on the team wants to touch, because they haven't hired a crazy person like me yet who would love to dive in and figure it out. Or there's just genuine fear built up on the team, because every time they do touch it, things go drastically wrong. I've seen that before as well, where there's very reasonable fear that develops from past traumatic experiences: the last time we touched that, it cost the business a hundred thousand dollars or something like that, and we almost all lost our jobs. That's a traumatic event, and it's not an unusual story. So naturally, the people who work around that system will tiptoe around it afterward, because nobody wants to have that impact again.
Mark Percival 06:28
Yeah. And that falls into the RPA space. A lot of times you see RPA as a solution for not touching that problem. Layer on something else that doesn't actually go in and change the underlying code.
M. Scott Ford 06:43
Right. Well, it can be. I think automation, by its very nature, was meant to replace something that was manual and risky. And so anytime you have an automation solution that's working, the attitude is: let's not fix it, right?
Mark Percival 06:58
[inaudible]
M. Scott Ford 06:59
[inaudible]
Mark Percival 07:02
So when you look at this, when you go into a company that's looking at a legacy problem: one thing we see in the automation space is there's this other side of it, which is that sometimes you have to change the process. Sometimes it's not just that you can automate; there actually needs to be a process change. Do you see that as well when you tackle a legacy problem?
M. Scott Ford 07:23
Yeah, often the process itself is what needs to change. But it can be difficult to convince people of that, because they think the process is perfect, or there's a disconnect between the process they think they have versus what's implemented in the software. They don't realize that there are, you know, 40 or 50 corner cases that are all spelled out in the actual software, and anytime we make any change, we have to maintain all those corner cases. That's where the difficulty lies. In my experience, the process for what to do when something goes right usually isn't the problem. It's what to do when something doesn't go to plan; that's where you end up with strange corner cases. Or the business has changed and needs a new process, and that's the impetus for changing the underlying system in general. And so you end up with these small little changes that keep getting layered on. The first one's really easy, so we just squeeze it in. The second one's a little harder, but we squeeze it in. By the fourth or fifth one, it's getting harder and harder. By the sixth one, we've broken the first one. So that ends up being a problem as well. And sometimes the process was shaped by what was implemented already. So when you do make the decision that it's time to clean up, it's a good point to reflect on the process and ask: are there steps in here that we can eliminate? Are there steps we could do differently? And you take advantage of that.

But it's scary, because that's a lot that you're changing at once.
Brent Sanders 09:27
Whenever there's a request, or the business, let's say, changes, there's always this: okay, well, what else are we going to be affecting by enhancing or improving something?
M. Scott Ford 09:37
Yeah, and sometimes that's a scary proposition, and sometimes it's an opportunity to change more. Sometimes it's, okay, we're afraid, so we're going to change as little as possible; I've seen that, and you still end up with a mess at the end. I've also seen it swing the other direction, where it's, while you're in there, replace the kitchen sink too, which keeps adding scope. So you end up introducing a lot more risk, because it's almost like you're changing too much at once. Finding that balance can be really difficult: what's the appropriate amount of change to introduce? What's the appropriate amount of risk? Especially in an automation context, where there might be safety implications, you need to take that into consideration.
Brent Sanders 10:28
Anchoring this back on automation. One of the things, and I'm assuming this applies to the systems you work with, or to software systems in general, but one of the things we were just having a conversation about is that in the automation world there are not good environments. You can write code on your local machine, but generally you're working against a bunch of different systems. So I'm curious if you have any thoughts or strategies, for our audience, on ways to mitigate this. I'm sure you can unit test individual modules of your automation, but until you get access to that real-world environment, where you're getting data from accounting, cross-referencing it against real customer data, and then building some report, it's kind of a crapshoot. The code works, but until we start seeing real data, we don't know what's actually going on.
M. Scott Ford 11:27
Yeah, in that context, building simulators helps a lot, especially if you can build a simulator off of realistic data or realistic context. If you're working with a physical device, building a simulator for that device, or for the driver that drives that device, is critical. Especially if you have lots of disparate systems or disparate devices working together, you might not be able to get hold of all of them to do your testing as a developer, whether they're other software systems or physical systems; that does create a challenge. So what you end up having to do is program really defensively. My experience has been to never trust the inputs that you're getting, and to be really careful about all the assumptions you're making. Try to be really clear: okay, we assumed this would never be true, and we're going to state somewhere very visible that we encountered an instance where it was true, so the dev team knows to go back to the drawing board. And also think about, even though you're assuming that something will always be false, what are the implications of it being true? What should you do? Whether it's malformed input, or assuming that you have two names, first name and last name, or assuming the format of something, or assuming that you're going to be able to look up one value based on another value, be really, really careful about all those assumptions you're making. Put yourself in a mindset where you can't trust your inputs.
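As an editor's sketch of the defensive style Scott describes, in Python: treat every input assumption as something that can fail, and make violations loudly visible instead of crashing. The field names and the two-part-name rule here are illustrative assumptions, not anything from the conversation.

```python
import logging

logger = logging.getLogger("automation")
logging.basicConfig(level=logging.WARNING)

def parse_full_name(raw):
    """Split a name field without trusting the 'always first and last' assumption."""
    if not isinstance(raw, str) or not raw.strip():
        logger.warning("assumption violated: empty or non-string name %r", raw)
        return None  # decide explicitly what happens when the assumption fails
    parts = raw.split()
    if len(parts) != 2:
        # Surface the violation loudly so the team knows to revisit the design.
        logger.warning("assumption violated: expected 2 name parts, got %d in %r",
                       len(parts), raw)
    return {"first": parts[0], "last": parts[-1]}

print(parse_full_name("Ada Lovelace"))  # → {'first': 'Ada', 'last': 'Lovelace'}
print(parse_full_name("Prince"))        # logs a warning, still returns a best effort
```

The point is not the parsing itself but that each violated assumption is both handled and reported, so it can feed back to the team.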
Brent Sanders 13:18
Yeah, I love the idea of a simulator, that's brilliant. I'd never even thought of that. That's a really useful mechanism to put in place in those types of situations.
M. Scott Ford 13:27
Especially if you can architect things in a way that the simulator can run in production for a little while and collect data for you to play back. That's a great mechanism, because you can collect real data from the live environment in a way that's safe for the live environment and won't cause any problems, whether those would be latency issues or maybe privacy violations; you do have to watch out for those edge cases. But if you can do record and playback, that helps a lot, because then you're looking at the real scenario. Another thing you can do: there's a library that GitHub came out with several years ago called Scientist. It was originally written for Ruby, but it's been ported to most other frameworks by now. What you can do with it is design a candidate refactoring for a block of code, and basically say: I'm setting up an experiment, as a scientist, that's where the metaphor comes in, that the replacement code does exactly what the original code does. You can set that experiment up to run in production and collect the results. So if you're not really confident about your testing, you can basically deploy A/B testing into production, where A is the current code path that people trust, and B is the experimental one. The experimental one runs at maybe a 5% sample rate, and it never affects the outcome; it's only there to collect data. That way you can do your testing in production for, say, a week or a month, or however much time is needed to give you confidence that you've seen most of the edge cases.

And then if it got the exact same results and didn't cause any errors, you can swap it in. You can also look at things like performance. If your hypothesis is that the replacement is faster, you can test that in production with actual data by looking at runtimes. If it's actually faster, not slower, then you can eventually take that experimental version, make it the actual version, and retire the old one in a separate deployment.
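The pattern Scott describes can be sketched in a few lines of Python. To be clear, this is not the API of GitHub's Scientist library or its ports; it is a minimal hand-rolled illustration of the idea: always return the trusted path's result, and at some sample rate also run the candidate, compare results and timings, and report, without ever letting the candidate affect the caller.

```python
import random
import time

def run_experiment(control, candidate, args=(), sample_rate=0.05, report=print):
    """Always return the trusted control's result; sometimes also run the
    candidate refactoring, compare results and timings, and report."""
    t0 = time.perf_counter()
    result = control(*args)
    control_ms = (time.perf_counter() - t0) * 1000

    if random.random() < sample_rate:
        try:
            t1 = time.perf_counter()
            cand = candidate(*args)
            cand_ms = (time.perf_counter() - t1) * 1000
            if cand != result:
                report(f"mismatch: control={result!r} candidate={cand!r}")
            else:
                report(f"match: control {control_ms:.3f}ms, candidate {cand_ms:.3f}ms")
        except Exception as exc:
            # A failing candidate must never affect the caller.
            report(f"candidate raised: {exc!r}")
    return result

def legacy_total(xs):  # the path everyone trusts today
    total = 0
    for x in xs:
        total += x
    return total

# sample_rate=1.0 so the comparison always runs in this demo
print(run_experiment(legacy_total, sum, args=([1, 2, 3],), sample_rate=1.0))  # → 6
```

In a real deployment the `report` callback would publish to metrics or logs, and after enough traffic without mismatches you would promote the candidate.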
Mark Percival 15:53
That's interesting. One thing you mentioned was having assumptions about things like data; you mentioned, I think, looking up one piece of data in the database based on another piece of data. I think that's interesting, because a lot of times we think of legacy as the code. But there's also that assumption that, for example, a relationship in the database, a project always has a user, and then you find out that wasn't always the case. A lot of times that's hard to spot, because it's not just the age of the code, it's also the age of the data and where you're getting that data from.
M. Scott Ford 16:26
Right. And the system might be architected in a way now where it's not possible to use the system to create data in that state. It could be that it was a bug that the data got created that way; somebody thought they stamped them all out, but they didn't, and there's still one floating around in there. So the way to test it is to actually create the bad data manually, which can be really frustrating and challenging, and you end up having to know a lot about the system in order to do that. Building that up in an automated way is hard. And that's where a simulator can come in handy, especially when you're simulating an external system. If you're working with a real external system, getting it to behave badly, in a way that it behaved badly in the past but that you want to protect against in the future, is going to be really hard. But with a simulator, or some kind of mock or fake or whatever term you want to use, you can ask the simulator to behave badly, so you can make sure your tests are appropriately catching that scenario.
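A minimal Python sketch of a simulator you can ask to behave badly, per Scott's point. The system name, the method, and the failure modes are hypothetical examples invented for illustration:

```python
class AccountingSystemSimulator:
    """Stand-in for a real external system that can be told to misbehave,
    so tests can reproduce failures the real system won't produce on demand."""

    def __init__(self):
        self.failure_mode = None  # None, "timeout", "malformed", or "duplicate"

    def fetch_invoice(self, invoice_id):
        if self.failure_mode == "timeout":
            raise TimeoutError("simulated: upstream did not respond")
        if self.failure_mode == "malformed":
            return {"id": invoice_id}  # simulated: the 'amount' field is missing
        if self.failure_mode == "duplicate":
            # Replay a historical bug: the same invoice returned twice.
            return [{"id": invoice_id, "amount": 100},
                    {"id": invoice_id, "amount": 100}]
        return {"id": invoice_id, "amount": 100}

sim = AccountingSystemSimulator()
sim.failure_mode = "malformed"
record = sim.fetch_invoice("INV-1")
print("amount" in record)  # → False: the code under test must cope with this
```

Tests then set each `failure_mode` in turn and assert that the automation handles it, something the real upstream system could rarely be made to do on cue.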
Mark Percival 17:38
Yeah, that's a good point. Going back to the RPA side, and the legacy side with that: a lot of times when people approach a problem, especially in the RPA space, there's this issue of, where do I start? The problem is so big. And I think you see the same thing when you approach a legacy codebase, which a lot of times is very hard to understand. Okay, where do I start? You mentioned going in and making one fix and then following up with another fix. When you come to a legacy codebase that, let's say, looks a bit unruly, how do you start that process? How do you figure out where you're going to focus? How do you find that traction?
M. Scott Ford 18:14
So for me, I like to look at metrics, and the kind of metric I like to look at is churn. That's looking at how many times a file or a class or an object or a method has changed in the source code repository, if you can get that information at that level of granularity. So knowing how often something has changed across its lifetime, and how often it's changed recently. Then use that to benchmark against some of the other quality metrics you can get, such as complexity, or duplication, or test coverage if you've got an automated test suite, and balance them against each other. If something has a lot of recent change, and you've also got high complexity and high duplication, that's a good place to start, especially if you have low test coverage. That's a good candidate for getting started. Whereas if, say, the complexity is really bad, the duplication is really bad, and the test coverage is really bad, but it hasn't changed in three years, you can probably just leave it. If it hasn't had to change in three years, it's not a guarantee, but chances are it won't need to change in the future. Whereas if it changed a lot in the last three months, then it's important that it's good quality code: easy to understand, well covered, without a lot of duplication. So for me, a really good starting point is collecting that data and then using it to figure out what to act on next.
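A rough Python illustration of the churn-times-complexity ranking Scott describes. In practice the file list would come from something like `git log --name-only` and the complexity scores from a static analysis tool; here both are hard-coded sample data (invented file names and scores) so the sketch stays self-contained.

```python
from collections import Counter

# Stand-in for the output of `git log --name-only --pretty=format:`,
# one changed file per line, one block per commit.
log_output = """
billing/invoice.py
reports/export.py

billing/invoice.py

util/dates.py
reports/export.py
billing/invoice.py
"""

churn = Counter(line.strip() for line in log_output.splitlines() if line.strip())

# Hypothetical per-file complexity scores, e.g. from a static analysis tool.
complexity = {"billing/invoice.py": 42, "reports/export.py": 8, "util/dates.py": 30}

# Files that change often AND are complex float to the top of the worklist;
# complex-but-stable files sink, matching the "leave it alone" advice.
hotspots = sorted(churn, key=lambda f: churn[f] * complexity.get(f, 0), reverse=True)
print(hotspots)  # → ['billing/invoice.py', 'util/dates.py', 'reports/export.py']
```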
Mark Percival 19:52
Yeah, it also implies that there's a lot of process change going on, since a lot of times you see those code changes in places where the company is actively changing the process.
M. Scott Ford 20:00
Exactly, exactly. And that can help you build your design in such a way that it defends against those changes. If there are process changes happening, you can make sure your software design is set up so it's nimble enough to react to those changes quickly. The really trivial example is a change of color: it's blue this week, it needs to be red next week, and green the week after. You structure your system so that's a configuration value that's easy to change, and the color of the system becomes trivial to update; the bulk of the code doesn't have to change in response to that configuration value changing. Now, you've made the system more complex, because you've introduced that configuration value, and it has another way it can fail, because it's not just a hard-coded value. But it's also more nimble in the face of that color value needing to change in the future.
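The color example can be sketched as a configuration lookup in Python. It illustrates both halves of Scott's point: the value becomes trivial to change, and the configuration introduces a new failure mode the code must defend against. The config shape here is an invented example.

```python
import json

DEFAULT_THEME = {"banner_color": "blue"}

def load_theme(config_text):
    """Read the banner color from configuration instead of hard-coding it."""
    try:
        overrides = json.loads(config_text)
    except json.JSONDecodeError:
        # The new failure mode a config value introduces: fall back safely.
        return dict(DEFAULT_THEME)
    return {**DEFAULT_THEME, **overrides}

print(load_theme('{"banner_color": "red"}')["banner_color"])  # → red
print(load_theme('oops, not json')["banner_color"])           # → blue
```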
Mark Percival 21:06
Yeah, that's a good point. So, looking at the legacy side of things, you brought this up: you obviously like the legacy space, but as I mentioned, a lot of developers default the other direction; they love the idea of new features. When you talk to developers, how do you, I guess, sell them on the idea of not a rewrite?
M. Scott Ford 21:32
Not a rewrite is hard. For me, the best argument against a rewrite is an observation from Joel Spolsky, which is that every line of code you have in your system can be thought of as a defense against a bug. What's your confidence that you're going to defend against every one of those bugs when you go to do the rewrite? If you've got an incredibly high level of confidence, go for it. If you've got an incredibly low level of confidence, then maybe you should stick with what you have. Another way of thinking about it is to draw a Venn diagram of what you have versus what you want, and ask how much overlap there is, regardless of the metric. There are some teams out there where what they have is PHP and what they want is Go. There's no overlap there. If there's a really good business case that you need to move to Go, then that's a rewrite, and you can figure out how to do that rewrite safely and gradually. But ultimately, you can't turn PHP into Go; that's not something you're going to be able to do. Now, if you have PHP and you need PHP, and you have steps one through nine of the process and you need steps one through ten, then add step ten to what you've got, instead of doing a rewrite that includes step ten. Those extreme cases are pretty obvious. It's the messy middle, where you're dancing on the edge, where it's really hard to tell which direction you should go. My bias is to try to harvest what you've got, to stick with what you've got, because I really see the value in what's been built already.

I try to recover and preserve as much of that value as possible. And I really look at it as: a lot of hard work went into this, and a lot of smart people worked on it before me. The assumption that I'm smarter than they are and can do it faster doesn't always pan out.
Mark Percival 23:51
Yeah, that gets lost a lot, which is that there's a lot of assumption that goes into the idea that, well, I could do this better, but sometimes there's a lack of understanding of why it was done that way in the first place.
M. Scott Ford 24:01
Right. And so that's knowledge that's often lost. There's a great poem by Rudyard Kipling; I think it's called "The Builder," I forget the exact title. But it's a poem about a king, who's also a mason, who tries to build a castle on a hill. When he gets there, he finds some stones that were left by a previous mason, and scrawled on one of the stones he finds, basically: someone will come after me who is also a builder; tell him that I too have encountered the same problem. He reads this and thinks, I'm smarter than that previous person; I'm going to build my castle here. So he sets out this grand plan and grand vision, and he builds his castle, and about 90% of the way into the effort he realizes it's not going to work. So he orders it to be torn down, and he orders all of the workers to carve that same message on every piece of stone. Kipling tells it more eloquently, but basically: I tried to build a castle here and I failed as well. You will fail too.
Mark Percival 25:12
This is perfect. This is what comments are for, right? Yeah. All over my code.
Brent Sanders 25:20
You're just missing microservices. No one had microservices then; it solves everything, right?
M. Scott Ford 25:24
Right, right. Exactly. And sometimes that's true, right? Sometimes they weren't able to be successful because there wasn't as much processing power available, or because we have a lot more RAM to throw at problems now, or because we can parallelize work in ways we couldn't then. That does turn out to be true sometimes. But I feel like it's often the case that smart people thought of that, and they weren't successful, and you might not be either. And it's really hard to know, because a lot of times that information, the why, why they did what they did, gets lost. There's a technique that can help defend against that. It's called an architecture decision record. If you do a search for that term, you'll find lots of really great resources on how to format these, but basically they're artifacts of the decisions you're making about the design of your system as you go along: the challenge you're facing, what solution you came up with, and why you came up with that particular solution. You save that away for a rainy day, and you keep it, ideally, in your codebase. So if somebody looks at the Git history in the future, they'll see that the decision record changed at about the same time as this block of code, and they can look at it and see: oh, this is what the team was struggling with.
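For reference, a lightweight ADR following the widely used Nygard-style sections (the project details below are invented placeholders):

```markdown
# 12. Cache exchange rates locally

Date: 2021-03-15

## Status
Accepted

## Context
The rates API we call for every invoice is rate-limited, and nightly runs
were failing once volume passed roughly 5,000 invoices.

## Decision
Fetch rates once per run and cache them for 24 hours.

## Consequences
Runs no longer hit the rate limit, but a mid-day rate change will not be
picked up until the next run.
```

Committed alongside the caching code, this record answers the future question "why is there a cache here?" without anyone having to remember.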
Brent Sanders 26:50
Yeah, those are invaluable. I got into the habit of having our team write those, or at least I would always be the one writing them. And it was kind of like, can we just get started? Can we get going? But it really does pay dividends when you come back. For me, three months later, I don't remember why I did anything. So even within that 90-day period, coming back to, okay, why did I do this? Why did we all decide it was a good idea? That is a great practice to get in the habit of, because it's so easy to look back and just think, no one really thought this through. And that goes back to looking at a Git history or a version history, understanding who did what, and making judgments based on that. Beyond that, another great thing from the RPA side is having a documented process, and then being sure to audit that process, those documents, and ask: hey, does this still resemble what the business is doing? It's tedious, but it's like a 30-minute task once a quarter. What we see so much in the automation world is people leaving; they leave a role, they take that institutional knowledge with them, and it wasn't captured anywhere. As you said at the start, if things just aren't captured, and those artifacts don't exist, they're in the ether, and they're essentially lost. And that's where I think you're putting your codebase at risk: hey, we don't really know why it does this, and you bring in a coder who says, all right, let's rip it out, and then you find out why down the line, after you've already replaced something. It's such a great risk mitigant just to keep a journal, almost.
M. Scott Ford 28:47
Yeah. And I think another thing that can be helpful is, like you mentioned, doing an audit of whether the process is actually this way. If you discover that it's not, work to get the code to match the process, even if it means deleting code. Especially if it means deleting code, because keeping code around that's no longer relevant just clouds and confuses the situation. I feel like teams, and it doesn't matter the context, whether it be RPA or any other kind of context, none of us are aggressive enough about deleting features once they're not needed anymore.
Brent Sanders 29:21
Are there any, you know, principles that you bring into legacy projects that you feel would work across any engineering culture?
M. Scott Ford 29:34
Yeah, I try to like so one thing that I always do and I encourage team members to do is to try to make sure that your commits, like each individual commit, makes sense as an atomic unit. If you find yourself describing the commit, and you usually find yourself using having to use the word and then that needs to be, that needs to be multiple commits. And in really breaking it up in that way. So if it's like you owe it This and I did this and I did this and I did this, you know, sometimes there's a concise way to describe all three of those things together, in which case, that's how you should describe it. And maybe let's the other things as bullet points down in the further in the body of the commit description. But often my preference is to, like, make those individual commits. And even if it means that I have to kind of like, go back, and almost like undo my work, and then redo it, and redo it. And kind of like that iterative way, which can feel a little silly sometimes, especially if you're in a hurry. But often, what I'm able to do is just kind of use a version control ID that lets me pick the lines that I want to commit. So I'm able to say like, Okay, this group of lines, like is one logical change, and this other group of lines is another logical change, I'm able to make those separate commits, I don't, as often have to, like, go back and do the work over again, so I can separate it out. But that that's one another is, is just testing, like autumn, you know, test automation. And trying to make sure that your tests are, are adequately protecting you against what you've built. So either follow test driven development, or try to break, try to make your tests fail after the fact. 
Because if you're writing your code first and then your tests, go break your code and make sure your tests catch the breakage. Break your code in a way that you hope the test will guard against, and make sure it actually fails before you commit it. I think that's the real advantage of test-driven development: by following that process relatively rigidly, it forces you to see that progression of "this test was failing, now it's passing, and it's passing because of the change that I made." You can also do the reverse: write the code first, then write the test, and then break the code in the way that you think the test will catch. If it does, then you're done. If it doesn't, then you need to rework something; either the test or the code doesn't do what you thought it did. You know, it's only as good as the tests actually are. I heard a talk by Kent Beck several years ago where he talked about how the tests are really an artifact of what you thought of. If you work under the assumption that everything you thought of was codified in a test, then every edge case you could think of, and every happy path you could think of, you wrote a test for, right? So if something comes up in production that isn't protected by a test, that's because you just didn't think of it. And that's kind of the human side of software development: we can't always protect against everything.
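The "break the code and make sure the test catches it" check can be sketched in a few lines of shell. The `slugify` function here is a made-up example, not something from the conversation; the point is the ritual of deliberately reintroducing the bug the test is supposed to guard against.

```shell
# A tiny function and a test written after the fact:
slugify() { printf '%s' "$1" | tr 'A-Z ' 'a-z-'; }
test "$(slugify 'Legacy Code Rocks')" = "legacy-code-rocks" && echo "test passes"

# Now deliberately break the code in the way the test should guard against:
slugify() { printf '%s' "$1" | tr 'A-Z' 'a-z'; }   # "forgot" to replace spaces

# The same check must now fail; if it still passed, the test proves nothing.
test "$(slugify 'Legacy Code Rocks')" = "legacy-code-rocks" || echo "test catches the breakage"
```

If the second check still succeeded after the sabotage, you would know the test was not actually protecting that behavior, which is exactly the feedback TDD gives you for free.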
Brent Sanders 32:54
True. True. I feel less bad about it now. Thank you.
Mark Percival 32:59
I think a lot of it is you have to have that pessimistic mindset. It's easy to test and say, I know what to expect when it succeeds. But understanding all the different ways it can break is really tough. Obviously, there's no way to guarantee you've caught everything.
M. Scott Ford 33:13
I think it's also tough when you think you're done, because you've seen it work, right? And as a human being who wants to feel some sense of accomplishment with what you've done, you know, "yay, it works now," that's where you want to be done with it. You want to stop on that high note, so going back and intentionally breaking it is hard. I think that's something else to acknowledge: again, you're human, and it's work you've just created, and now your intention is to poke at it and try to destroy it. I think sometimes just going on a walk before you do that, or taking a break, or maybe doing it tomorrow can help. It's almost like the difference between writing a draft and then editing your draft, where it does require different parts of your brain. It requires different levels of engagement with the work. And if you're trying to edit while you're writing, or trying to write while you edit, I mean, maybe you're a master craftsperson and that's not so hard. But I think for a lot of mere mortals, those are two different parts of our brain that we have to switch gears between, so give your brain time to do that.
Brent Sanders 34:27
I feel like a lot of it is expectation. Everything starts red, you turn it green, everything looks good, looks like I'm done, and your expectation is that you're moving on to the next thing, when really you need to, as you said, get up, take a walk, and come back to it. I always go back to that being more of a craftsmanship question, a testament to how far you're going to go. And by the way, I should caveat that it all depends. For me at least, the way I evaluate this: I work on some projects that are super early stage and may or may not see the light of day, and that's very different from other projects.
M. Scott Ford 35:09
Oh, yes, absolutely.
Brent Sanders 35:10
Long-running ones, where you know that this code is going to affect, you know, a factory full of people who are going to want to call you and wring your neck if it breaks. So it's just, you know, different levels of care, applied appropriately.
M. Scott Ford 35:21
Yeah, one of our core values at Corgibytes is what we call crafting context. It's really about taking the context that you're in into consideration, and determining the appropriate level of craft, and discipline around the craft, to apply to that context. And what you just described are two completely different contexts, and the level of craft that's reasonable in each of those situations is different. If it's just a toy that may never see the light of day, being super rigorous about it is not really reasonable. Especially if it's just an experiment, just something that scratches an itch, making it completely polished and beautiful and put together in the best possible way might not be appropriate, right? And then you've also got the case of: it needs to work, and it needs to always work. The level of craft, the level of attention you put into that, is very, very different because of that scenario. There's also: it has to work right now. You know, something's broken and we need an emergency fix, and you have to take that into consideration as well. So you put the band-aid on and you deploy it, and that allows things to limp along while you clean up and do the fundamental fix. That can be a way to go about it as well: all right, chewing gum will work for the next hour, so let's use chewing gum.
Brent Sanders 37:03
I've been there, I've been there. It does work, and it helps. Sometimes that's okay. I mean, I'm sure you've been in these situations, especially with a new codebase, where you're wrapping your arms around something, and it tends to be a bunch of falling down. And that's generally what informs you to make a sweeping change, or gives you a more effective opportunity to change the system.
M. Scott Ford 37:30
Yeah, I think you almost have to grapple or wrestle with it. Your mental model and what the code actually does have to fight, and which one wins isn't always clear. So yeah.
Mark Percival 37:53
Well, this has been super helpful. I think it's very interesting from our listeners' perspective, hearing the legacy piece, because a lot of our listeners do play in that space. Obviously, if you're in the automation space, you're dealing with older systems. Do you have any experience, Scott, where you've implemented automation, or been involved in that process?
M. Scott Ford 38:18
Um, there's been some kind of back-office process automation that we've been involved with, you know, automation around being able to deploy a system. But so far, with actual physical devices, we haven't had any experience as Corgibytes. I had some experience previously with similar systems, you know, embedded systems. But yeah, it's interesting how some of those other contexts are really similar to typical RPA, and how the RPA disciplines can really inform things like CI and CD, because it really is an automation problem at the end of the day. So a lot of the practices that come out of the RPA community can really help people who are setting up CI/CD pipelines.
Mark Percival 39:17
Yeah, that makes complete sense. Um, is there anything else you want to add? Or, you know, obviously, feel free to pitch your excellent podcast, Legacy Code Rocks.
M. Scott Ford 39:28
Oh, thanks. Yeah. So if any of what we were talking about today sounds at all interesting to you, and you're like, "Hey, I'm one of those crazy people who likes fixing bugs," then we have a home on the internet for you. Go to legacycode.rocks. We've got a podcast, we have a Slack community, and we have a weekly virtual meetup at 1pm Eastern that you can join. We were doing that before it was cool, before you had to because people couldn't meet up in person. But that was really just born out of the fact that, you know, in any one municipality there aren't enough menders who would be able to show up at a meetup without having to drive for two hours.
Mark Percival 40:14
Yeah, you bring this up in the podcast a lot, that idea of menders versus builders, and that mentality.
M. Scott Ford 40:21
Yeah, yeah. So mending being the activity of refining something, making it better and improving it. And making being the activity of taking something from scratch, from idea to inception. You can also kind of think of it in terms of the 80/20 ratio: if the maker really likes that first 80% and hates that last 20%, that's where the mender comes in, who really enjoys that last 20%. And I think they need each other; makers and menders need each other. Maybe that's a good thing to mention: without menders, the makers' work looks sloppy, and without makers, the menders have nothing to do.
Mark Percival 41:15
Yeah, it makes sense. I think obviously in our space, the RPA space, you're seeing it. It really is somebody who wants to come in and fix the systems. It is a mend.
M. Scott Ford 41:25
And we've also seen it's a spectrum. So, you know, there are people who will enjoy doing making work for six months, but then they need a break, a palate cleanser, a little bit of mending work for a few months to really help them. And you can survey your team and see, okay, what needs to be done, and who's good at what. If you've got people who are naturally good at mending, then match them up with work that needs it, match them up with a project that needs it. If they're stuck in a project that really needs maker energy, they're going to be miserable, and you might have instances of people trying to quit, like I would have, you know, back in the day when I started my career. But if you can recognize that they have that talent, that ability, and really harness their energy in the right direction, then it can be a really powerful tool for your organization.
Mark Percival 42:15
Yeah, that makes sense. This is definitely something that you see a lot in the RPA space: finding people who want to dive into this automation, to add to your existing team, is not necessarily that easy, because they have to be not just a developer, but also somebody who has the product and process expertise to understand the business problem you're trying to solve and automate away.
M. Scott Ford 42:36
Yeah. You have to be a special unicorn.
Mark Percival 42:39
Yeah. Well, Scott, thanks. This is, I think, been really helpful. Brent, anything else you want to add?
Brent Sanders 42:45
No, this has been really enlightening. And I'm glad that we got to have more of an engineer- and developer-centered conversation. For sure.
M. Scott Ford 42:55
Awesome. I really appreciate you having me on the show. And hopefully for the people who aren't as developer-focused, it was still interesting.
Brent Sanders 43:02
Absolutely. Thanks, Scott, thanks so much.
M. Scott Ford 43:04
Yeah. Thank you.