
Product Stories

QA – how to create impeccable software, fast with David Burns from BrowserStack


Summary:

Why is my app so buggy, and what can I do about it? Why can't developers just test their own code? Whose fault is it, and who is to blame?

Today’s guest is David Burns, Head of Open Source at BrowserStack, a popular platform for testing apps and websites in different browsers. David takes an in-depth look at quality assurance, or QA—when you need it, how it works, and how to do it right.

Episode Highlights/Topics:

  • How David got into QA after finishing college and computer science burnout
  • Industrial Psychology: Putting computers and people together before UX concept
  • Process Re-engineering: David discovered how people think and how to break things
  • What is quality assurance (QA)? Brings up issues/bugs that people might experience
  • QA/Testing: Simple way for startups to employ successful process is to review workflow
  • Test Pyramid: Unit, integration, and end-to-end levels of how and what is done
  • Manual vs. Automation: Testing type depends on trying or exploring apps/websites
  • Behavioral- or Test-Driven Development (BDD/TDD): Write test before any development



Read the transcript:

Victor [00:45]: David, welcome to the show.

David [00:46]: Hello, and thank you for having me.

Victor [00:48]: How does one get into QA, or how did that happen in your case?

David [00:55]: The way I got into QA was that I finished university kind of burnt out on computer science. I had a really bad computer science teacher through my final year of university, and I thought, well, I don't want to work in this industry if people are going to be like that. But I was studying industrial psychology at the same time, because people interest me: the way they think, the way they do things. Putting computers and people together was always a fascinating thing.

[01:22] And back then there wasn't really the concept, that I knew of, of UX, which I think I probably would've gone into. So I went into what I thought was the next best thing, which was process re-engineering, and then got bored of that because I was working at a bank. I highly recommend people don't work at banks, because it can be soul-sucking, to be honest.

[01:45] And then I went and worked at a startup, starting up the QA, because of all the things I'd learned at the bank and from my industrial psychology. I knew how people think, I knew how to break things, so let's go do that. At the time I was also trying to get into automation, because of process re-engineering: how could I automate repetitive tasks? And it was like, oh, this all just makes sense. It's just QA. How do you make things better with automation, which then generally makes things better for people?

[02:14] And it just kind of snowballed from there, and suddenly I was in QA. I wasn't someone with an innate ability to find the minutiae. Some QA people have that, and they're brilliant at it; I don't. But I was very good at thinking through how people were trying to do something and then making sure those parts were always really good.

Victor [02:37]: Yeah. And maybe to take one step back for our audience: QA, quality assurance. What would be your definition of it at this point? What is quality assurance? Because I think a lot of people have a lot of different mindsets around it.

David [02:54]: So I think in general, QA as a term should be shot into the sun and burnt and destroyed, I'm not going to lie. Because if we look at just the pure language of it, QA is quality assurance, and there's zero chance that anyone can assure quality. It just can't happen. Because of that, you get these tumbling questions like, oh, well, why didn't QA catch this? Well, because obviously they were looking at different things. But this is not QA's fault. The bug was there.

[03:27] It's a team problem. That's a different thing, but it's not a QA thing. So for me, QA is just about trying to surface issues that people might have. They're not always bugs; things might be working, but working in a very haphazard way, or in a way that makes you wonder how anyone thought this was how a person would think. Having another person just work through your code tends to be what QA is nowadays.

[03:57] And so that's where it comes from. Now it's also got automation engineers, and it's got security. A lot of security engineers will tell you they're not QA, but essentially they are QA, and we love them the same way as developers. They find bigger, better bugs than a QA engineer generally does. It's this all-encompassing thing of trying to surface issues and allow people to have confidence in their code.

[04:29] And this is where I think testing tends to be a slightly better term, but testers have a bad rep as well, and they tend to be underpaid relative to their engineering role.

Victor [04:41]: Ah, so when you negotiate, don't use the term tester; say QA engineer, essentially.

David [04:47]: You're a software engineer in test, because then you're a software engineer, and you can lean into these things and say, well, instead of leaning towards DevOps, I lean towards testing. Truthfully, that role is no less than an engineer developing the code. That's the main key of this: the focus on making the product awesome. And if you cut corners or pay less in certain areas, it's going to come back and haunt you. Maybe not in the first six months, but later down the line it will.

Victor [05:26]: Yeah, 100%. And some professional career advice from David right there. So you're saying that QA, or maybe we call it testing, just means understanding whatever we have here, a piece of software, or probably not just software but any kind of engineering or product, and making sure it's usable and does what the client or the user wants it to do. So what is the simplest form of quality assurance or testing? Say I have a small startup and I'm just getting started.

[06:01] I have some sort of app I want to release. What is the simplest thing I can do that's more than just taking a look at the code, opening it up, and seeing that it seemingly works? What system can I employ?

David [06:16]: I think the simplest way, if you're a startup, is this. Unless you're one of the few people who can do a startup on their own and be successful that way (they're rare, but they do exist), generally it's two or more people. So just get someone else to go through your workflow at a high level: how would I expect to use this? And they might find little things. This is where the spectrum of testing becomes huge very quickly.

[06:51] Because, like I said earlier, you've got your security testers, but you also have performance testers, people whose whole job is just to make sure a website can scale, if it's a website, and that when it scales it's not going to be super slow. So you have all these things, but start with the ability to just click through the process. Say you're creating just a login form to tell people: hey, this is our new startup, we're not ready for you yet, but let us know you're interested.

[07:21] Just get them to do that signup. Does that signup make sense? Just do that. And then from there, it's a question of how repetitive the testing is going to be. Are you going to keep doing this by hand? Do you want a CI/CD pipeline, where every time you make a change it gets pushed out to production? How do you get that confidence? It's all about finding where you want to be and what your confidence markers are, as I call them, that you want to hit.

[07:52] So if I push to Git, or to master, what steps need to happen for it to get to the end? And it builds and builds every time, so suddenly you get these highly complex processes, but they give you that confidence. The whole point of testing and QA is giving you confidence in your product, so that when it goes out to your customers, you're not going to look like an ass. That is the biggest thing.

[08:23] Because the minute you put something out there and it loses users, getting those users back is really expensive. This is why, if you want to move from one telephone company to another, they will try to keep you with discounts rather than lose you, because winning you back will cost them way more in the long term. The same concept should apply to software, but I don't think people realize that very much.

Victor [08:48]: So you're saying that the more confidence I want, the more processes I need and the more complex everything is. And what I need probably depends, firstly, on how many users I have and what's at stake, but secondly on how bad a bug would be. Is this a calculator app or a to-do app? Or is this banking software? Or software that runs in planes? How bad is a bug, I suppose?

David [09:19]: Yeah, exactly. The way software is developed at NASA is very different from the way software is developed at a bank, which is very different from software being developed by a little startup, because you have different ways of creating the tests. NASA can't afford anything to fail, because a failure is like a hundred million dollars to them. Look at the James Webb telescope: a failure up there is tons of money, and they can't afford that.

[09:57] So they've got to test, test, test, test, test, and they'll be testing at different levels. There's this concept called the test pyramid, and it breaks down the types of tests that you want. You'd have a unit test, or a small test; then you'd have an integration test, two components speaking to each other, do they make sense together; and then you might have end-to-end tests.

[10:20] And it's a pyramid because you want loads of the small unit tests and very few end-to-end tests, because end-to-end tests can become flaky and cause other problems. But you want to know all of that. And the pyramid is the same whether you work at NASA, a bank, or a startup; it's just that how much you do at each level, and what you do at each level, changes with the context you're working in.
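To make the pyramid concrete, here is a minimal sketch in Python with pytest. The `cart` module and the `browser` fixture are hypothetical stand-ins for your own code and a Selenium session; the point is the shape: many cheap unit tests, fewer integration tests, very few end-to-end tests.

```python
# Test pyramid sketch (pytest). `cart` and the `browser` fixture are
# hypothetical stand-ins, not real libraries.

# Unit test: one function in isolation. Fast and cheap, so write lots.
def test_apply_discount():
    from cart import apply_discount  # hypothetical module under test
    assert apply_discount(price=100, percent=10) == 90

# Integration test: two components talking to each other.
def test_cart_persists_items(tmp_path):
    from cart import Cart, FileStore  # hypothetical components
    cart = Cart(store=FileStore(tmp_path / "cart.json"))
    cart.add("widget", price=100)
    assert cart.store.total() == 100

# End-to-end test: a whole user flow in a real browser. Slow and
# flaky-prone, so keep only a handful of these.
def test_signup_flow(browser):  # `browser`: a Selenium WebDriver fixture
    browser.get("https://staging.example.com/signup")
    browser.find_element("id", "email").send_keys("founder@example.com")
    browser.find_element("id", "submit").click()
    assert "Thanks for signing up" in browser.page_source
```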

Victor [10:45]: So with a unit test, essentially I test one particular unit or piece of functionality, whereas an integration test makes sure things work well together. And the end-to-end test, then, is really an entire user flow, and I do that less often because it obviously takes more time; probably before each release, just to go through everything.

David [11:08]: Yeah. The thing is, it depends on how you're set up. If you've got your awesome, super-duper CI/CD pipeline with all the bells and whistles, you might throw the end-to-end tests in there for every release. But if you don't, you might not have them as automated tests; you might do that part as a user. So it runs all your unit and integration tests, pushes to a staging server, which you then manually test end to end, and if that's all good, there's another button that pushes it out to your infrastructure. How that happens varies; again, it's a spectrum depending on what type of company and what scale you're working at.
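As a sketch of those confidence markers acting as gates, the stages might look like the following if you scripted them yourself. In practice you would express the same thing in CI configuration (CircleCI, GitHub Actions, and so on), and `deploy.sh` is a hypothetical stand-in for your deploy tooling.

```python
# Hypothetical pipeline gates: each confidence marker must pass before
# the next stage runs. Real pipelines express this in CI config instead.
import subprocess
import sys

def gate(name: str, cmd: list[str]) -> None:
    print(f"--- {name}")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"{name} failed; stopping before anything ships")

gate("unit tests", ["pytest", "tests/unit"])
gate("integration tests", ["pytest", "tests/integration"])
gate("deploy to staging", ["./deploy.sh", "staging"])  # hypothetical script

# A human (or a small end-to-end suite) signs off on staging here;
# only then does the final gate run.
gate("deploy to production", ["./deploy.sh", "production"])
```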

Victor [11:51]: Okay, we've become quite complex with this already. Let's get back to the simple, simple steps. So obviously we have manual testing; this is done by a human, and we're just testing if stuff works: QA as someone who goes through the app and sees if things break. Is it really just going through the app, or is there a system behind it that's created for these tests? What does the simplest form of manual testing look like? Is there a to-do list or a framework?

David [12:29]: Yeah. So this is where, if you have good UX, you shouldn't necessarily need a framework in your app. Manual testers, or exploratory testers as they're more commonly known, are all about exploring the app, trying to use it as a user would. And as they go along, they'll be making notes: I've done this, I've done this, drawing things out. So if you've got email marketing software, you want to be able to design your email. Can I do that? Can I change things to bold, whatever?

[13:02] Suddenly you've got a whole [13:04 inaudible] type system there; can they use that? So they could have test cases. And if you're outsourcing, a lot of companies will have a test case management system: I've passed this on to the outsourcers, I know they need to do this, this, this, and this, and then people will just make sure it's there. But then you'll also get people who, like I say, are exploratory.

[13:28] So it's less the Mechanical Turk side of things; they're actually just testing it. They'll go: I did this, but this button took three seconds to load, and that seems a bit slow. You wouldn't catch that in automation, but an exploratory tester would be able to spot it, because it requires a certain amount of thinking. If you go for the mechanical turkers, you'll get a yes, this is all correct, I can do all these things, but you're not going to get that insight.

Victor [13:59]: So the two types are, essentially: one, I have a list of things I have to try, the test cases, which ensures repeatability of these tests, that we try everything we need to try; we have a checklist we can go through. Versus the exploratory type, which is more about trying to understand what a user would do and maybe finding new things nobody has thought of. Okay, awesome. And now that we have that, the question is: why don't developers do that? We actually get that a lot. Why don't developers just test their own app? They're developing it; they're supposed to understand it anyway, or not?

David [14:43]: I think for the most part this is partly a project management problem, and then there's a cognitive bias problem on top of that. The project management problem is that an engineer will say: I'm setting up my sprint; it's going to take me two days to do this feature, two days to do this, another two days to fix that bug, and five days to do the last thing, and I'm hoping I can fit 11 days of work into 10. Which is what a lot of sprints tend to be like.

[15:19] And at no point has anyone asked: in your estimates, are you making sure you've got time for testing? And when you've tested it, what happens? Because what generally happens is: this is my feature, it goes into testing, and the bugs that come out of it go into the next sprint. So there's this project management problem that rears up, and it's partly why a lot of large tech companies don't tend to do scrum and things like that.

[15:53] There, it's: your feature needs to be done. As part of that process they might do sprints, but it's the set of features before we release, not this must be done within this timeframe. The other side of it is the cognitive bias. Developers are testing as they go, because no one wants to come across as putting out bad code or low-quality work. No one ever wants that.

[16:26] So it's not that they're not testing it; it's that their time is squeezed a little, and when they do test it, they're testing for what they think should be there. Because what I think and what you think, even given the same problem, are going to be two different things. They might be very similar, but they will be different. It's like asking us both to write a ten-word sentence on a topic we both know equally well: the chances of us producing the same ten words are zero.

[17:03] It's just not going to happen. It's the same with developers. They will do these things, but the way a QA person, or just someone else, would work through it is different; it could even be a product manager. You don't necessarily need QA. You could get your product manager helping out on the QA side, or get other developers working on it. And that's where people think developers start to fail, but they're not actually failing; bugs are just accidents. I think a lot of people put too much pressure on developers: oh, it's your fault, this is why we've lost this. Well, it was an honest mistake. I wouldn't want that.

Victor [17:43]: And that makes a lot of sense, because on one side it's a second pair of eyes. It puts a bit of teamwork into it: hey, let's think through this route together, does this make sense? Whereas one person on their own thinks: there's no point in thinking about that again, I've already thought about it, this is what I came up with. And secondly, the question is: is it really a documentation problem then? It's not well-defined enough? But then again, how precisely do you want to define things?

[18:17] And isn't it sometimes better to catch things with a second pair of eyes? So it's interesting on a systematic level as well. Okay, so that's cool. Probably also, if you have a lot of manual test cases to go through, you just don't want a developer to spend an entire day testing an application, probably, if you do it manually.

David [18:41]: If you do it manually.

Victor [18:46]: Speaking of which: we now have these manual test cases, a lot of them. We have a lot to test; we have a more mature application, more users, a grown code base. People are really busy doing these tests, or sometimes they just don't do them. We need to get it released, there's pressure, and as always it's Friday, 6:00 PM. So the question is: can we automate this? What can you automate? Can you automate all the tests? Do you still need manual QA? How does that work?

David [19:18]: So you can never get rid of manual QA, and I've seen a lot of companies say, yes, I can. Remember the example from earlier, where you click a button and it takes three seconds for something to happen? Automation is never going to catch those cognitive types of tests. But yes, you should be automating, and automation should happen as early in the process as possible. It doesn't necessarily need to be the developer, though obviously a developer should be doing at least unit tests.

[19:52] Ideally they should be doing unit and integration tests. And if there are automated end-to-end tests and something they've done has broken them, they should fix those. They could also be writing the end-to-end tests; anyone can do that. But there should be a belief that quality is everyone's job throughout the team, from your VP of engineering down to whoever's at the bottom. The quality of the product is everyone's job.

[20:24] And so everyone should be writing some automation at some point. If you're at an outsourcing company, you might have business analysts, and they could be writing the business specs. Those could be turned into behavior-driven development (BDD) tests, using tools like Cucumber and things like that, if that's your way of doing it. You can start building these bits out, and suddenly it just fits the narrative: if I have this, then this, then that's right.

[20:58] And you can build out your automation very quickly and still hit those quality markers. And if you don't have an exploratory tester, like I said earlier, you could have your product manager go through it with a fine-tooth comb before a release: yeah, this is cool. Or, once it's released, everyone should be trying to use it, if you have a product that your company can use.

[21:21] Microsoft famously brought up the idea of dogfooding, the idea that you must use your own product and make it better. A lot of companies do that. Google makes people use Google Chrome and Android devices, so everyone gets to use these products and they can find the problems quicker that way. So it's not always necessarily about exploratory testers, but at least have people in the company using the tool.

Victor [21:49]: That's a good one; that makes a lot of sense. You just mentioned BDD. Can you explain what that is, and also what TDD is?

David [21:57]: BDD is behavior-driven development; it's the kind of thing a business analyst would write. TDD, test-driven development, is very similar. In both cases, you write a test before you write any code. And, possibly because I've been in the industry far too long, I never trust a test until I've seen it fail. The idea with BDD and TDD is that you create a failing test and then you write code to make it pass.

[22:28] That way, if you're supposed to do step one, step two, step three, you know those points are hit and that someone can do those things. BDD tends to be more at the integration or end-to-end level, where test-driven development can be everything. BDD has a specified format; I think it's roughly: as a user, I want to do this and this. With test-driven development it's: here's my test, I've written it, and it could be a unit test, an integration test, or a [23:02 inaudible] test. And then you start backfilling code to make sure it works as intended.

Victor [23:09]: Oh, that's cool. So in normal development I write documentation, say, okay, I need this button and it's supposed to do this; then a developer implements it, and then someone writes a test to cover this functionality. Versus test-driven development, where I first write the test; obviously the button's not there, nothing's there, so it fails. And then the developer's job is essentially to write code that makes the test pass. That's interesting. That's great.
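As a tiny illustration of that loop (the `slugify` function is an invented example, not something from the episode), the test comes first and fails, then the simplest code makes it pass:

```python
# TDD in miniature. Step 1: write the test first. Running pytest at this
# point fails with a NameError, the "seen it fail" moment David insists on.
def test_slugify_lowercases_and_joins_words():
    assert slugify("Hello World") == "hello-world"

# Step 2: write the simplest code that makes the test pass (green),
# then refactor with the test as a safety net.
def slugify(text: str) -> str:
    return "-".join(text.lower().split())
```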

[23:42] Another question we see quite often is around, I'm not sure I want to call it success metrics of QA, but putting it on a systematic level: how do I understand whether QA is doing a good job, or generally whether the process is working? Because obviously you've said bugs are normal; it's not the dev's fault or the QA's fault, it just happens. But then what is normal? How do I know that something is actually off the rails and we as a team need to look at how we can do better, versus what is totally normal and these are just things that happen?

David [24:25]: Yeah. So I think people should always be measuring the influx of bugs, and there are multiple ways you can do that. The main term being used at the moment is observability: how do you add observability into your application so that you can see what it's doing? Honeycomb.io is really pushing it at the moment; there are others, and a lot of it is open source, like OpenTelemetry.

[24:58] It's the ability to track your application while it is live, so you can start seeing live error reporting. You can see the difference in what New Relic do, because they're now starting to support OpenTelemetry. And what OpenTelemetry gives you is that you can see proper workflows, rather than just certain areas of your application.
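For a sense of what that looks like in code, here is a minimal OpenTelemetry tracing sketch in Python, using the opentelemetry-api and opentelemetry-sdk packages; the service and span names are invented for illustration.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print spans to stdout for this sketch; a real setup would export them
# to a backend such as Honeycomb or New Relic instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("signup-service")  # invented service name

# Wrapping a whole user workflow in a span is what lets you watch error
# rates per flow, not just per endpoint.
with tracer.start_as_current_span("signup") as span:
    try:
        print("creating account...")  # stand-in for the real signup work
    except Exception as exc:
        span.record_exception(exc)  # the error shows up on the workflow's span
        raise
```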

[25:21] And so you can track the error rates. Obviously you should always have low error rates, and that is again a quality point. You want that, because if you have high error rates, it reaches your users; users get upset and they leave. And you can track performance and things like that. The other thing is the usual ways, and my brain's failing me here, like CSAT, where you ask people: are you happy using this? Then you score it, and you want all your nines and tens.

[26:03] So you want to be tracking all of those things, and that's what product managers are very good at, because they want to see that CSAT. And another one, like what we were using around documentation: has this page actually been useful to you, yes or no?

Victor [26:19]: And NPS.

David [26:20]: NPS, yes. It was on the tip of my tongue and I just couldn't get it out, but yeah, NPS.

Victor [26:28]: Had to Google it as well.

David [26:30]: NPS has its problems, don't get me wrong, but it gives you a direction of where things are going. With a lot of these things, you want to be checking trends over time rather than specific points. And then you also want good telemetry coming out of your system. Take the way you take money from customers: you can't always test that in any of your systems, but you'll see very quickly if suddenly there's no money coming in.

[26:59] That's where good telemetry can catch it. I think the Guardian newspaper in the UK does exactly that. If memory serves, they have good automated tests in general, but they have zero tests around how they get subscribers. The main thing is they have brilliant telemetry: they know roughly where subscriber numbers should be at any point during the day, and if the number suddenly drops to zero, they very quickly roll back the whole system, bring it back up, and then find out what the cause was.
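A hedged sketch of that safety net: compare a live business metric against its expected baseline and roll back if it collapses. Everything here (the metric source, the per-hour baseline, the rollback hook) is a hypothetical stand-in for your own telemetry and deploy tooling.

```python
from datetime import datetime

def signups_last_hour() -> int:
    return 0  # stand-in: query your telemetry backend here

def expected_signups(hour: int) -> int:
    # Stand-in: an hourly baseline learned from historical traffic.
    baseline = [5, 3, 2, 2, 3, 8, 20, 40, 60, 70, 70, 65,
                60, 55, 55, 50, 45, 40, 35, 30, 25, 20, 12, 8]
    return baseline[hour]

def check_signup_health(rollback) -> None:
    actual = signups_last_hour()
    expected = expected_signups(datetime.now().hour)
    # A collapse to near zero against the baseline triggers a rollback
    # first and a diagnosis second, in that order.
    if actual < 0.1 * expected:
        rollback("last-known-good")

check_signup_health(rollback=lambda version: print(f"rolling back to {version}"))
```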

Victor [27:32]: Ah, that's interesting, actually. So you look at real-time data, know roughly where it should be, and figure out from that whether something is wrong with the application. That's also very, very cool.

David [27:49]: Yeah, OpenTelemetry is awesome. Charity Majors, who's the CTO of Honeycomb, tends to put out these little tidbits of very good observability practice. And her principal engineer is Liz Fong-Jones, I think; @lizthegrey on Twitter. She's awesome. They put out all these really good ideas of what you should be doing around SRE to improve quality, and not necessarily just about SRE, but about how to make sure your quality is up there. Because, like I said earlier, quality is everyone's job: from your DevOps to your engineers to your VP, everyone needs to be involved.

Victor [28:29]: 100%. Well, obviously you need a bit of data for that to work, so at a larger scale that's a cool thing. In the beginning you probably have to stick to manually going through it.

David [28:40]: Yeah, exactly. Everything is a spectrum. Like I said at the beginning, depending on your CI/CD pipeline, it could literally be that you've pushed to, say, GitHub, if you're using GitHub, and from there everything just works, because you might be using something like CircleCI, Travis CI, or GitHub Actions to run those little tests. And then you might have something like Heroku, where it's just: I do a git push to Heroku now to get my application out.

[29:14] That could be a CI/CD pipeline, even if it's just on your machine, because you're a startup. Or it could be: I've pressed this thing, and now it's rolled out to 10% of my infrastructure while it gathers information, and then it will automatically roll out fully if there are no errors, which is how Google and Facebook do it. And then it becomes: how do we make sure it gets to 20,000 servers super quick? How could we repurpose BitTorrent? I think that's how Twitter did it.
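A rough sketch of that staged (canary) rollout idea: ship to a small slice of servers, watch the error rate, then widen. The server list, deploy step, and error check are all hypothetical stand-ins for real tooling.

```python
import time

SERVERS = [f"web-{i:02d}" for i in range(20)]  # stand-in fleet

def deploy(server: str) -> None:
    print(f"deploying to {server}")  # stand-in for real deploy tooling

def error_rate() -> float:
    return 0.0  # stand-in: read this from your telemetry

def canary_rollout(stages=(0.1, 0.5, 1.0), threshold=0.01) -> None:
    done = 0
    for fraction in stages:
        target = int(len(SERVERS) * fraction)
        for server in SERVERS[done:target]:
            deploy(server)
        done = target
        time.sleep(1)  # in reality: soak long enough to gather real signal
        if error_rate() > threshold:
            raise RuntimeError("error rate spiked; halt and roll back")

canary_rollout()
```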

[29:48] So it all scales depending on where you want to be, but it's generally about maturity: how can you be more mature as a company? You don't need to hit all these markers from the start; I think a lot of people get that wrong. You need to slowly build your MVP out, and then quality and security and privacy should all be part of that as you're doing it.

Victor [30:12]: Right, 100%. Well, this has been super, super insightful. Thank you so much for that. Where can people learn more about you?

David [30:22]: So I have my personal website, theautomatedtester.co.uk, where I generally put out a few bits and pieces. I'm @AutomatedTester on Twitter and Mastodon and other places, so you can follow me there. I generally talk about everything from politics to tech to sports; I do it all on social media, I don't do just one thing. And because I'm Head of Open Source at BrowserStack, my role is being part of the open source communities that are important to our customers, so testing communities, and you'll generally find me in one or another Slack related to those things. I'm happy to talk to anyone who wants to talk about automation and the like, so just hit me up on one of those.

Victor [31:19]: Perfect. Cool. Well, thanks again. Thanks for coming on the show. This has been a really great one, and speak to you soon.

David [31:28]: Awesome. Thank you.
