Tracks

Partner track: Why Is It Important to Fix Non-Specification Bugs?

Most of the time we spend on testing goes into making sure that the software product works as specified in the requirements documentation. The tester bases their testing on that documentation, combined with experience and common sense. Sometimes we log bugs about behaviour that conflicts with our common-sense interpretation of how things should work. These kinds of issues lead to skewed or outright wrong first impressions. How important is your first impression, really?

This talk focuses on the problems manual testers face during their work (testers who do not see the code itself): how do we react when we see something that is not normal based on our understanding or point of view?

Think of a human being as having code inside, written by mother nature: code that defines how the heart should beat or how blood circulates through the body. The tester does not see this code; they see only the result of its execution, which is your appearance and your fully working body. This is where the tester judges how you look: your outfit, the condition of your hair, your face and so on.

Let’s take an example. Your shampoo’s instructions say to wash your hair every two days, and you do: the “loop condition” runs exactly as described in the manual, yet your hair looks slightly dirty every day. A tester will see this and judge your hair’s condition, thinking that you could simply wash it more often because you have a different type of hair. Should we blame the tester for a judgement based on common sense rather than on the manual, or should we ensure that non-specification issues are logged and treated the same way as issues based on technical documentation?

Partner track: Breaking the Myths About Testing Casino Games

When people think of computer games, most imagine an evening at home on the couch, controller in hand, making polygons fight or shoot on their behalf on the TV screen; or perhaps sitting at their computer playing an MMO with friends, laughing over voice comms. In reality, there are far more kinds of computer games than just triple-A titles and the ever-growing avalanche of indie creations. Casino games are one such type. I don’t have to explain how irate you can get when you gleefully spend your time with your game of choice and an issue somehow gets in the way of your “me” time.

Most people who are not in the industry, or who are as far removed from game development as they can be, have no idea how games are actually tested. When I was taking my first steps in Quality Assurance, I was one of those people. Having spent plenty of time playing computer games of all shapes and sizes, I was under the impression that game testing was nothing more than a room full of people replaying the game over and over until they ironed out all the quirks and inaccuracies. Naturally this is not the case, but after about 12 years in different types of software testing and a plethora of interviews for my current job, I see that there are still people who think that testing games is literally just sitting around doing playtests all day.

This talk is aimed at newcomers to the industry and at people who want to switch from their current jobs to game development. I want to show that testing games, be they regular video games or casino games, is just as important and serious as testing communication or banking software; that it involves all the principles and methodologies of testing that apply everywhere else; and that it is also a fun and engaging activity with its own perks.

Partner track: Avoiding Local Maximum via New Design and User Testing

There are only so many changes that designers can make to breathe new life into a product. At some point, they'll inevitably hit the Local Maximum: the point at which they have reached the limit of the current design. The solution? Start over with a new design?

In this practical talk we'll share real-life user tests from before and after a redesign, the importance of including UX from the early stages of development, how to balance end-user and business needs, and more.

Partner track: Self-Contained Automated Tests and How to Remote Debug Them

How do you handle your work if you are employed at a fast-growing company where every sprint brings a ton of new features? The code base grows every day and product coverage becomes a considerable challenge. Even if you solve it by writing automated tests on a daily basis to handle the load, there may come a point where you have so many of them that maintaining thousands of tests becomes a whole extra headache. How can you keep track of which tests are actually important, and what can you do when they gradually start failing?

At Playtech, we have thought about these questions for many years and tried out different techniques. In this talk, I will give an overview of how we define self-contained automated tests – tests that run continuously, proving their relevance – and how they can pinpoint the exact component version at which they broke. Additionally, I will touch upon how to use remote debugging to find the issue so that others can keep using the same tests in the same environment at the same time. Ultimately, we aim to design automated tests that can keep up with a fast-growing product.

Key takeaways:

  • The idea of self-contained automated tests
  • Why it is good to query the production database every night
  • Why use a binary search algorithm to find bugs (see the sketch below)
  • How to remotely debug problems so that others can use the same environment in the meantime
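
As a rough illustration of the binary-search idea mentioned above – not Playtech's actual tooling – the sketch below bisects an ordered list of component versions to find the first one on which a previously green test fails; the version list and the test_passes callback are placeholders.

    # Hypothetical sketch, not Playtech's actual tooling: bisect over component
    # versions (ordered oldest to newest) to find the first version on which a
    # previously passing automated test starts to fail. Assumes a single
    # pass -> fail transition: the test passes on the first version, fails on the last.
    def find_breaking_version(versions, test_passes):
        low, high = 0, len(versions) - 1           # passes at `low`, fails at `high`
        while high - low > 1:
            mid = (low + high) // 2
            if test_passes(versions[mid]):
                low = mid                          # still green: the breakage is later
            else:
                high = mid                         # already red: the breakage is here or earlier
        return versions[high]

    # Example: deploy each candidate version to a test environment and re-run the test,
    # e.g. find_breaking_version(["1.0", "1.1", "1.2", "1.3"], run_test_against)

With n candidate versions this needs only about log2(n) test runs rather than one run per version.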

 

Gauge + Taiko: BDD for Web Revived

Does Behaviour Driven Development work? Unfortunately, it usually does not. While many people pitch it as a way to bridge the gap between stakeholders on a project, many teams fail to communicate their test scenarios to everyone involved.

Although this fundamental lack of communication can only be solved at the organizational level, BDD is often practised with the Cucumber or Robot frameworks. Due to the complexity of these tools, developers and testers stop seeing the benefit of the entire BDD approach and abandon the practice.

Recently, however, Behavior Driven Development has seen a resurgence in adoption thanks to the Gauge framework. With the latest release of Taiko, Gauge+Taiko forms a great combination of communication and testing tool.

In this talk, we will discuss BDD principles and how Gauge can be used to take Behavior Driven Development to the next level. With Taiko, the audience will learn how BDD can be taken to the web in a few easy steps and what needs to be avoided when these tools are introduced in an organization.

Key takeaways:

  • How Gauge can be used to take Behavior Driven Development to the next level
  • How BDD can be taken to the web in a few easy steps with Taiko
  • What needs to be avoided when these tools are implemented in any organization

WCAG 2.1 Standards and Accessibility Testing

The European Commission estimates that, counting everyone who has a “long-term physical, mental, intellectual or sensory impairment,” one in six people in the EU has a disability – some 80 million people. That is 80 million people who use our web pages, apps, programs and services in ways that are hard for a developer or tester to imagine, which makes accessibility perhaps one of the most difficult concepts for a tester.
Treating accessibility as an important quality standard in testing is beneficial in many ways. Firstly, when designing our products we usually try to make them as comfortable to use as possible, so it is only right that we extend the same courtesy to everybody. The population around us is ageing, so we can only expect the number of people using screen readers and other assistive technology to access IT services to grow.
Secondly, it is beneficial to the company: there are 80 million potential customers in Europe alone to be gained if your product is accessible, and guaranteeing accessibility is good for a company's public image.

But if doing the right thing and the potential benefits are not motivation enough, regulators will probably make it mandatory anyway. In the USA, for example, the Section 508 amendment to the Rehabilitation Act already sets standards for web pages, and in 2017 there were 714 lawsuits against different websites and web service providers over lack of accessibility, mainly claiming that those pages or services were in breach of the Americans with Disabilities Act. In Europe, web standards currently apply only to Europa web pages, but many governments, including Estonia's, are discussing enshrining accessibility standards in law.

So this matter is important today and will only become more important over time, which means testers need to be ready both to understand how to test accessibility and to make sure it is considered already in the development phase.

That is what my talk will do: I will present my research on the matter as well as practical examples from my own work experience of how to test accessibility and which standards are sufficient to claim that something is fully accessible.

Key takeaways:

  • What the WCAG 2.1 standards are and how to understand them.
  • Why assuring accessibility is important.
  • How to create a test strategy for accessibility.
  • How to keep your product accessible over time.
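
As one small, hedged illustration of what automated accessibility checking can look like (the talk's own examples and standards discussion may differ), the open-source axe-core engine can be driven from Selenium in Python; the browser choice and URL below are placeholders.

    # Rough sketch: run the axe-core accessibility engine against a page via Selenium
    # and list the violations it reports. Requires: pip install selenium axe-selenium-python
    from selenium import webdriver
    from axe_selenium_python import Axe

    driver = webdriver.Firefox()
    driver.get("https://example.com")        # placeholder URL

    axe = Axe(driver)
    axe.inject()                             # inject the axe-core script into the page
    results = axe.run()                      # run the audit inside the browser

    for violation in results["violations"]:
        print(violation["id"], "-", violation["help"])

    driver.quit()

Automated checks like this catch only a subset of WCAG 2.1 issues, so they complement rather than replace manual testing with screen readers and keyboard-only navigation.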

Learning From Bugs

Bugs are great learning opportunities. So how do we make sure we learn as much as possible from the bugs we find and fix? One way is to reflect on what made the bug hard to solve, and how we could avoid this type of bug in the future. For the past 15 years I have written down short descriptions of the trickiest bugs I have solved. I have included the symptoms of the bug, the steps I took to find it, the fix, and the most important lessons I learned from it. This simple method has helped me distill patterns that have influenced how I write, test and debug code.

In this talk I will share my experience with this method. I will give examples of tricky bugs I have encountered, and show what the corresponding bug entries look like. I will also present the most important lessons I have learned from going through over 200 such bug entries. The lessons include rules for effective testing, and useful debugging techniques and heuristics.

Key takeaways:

  • Present a simple technique to maximize learning from bugs, one that everybody can start using right away
  • Describe several testing rules and heuristics that have resulted from the bug entries
  • Detail the most useful debugging techniques I have learned for hard-to-find bugs

Choose Your Test Approach with the Help of Cynefin

Testing is a complex activity, involving many decision points and multiple possible approaches to the same end goal: ensuring that the most useful information about the software under test is provided.

At the same time, nature is also complex, and a source of inspiration for some very useful models, such as Cynefin. This sense-making, context-determining model is quite popular in the agile world and can also be a good candidate for testing.

Is test automation the best approach? Should automation alternatives be considered? Should I build or buy my testing solution? These are some of the questions that people involved in testing face time and again, and they become easier to answer with the help of the Cynefin framework.

My session reveals how Cynefin can be used to make sense of the testing context, thus helping to determine the most suitable testing approach.

Since Cynefin is not a static model, the session also highlights how the testing context changes or can be changed, and how the testing approach should be updated accordingly.

Key takeaways:

  • Awareness of the context challenges for testing
  • Introduction of the Cynefin framework
  • Mapping testing strategies to Cynefin complexity domains
  • An example for supporting the “no-automation” decision

Let’s Share the Testing!

In the world of DevOps and Continuous Delivery how can we, as Software Testers, adapt to continue to add value within our teams?

Within my cross-functional Agile team, the testing activity had become the bottleneck. The ‘To Do’ cards on my Kanban board were piling up. As the sole test specialist within my team I felt as if I was preventing us from being able to release code to our live environment. Frustrated, we got together as a team to discuss how we could fix this problem.

Our solution? I was going to share my exploratory and automated testing knowledge with the team. We were going to test throughout the development process. We could design test plans and discuss technical challenges together. We were going to collaborate on the testing effort. My role as the test specialist was going to evolve, which made me nervous.

In this talk I’m going to share how my team removed the testing bottleneck, increased productivity and started to become true cross-functional team members.

Key takeaways:

  • How sharing your testing knowledge can increase your team’s productivity.
  • How to encourage non test specialists to get involved in the testing effort.
  • How sharing your testing knowledge can improve communication within your team.

Dude, Why Don't You Test UX?

I have noticed that many companies have no UX designers, and products are created by developers without any clear idea of how they should look or work. I want to remind you that quality is not only accurate numbers in tables or correctly filled forms. Quality is also the overall look and feel of the product, so QA should work on making the product look and feel good (or at least better).

I will ask my audience some questions, e.g.:

  • How many of you know what UX is? (I will then give a definition of UX and explain the difference between UI and UX.)
  • How many of you have a UX designer on board?

 

First, I would like to tell the story of how I used to work with a UX designer and how I handled my work when there was no UX designer in the company at all. I will share some tricks on how to convince your developers and team leads to start improving the UX, and how I managed to get a UX designer on board, including:

  • Learn about good UI/UX;
  • Start writing UI/UX-related bugs;
  • Do some usability labs;
  • Use Paint (crop and drag!);
  • Question new features;
  • Keep a UI/UX hall of shame/fame;
  • Stop talking about "business-to-business applications" – we all want to use "Facebook" on a daily basis!
  • Don't get used to bad UX!
  • Grow a little UX designer in every developer!
  • Don't stop, even if you have a UX designer, as you are the one who uses the product every day!

 

Direct UX impact on your product:

  • For apps, UX usually affects the rating in the app store;
  • For web pages, UX can affect product sales on the site, the likelihood that users choose your site over a competing product, and more;
  • For desktop applications, it can take a lot of time to train your users if the UX is not intuitive, and users are more likely to choose a competitor's product over yours.

 

Key takeaways:

  • UX is important and can affect your product's ratings in the app store and overall user satisfaction.
  • If you have no dedicated UX designer, then QA should also cover this role.
  • Ways to draw your colleagues into UX, so that it is not only your wish but the whole team's purpose and target.
  • What you can do to make your product better on the UX side.

Vision Boards - Project Your Goals

How do teams share their understanding of common goals? It is either spoken or visual. Recording every conversation and storing the recordings (tagged) is not the most effective way to share common knowledge. Sketching is not new to agile teams; we take it a step further in the form of Vision Boards. A Vision Board is a creative visualization of your goals. While our focus in this talk remains on how teams can use the board, individuals use them to turn their life goals into reality: pictures or sketches of what they want, all pasted together on one board, constantly reminding them of their ultimate goals in the bigger scheme of things. These goals may not be achievable with a single task; they may need a series of tasks that do not seem directly connected to the goal. But the visualizations captured on the board are very good indicators of what success means to someone.

We used Vision Boards to visualize our customers' experience, their reactions, and the expected patterns of use for our application. This board single-handedly kept all our teams aligned: as changes happened, the teams knew their true north when discussing how to design the screens and which features to prioritise. Our already-agile teams were constantly looking at the short-term goals of prioritised features, but the Vision Board helped them reduce chaos and clutter and saved a lot of time in understanding the overall requirement – it also served as the basis for user stories.

Key takeaways:

  1. How to capture the common vision effectively.
  2. What is really important – what qualifies to go onto the Vision Board?
  3. How teams relate to the Vision Board: how to build it and how to keep referring to it.
  4. The benefits of using a Vision Board in a volatile environment.

The Age of Virtual Reality is Upon Us

The age of virtual reality is upon us. Just around the corner, the market is going to explode with companies geared towards providing their customers with customized VR experiences. Already we have Oculus, HTC Vive and Google Cardboard/Daydream, amongst others, and Apple has finally joined the race by introducing its VR development kit. Are you ready for this? In this talk, I will go over different automated testing techniques, the challenges we face, and how you can validate your VR content with existing technologies such as Selenium, Appium and image comparison. I will provide a live demo with code examples and cover the different tools to achieve this.

Key takeaways:

VR is a tricky thing to validate in a deterministic way. The techniques I cover will show you how this can be done in an automated way without having to test your VR application manually. With image comparison techniques and libraries such as FFmpeg, Selenium and Appium, we can validate our VR applications.

Attendees will come away with:

  • The difficulty of testing VR in a deterministic way.
  • What tools are available to us to help test VR applications.
  • How they can test VR applications in a deterministic automated way.
  • Code examples to get them started automating VR applications. 
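
To make the image-comparison idea concrete – purely as an illustrative sketch, not necessarily the tooling used in the live demo – a captured VR frame can be compared against a known-good reference with a pixel-difference tolerance; the file names below are placeholders.

    # Compare a captured frame against a reference image; fail if the share of
    # differing pixel-channel values exceeds a tolerance. Requires: pip install Pillow
    from PIL import Image, ImageChops

    def frames_match(reference_path, captured_path, tolerance=0.01):
        reference = Image.open(reference_path).convert("RGB")
        captured = Image.open(captured_path).convert("RGB").resize(reference.size)

        diff = ImageChops.difference(reference, captured)
        histogram = diff.histogram()                      # 256 bins per RGB channel
        total = sum(histogram)                            # width * height * 3
        unchanged = histogram[0] + histogram[256] + histogram[512]
        return (total - unchanged) / total <= tolerance

    assert frames_match("reference_frame.png", "captured_frame.png")

In practice the reference images would be captured once from a known-good build, and a tool such as FFmpeg can extract frames from recorded VR sessions for the same comparison.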

Doubt Builds Trust

In an uncertain world, your team wants answers. Project managers want to know when you can ship. Project owners want testing to be done. Developers want to know that you’ve caught all the bugs. Testers can find jobs getting paid to assure people of a product’s quality. But I don’t trust testers who always have confident answers to their team’s questions. Eventually a bug gets through, a deadline is missed, or a commitment is broken. Testing is not quality assurance. I trust the tester who expresses doubt. Doubt builds trust.

In my talk, I’ll explore how safety language, specificity, and nuance should color everything about the way we work. Testing software means engaging with uncertainty, and our communication should reflect that. Saying “I don’t know” can spark the beginning of a dialogue. Being able to admit the possibility of an unexplored path, an unknown interaction, or a fallible memory makes the difference between a team that moves forward and a team that stagnates by digging up evidence of mistaken certainty. We’ll also consider why it’s most important to say “I don’t know” in an interview, and how admitting doubt can help a tester find an environment where they can thrive.

Key takeaways:

I want to give testers the power to be vulnerable at work. Rather than staying silent, testers can start admitting what they don’t know in order to get better explanations for themselves. If they’re already a pro at this, they can encourage their teammates to voice their questions and foster an environment where doing so is welcomed. When attendees get back to work, I want them to be able to:

  • Approach situations with humility and a desire to learn.
  • Refrain from mocking or judging those who know less than they do.
  • Consider the privileges that allow them to gain knowledge others haven’t.

Build Your Own Internet of Continuously Delivered Things

We all know that a modern tester should have analytical skills and basic programming knowledge as well as soft skills. If we add experience in agile software development, we get an almost perfect candidate in the eyes of employers. However, what if one day your employer says:

  • "Our sensor has recently been running for too short a time. Please do battery life tests."
  • "The current functionality of our product will soon be extended. Please write some tests for the embedded platform."
  • "Clients are complaining that our system works too slowly. Please prepare the test environment for performance tests of the microprocessor memory."

Nowadays, everyone is trying to deliver high-quality products in a continuous way. But how do you test an application for your customer with so many tools, so much equipment, and ecosystems combining HW, FW, mobile devices and a complex backend architecture? Over the past few years, I have gone through the transformation from a "QA Team" mindset to a "Dev/TestOps" mindset three times in IoT companies.

During my presentation, I'll show you how to implement a multi-node system (HW/FW/SW) in a Continuous Integration and, finally, Continuous Delivery environment. I will start with the big picture of a centralized test environment; after that, each component will be analyzed separately to present possible solutions and implementations. I will also show how to involve manual testers and let them actively participate in the evolution of the environment. All of this is supported by Jenkins, Python, a BDD approach, and some HW knowledge.

Key takeaways: 

  • A general understanding of Continuous Delivery in the context of IoT products
  • A real-world implementation of a centralized test automation infrastructure
  • Understanding that (because of real devices) not everything can be dockerized or moved to the cloud
  • An overview of how manual testers can participate in building the CI environment
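
For flavour only – the device, serial port and text protocol below are invented for this illustration, not taken from the talk – a BDD-style, hardware-in-the-loop check in Python might look like this, using behave for the steps and pyserial for the device link.

    # Hypothetical behave step definitions for a sensor node reachable over a serial port.
    # Requires: pip install behave pyserial
    import serial
    from behave import given, when, then

    @given("the sensor node is connected")
    def step_connect(context):
        context.device = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=2)

    @when("I request the firmware version")
    def step_request_version(context):
        context.device.write(b"GET_VERSION\n")
        context.response = context.device.readline().decode().strip()

    @then('the reported version is "{expected}"')
    def step_check_version(context, expected):
        assert context.response == expected, f"expected {expected!r}, got {context.response!r}"
        context.device.close()

Because steps like these need real hardware attached, they typically run on Jenkins agents with physical devices rather than in containers – which is exactly the "not everything can be dockerized" point above.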

The Road To QA - From Developer To Test Automation Engineer

In my career as a software developer I have seen a lot of QA talent transition towards test automation - some of them even turning into developers. My journey was the other way around - from a developer into test automation. In this talk I want to show why this was a natural choice for me, what I learned along the way and why it changed my perception of developers and QA alike.

Key takeaways: 

  • How being a Test Automation Engineer combines the developer and QA roles
  • How to not lose sight of coding
  • How questioning one's career path can be a huge benefit 

9 Ways to Test Your Spaghetti Code

“Test the legacy code as well” has been a mantra for many years now. But how do you actually do that? When stuck with tangled legacy-spaghetti, it may be hard to see the way out. The path from struggling with your spaghetti into doing TDD is shorter than you think.

It's so easy to say that you should test code as you change it, no matter how legacy, but in a real-world project you need to know some tools and techniques to be able to do that.

Many developers out there struggle with the impression that testing and TDD cannot work on their project. I’ll challenge that view and hopefully prove it wrong for most participants, and share the techniques and tricks I’ve used.

Key takeaways: 

  • Good design improves testability
  • You can get legacy code under test, although you sometimes must make some trade-offs
  • TDD is suitable for a legacy project
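
One commonly used first step – offered here only as a generic illustration, not as the speaker's specific technique – is a characterization test: pin down what the tangled code currently does before refactoring it, so any behaviour change is caught. The calculate_invoice function below is an invented stand-in for real legacy code.

    # Characterization test sketch: the expected values are recorded from the current
    # implementation, not derived from a specification.
    import pytest

    def calculate_invoice(order):        # imagine this buried deep in a legacy module
        total = order["items"] * 10.0
        if order.get("vip"):
            total *= 0.9
        return total

    @pytest.mark.parametrize("order, expected", [
        ({"items": 3, "vip": False}, 30.0),
        ({"items": 3, "vip": True}, 27.0),
        ({"items": 0, "vip": False}, 0.0),
    ])
    def test_characterize_current_behaviour(order, expected):
        assert calculate_invoice(order) == pytest.approx(expected)

With the current behaviour locked down like this, the code can be refactored towards a testable design without silently changing what it does.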

What To Look For in a Test Automation Tool

When my team had to pick an API testing tool, we went with Postman. And the more I used it, the more frustrated I became with it. Both maintaining the tests and figuring out why tests failed (especially when run in a CI pipeline) took too much effort. So I decided to write my own tool. Or rather, I set to work with pytest and the requests library, and ended up with a framework of just 300 lines of Python code that fit our needs.

The core idea behind my framework is that I don’t want a testing tool that only focuses on making it easy to build tests. It should also make it easy to analyse the test results. It should also make it easy to read and understand the tests, to share them, and to maintain them. Because the thing is, during most of those activities I don’t want to be aware of the tool. I want to focus on the tests and what their results tell me about the quality of the product. That’s the key idea here: only at very specific times do I want to focus on the tool as a tool. At all other times it should just help me test better.

In my talk I will use my framework to illustrate these ideas and expand on them. I will also show how these ideas apply more generally than only to API testing frameworks. Since you’re only as good as your tool and a tool is only as good as its user, being able to evaluate testing tools is an important skill for any tester.
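
As a minimal sketch in that spirit – the base URL, endpoint and fields below are placeholders, not the speaker's actual 300-line framework – a pytest-plus-requests test can keep both the test and its failure output readable:

    # Minimal pytest + requests sketch; URL and fields are placeholders.
    import pytest
    import requests

    BASE_URL = "https://api.example.com"     # placeholder

    @pytest.fixture
    def api_session():
        with requests.Session() as session:
            session.headers.update({"Accept": "application/json"})
            yield session

    def test_get_user_returns_expected_fields(api_session):
        response = api_session.get(f"{BASE_URL}/users/1", timeout=5)

        # Explicit assertions so a failure reads like a test result, not a puzzle.
        assert response.status_code == 200
        body = response.json()
        assert "id" in body
        assert "name" in body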

Behind the looking glass: A meta-talk on the reality of creating embedded test ecosystems for ASIC HW Functional Verification

Testing is often perceived by product developers as a non-creative activity.

These developers are surprised when they discover the amount of work that goes into creating, growing, and maintaining the testing ecosystem. For the functional testing of complex ASIC devices this ecosystem involves almost all aspects of new product development: hardware, electronics, firmware, software, tools, architecture, reverse engineering, hacking, and innovation, as well as all the usual soft skills for teams and team members to communicate and work well together. The ecosystem's components follow typical product lifecycles just as the 'customer' products do.

In fact, the development of embedded testing ecosystems mirrors most aspects of 'customer' product development.

The surprise of the uninitiated developers, when they realise what is going on behind the scenes, would have been charming and amusing had it been only a rare occurrence. Unfortunately it is so regular that it is time to raise awareness: embedded testing is complex, it is creative, and it can even be liberating from the constraints of 'regular' product development.

In this talk I will challenge the negative perceptions of this interesting field and attempt to dispel them. I will use real-life examples drawn from my experience, and I will explain how my team has organised our embedded testing ecosystem. I hope this talk shows that embedded test development, when done correctly, can be just as complex and rewarding as any product development.

Key takeaways:

  • Developing embedded test systems is creative and rewarding.
  • The development of embedded test systems is equivalent to customer product development.
  • A presentation of different components involved in an ASIC embedded functional verification ecosystem

Driving Higher Quality with Devops Initiatives

Plenty has been written about the roles of testers and QA in the ever-changing world of developers, devops, SRE et al. Devops practices are here to stay, and the demanded time to market keeps shrinking. Still, high quality is what will keep your product relevant.

Sigge will describe how devops initiatives can affect the culture of quality amongst development teams, driving high quality software delivery. 

Key takeaways: 

  • How devops and quality work fits in the bigger development context
  • How to get developers more aware and caring about quality
  • The importance of all kinds of automation in a fast-paced future

Testing Backend API from Mobile QA Perspective Using Rest Assured

From the position title "Mobile QA" you might think that you only have to deal with testing the applications themselves. Sometimes this is true – when you have a purely offline app, or your app works solely with a third-party API. My experience shows that most apps talk to a backend, and it will be your responsibility to make sure it works as expected. There are numerous tools for API testing; Rest Assured is only one of them, and the one I currently use. I will tell you about the challenges that can occur during backend API testing and what results mobile app developers expect from you.

Key takeaways: 

  • Why mobile QA should test the backend
  • The key points to check when writing API tests
  • Using Rest Assured for API testing

Security by Stealth

Security isn’t very fun for development teams to think about. It’s complex, and it rarely comes to mind when considering requirements. Too often it is neglected by teams and left to the end for penetration testers to consider. But it doesn’t have to be: security can be considered early in the development cycle. How can we encourage this behaviour? How can you get development teams interested?

Security is an important skill to possess while delivering quality software. The cost of not having security skills within teams is now more obvious than ever, and security should be at the forefront of development teams’ minds. Even with these risks, data leaks and denial-of-service attacks are in the headlines all too often. How do we stop our companies from becoming another statistic?

Learning should not be compulsory, especially if you want something to become part of the culture. Starting with a simple workshop and expanding to a security guild, people were eager to be involved. This led to further workshops, ranging from the basics of threat modelling with STRIDE to the complexity of automated checks. Security at Sky became not only fun but cool – no longer a rarely considered requirement but a frequently considered need.

Accessibility Assumptions and Arguments

There is a massive assumption in software development that accessibility = disability. I’ll dispel that myth with information, examples and practical tips showing how our assumptions are potentially costing us customers, making interactions harder, and how the whole population has accessibility ‘issues’ with the applications we are building.

Key takeaways: 
Attendees will learn about accessibility assumptions – mostly false – and how they cloud the small amount of attention we give the topic; that accessibility actually affects approximately 90% (yes, 90%) of all the ‘users’ who visit your site or use your app; and some common mistakes we make when designing sites and applications.

You will also hear arguments against accessibility and how to counter them if you encounter them (and you shouldn't, because considering accessibility is the right thing to do!), as well as the 'Accessible to All' quadrants, which put accessibility in a new light of inclusion – looking at readability, inclusive language and usability, with compliance to the Web Content Accessibility Guidelines as only a part rather than the goal.

Why using inclusive design to ensure your site or app is accessible is the right thing to do, did I mention that?  Because it is!

Attendees will take away quite a few tips they can apply the next day to improve the reach and impact of their sites and applications, and testing they can do immediately to ask the right questions and improve quality the very next day.

Listen Beyond the Pass/Fail - Analysing Test Results over Time

Does this sound familiar? Each morning when I got into the office and opened my mailbox, I would find a report of the nightly test run – mainly Selenium and TestStack White run through TeamCity on dedicated test agents. Most days at least something had failed, and most days we didn’t worry too much about it. After a while, a voice in the back of my head started nagging me that there was a pattern, but I couldn’t pinpoint it by looking at each individual test run.

When I put all the historic data into a model and started twisting and turning it, a number of patterns stood out clear as day, and by using them I could start working on long-term continuous improvement instead of putting out fires.

During this talk we will look at a number of perspectives you can explore and how they might give you important insights (a rough analysis sketch follows the list):

  • Tests that are always green: Do we need them? Why are they never failing?
  • Tests that are always failing: Do they add value? Can we remove them?
  • Tests that fail a lot: Is there an underlying issue? Are we addressing the problem, or do we just re-run them locally and blame the environment?
  • Are the tests run when we need them, or are we running them nightly because we see no other option? Can they be sped up to a point where they can provide a faster feedback loop? No? Are we sure?
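
As a rough sketch of how such patterns can be surfaced from historical results (the CSV layout with test_name, run_date and a 0/1 passed column is assumed for illustration):

    # Mine nightly results for always-green, always-red and flaky tests.
    # Requires: pip install pandas
    import pandas as pd

    runs = pd.read_csv("nightly_results.csv", parse_dates=["run_date"])

    pass_rate = runs.groupby("test_name")["passed"].mean()

    always_green = pass_rate[pass_rate == 1.0]     # never fail: what are they protecting?
    always_red = pass_rate[pass_rate == 0.0]       # never pass: are they adding value?
    flaky = pass_rate[(pass_rate > 0.0) & (pass_rate < 1.0)].sort_values()

    print("Flakiest tests:")
    print(flaky.head(10))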

 

We will also address the problem of trying to use the same collection of tests to serve multiple competing needs: fast feedback to developers about recent changes, feedback to testers on where to explore, feedback to the team about releasing to production, and feedback to managers who want to know they are spending their money in the right places. Can it be done? Should it? Are there alternatives?

Key takeaways: 

  • Looking at your test results over a longer time period will give you additional insights
  • What you can look for
  • Trends can be analyzed for continuous improvement

Utilizing Component Testing for Ultra Fast Builds

A software architecture best practice is to design your applications as independent modules or components, with a published contract for interaction between them. This is a principle of the popular microservice-style architecture, but it also applies to components created within a large monolith or with the out-of-the-box patterns that front-end libraries promote.

If we are able to test the functionality of each component independently, and build up enough trust that those components work, this opens the door to rethinking our Continuous Integration and Continuous Delivery strategy, potentially reducing the need for long test suites and many environments. It also prompts a rethink of the unit testing strategy and the test pyramid.
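
To illustrate the idea in miniature – a hypothetical sketch, not an example from the talk – one component can be exercised in isolation by replacing its collaborator with a stub that honours the published contract:

    # PricingService and its inventory collaborator are invented for this illustration.
    from unittest.mock import Mock

    class PricingService:
        def __init__(self, inventory_client):
            self.inventory = inventory_client

        def quote(self, sku, quantity):
            if self.inventory.stock_level(sku) < quantity:
                raise ValueError("insufficient stock")
            return quantity * self.inventory.unit_price(sku)

    def test_quote_uses_only_the_published_contract():
        inventory = Mock()                   # stands in for the real inventory component
        inventory.stock_level.return_value = 10
        inventory.unit_price.return_value = 2.5

        assert PricingService(inventory).quote("SKU-1", 4) == 10.0

If both sides also verify the contract itself (for example with consumer-driven contract tests), the slower end-to-end stages of the pipeline can shrink accordingly.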

In this talk, Tim Cochran will walk through the different kinds of component testing, show working examples and advise when to apply them. He will also cover what this might mean for your organization's broader testing strategy.

Key takeaways: 

  • Learn about component testing strategy
  • Worked examples of testing front-end and back-end components
  • Adjusting your CI/CD pipelines to take advantage