Diversity, Leadership and Innovation: EuroSTAR 2014 conference takeaways

Date posted
18 December 2014
Reading time
22 Minutes
Marek Weihs

EuroSTAR is probably the biggest testing event in Europe. The conference has taken place every year for over 20 years now, each year in a different host city. This year the 22nd EuroSTAR Conference took place in Dublin (24-27 November), and its key themes were diversity, innovation and leadership. It was easy to notice something was going on: the city was crowded with conference attendees, easy to recognise as they were all wearing the EuroSTAR-branded shoulder bags that came in every attendee's welcome pack. This year I had the pleasure of being one of the four Kainos delegates sent to the conference. Below you will find a brief overview of some of the conference talks I had a chance to attend, along with some key conclusions and lessons learnt.

The Internet of Things

This lecture was about the devices and little things surrounding us in our everyday life, and about the connections and networks those devices create and join in order to communicate and cooperate: the Internet of Things. Be it a mouse trap able to send you a text message to let you know another poor mouse has just said goodbye to this world, so it's time to reset the trap; a ferry tweeting you when it arrives at or leaves the ferry port; or an intelligent house able to save you some money on energy. All those things make people's lives easier, but at the same time they make testers' lives much more difficult. The new ways of using things, and the ease with which they can be connected into the Internet of Things, bring a lot of new challenges for testers and for testing as such. We need to stay aware and get prepared for the changes and the new challenges coming if we want our testing to stay efficient and relevant.

Adapting automation to the available workforce

Having dedicated test automation resources in each team isn't always possible, but the test automation tasks still need to be delivered. This is especially important for Agile projects (but not limited to them). If you want to keep your manual regression suite at an acceptable and, more importantly, reasonable size, test automation is a must for your project. The only problem is that your testers will not always be able to write the automated tests themselves.

One idea for tackling the problem of delivering automated tests with limited resources is to split the workload so that each piece of work is handled by the people with the best available skillset. Automated testing has two key ingredients: the testing framework and the automated tests themselves. The idea is to let the developers create the testing framework (after all, building a testing framework doesn't differ much from building any other framework, and this is something the dev guys are best at) and then let the testers use it to create the automated tests. Assuming the framework is properly written (using a Domain Specific Language approach and exposing the problem domain rather than the implementation domain) and that the testers are given some time to learn, it should be relatively easy for them to switch from manual testing to automated testing. There are a few characteristics the framework needs to have to make this possible (like simplicity, consistency and intuitiveness), but that's something achievable. A sketch of what such a split could look like follows.
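To make the split more concrete, here is a minimal sketch of the two layers in Selenium WebDriver (Java). All the names, pages and locators are hypothetical, invented purely for illustration; the talk didn't prescribe a particular shop or API, only the shape of the division of labour.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Framework layer, owned by the developers: it speaks the problem
// domain ("log in", "add to basket") and hides the implementation
// domain (locators, clicks, WebDriver plumbing).
class ShopDsl {
    private final WebDriver driver;

    ShopDsl(WebDriver driver) {
        this.driver = driver;
    }

    ShopDsl logInAs(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login")).click();
        return this;
    }

    ShopDsl addToBasket(String productName) {
        driver.findElement(By.linkText(productName)).click();
        driver.findElement(By.id("add-to-basket")).click();
        return this;
    }

    boolean basketContains(String productName) {
        return !driver.findElements(By.xpath(
                "//ul[@id='basket']/li[text()='" + productName + "']")).isEmpty();
    }
}

// Tester layer: a test written against the DSL reads almost like the
// manual test case it replaces - no WebDriver knowledge required.
public class BasketTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://shop.example.com");  // hypothetical app
            ShopDsl shop = new ShopDsl(driver);
            shop.logInAs("alice", "secret")
                .addToBasket("Agile Testing");
            System.out.println("Basket contains the book: "
                    + shop.basketContains("Agile Testing"));
        } finally {
            driver.quit();
        }
    }
}
```

Notice how the fluent, domain-level methods deliver the simplicity, consistency and intuitiveness mentioned above: the tester only ever deals with users, products and baskets, never with an XPath.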
Such an approach brings two key benefits: it softens the learning curve for QAs (who often come from a manual testing background), and it is an invaluable aid in cross-skilling the team, giving the devs an insight into the testers' work and the craft of testing. Each team member can focus on what they do best, and your automated tests still get delivered. It's not always possible to get a team full of test automation experts, but by adapting to and using the resources you have available, you can still be successful in delivering the automated tests for your project.

Testing the new web: HTML5 challenges for Selenium test automation

The introduction of HTML5 was a great gift for developers. It brought a lot of new possibilities and significantly enriched the final look of web apps. It made it possible to create powerful web applications, reduced the need for external plugins, and took a number of traditionally offline applications online, bringing the desktop feeling to the web. However, the introduction of new features usually means a challenge for the existing testing frameworks, and HTML5 seriously stretches their known capabilities. The introduction of HTML5 doesn't really change much when we consider manual testing, but when it comes to automated testing, and Selenium in particular, there are a number of new challenges to face and a number of new UI controls to handle. Here are some of the new controls introduced with HTML5 (a short Selenium sketch showing how some of them might be handled follows the list):
  • <meter> tag - a control showing the progress of the currently running process. The things we may want to verify could, for instance, be the current progress or whether the process is complete (100% progress);
  • <details> tag - a control that can be expanded to show some additional details (a kind of dropdown control), allowing you to easily control the amount of information presented on the page. From the automation point of view we may be interested in methods to expand or collapse the details section, get the text being displayed, or simply find out whether the control is currently expanded;
  • Drag & drop native support - although the drag and drop action is now natively supported by HTML5, this new variant is not supported by Selenium (at least not yet). If we want to simulate the action from within our automation framework, one possibility is to use JS hooks;
  • <video> tag - HTML5 introduces an easy way to embed video files into your web page, and at the same time it brings a new challenge for your automated tests. You can do some simple checks on the source file or the video duration, which is fair enough if you're happy with basic checks. If you want to validate the video itself, things become a little trickier though;
  • <canvas> tag - enables graphical drawing on the fly, which is a pretty cool feature for the user but also one of the biggest challenges for your test automation. The thing is, the only thing you're able to retrieve from the source code of the page is the <canvas> tag itself and nothing more. Selenium seems to be helpless in this case, and you will probably need to look for help from external tools (e.g. Sikuli).
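Here is the Selenium sketch promised above (Java). The page and element IDs are hypothetical, and the drag-and-drop part uses a commonly suggested JS-hook workaround - dispatching the HTML5 drag events with a mock dataTransfer object - rather than any official Selenium API.

```java
import java.io.File;

import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class Html5ControlsSketch {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        JavascriptExecutor js = (JavascriptExecutor) driver;
        driver.get("http://example.com/html5-demo");  // hypothetical page

        // <meter>: read the current value and compare it with the max
        // (assuming both attributes are set on the element).
        WebElement meter = driver.findElement(By.tagName("meter"));
        boolean complete = meter.getAttribute("value")
                .equals(meter.getAttribute("max"));
        System.out.println("Process complete: " + complete);

        // <details>: the 'open' attribute tells us whether it is
        // expanded; clicking the <summary> child toggles it.
        WebElement details = driver.findElement(By.tagName("details"));
        boolean expanded = details.getAttribute("open") != null;
        System.out.println("Details expanded: " + expanded);
        details.findElement(By.tagName("summary")).click();

        // <video>: basic checks only - the source file via the DOM,
        // the duration via JavaScript.
        WebElement video = driver.findElement(By.tagName("video"));
        String src = video.getAttribute("src");
        Object duration = js.executeScript("return arguments[0].duration;", video);
        System.out.println("Video " + src + " lasts " + duration + "s");

        // HTML5 drag & drop: fire the drag events from JavaScript with
        // a mock dataTransfer, since WebDriver doesn't trigger them.
        WebElement source = driver.findElement(By.id("drag-me"));
        WebElement target = driver.findElement(By.id("drop-here"));
        String dnd =
              "var dt = { data: {},"
            + "  setData: function(k, v) { this.data[k] = v; },"
            + "  getData: function(k) { return this.data[k]; } };"
            + "function fire(type, el) {"
            + "  var e = document.createEvent('CustomEvent');"
            + "  e.initCustomEvent(type, true, true, null);"
            + "  e.dataTransfer = dt;"
            + "  el.dispatchEvent(e);"
            + "}"
            + "fire('dragstart', arguments[0]);"
            + "fire('drop', arguments[1]);"
            + "fire('dragend', arguments[0]);";
        js.executeScript(dnd, source, target);

        // <canvas>: the DOM reveals nothing about what was drawn, so the
        // best Selenium alone can do is take a screenshot and hand it to
        // an external image-comparison tool (e.g. Sikuli).
        File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        System.out.println("Canvas screenshot saved to " + screenshot.getPath());

        driver.quit();
    }
}
```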
When the Change Programme goes wrong

There is no such thing as standing still in business. The existing processes need to be reviewed constantly, decisions need to be questioned, and opinions need to be shared. We're living in the era of change: 'if you're not moving forward, you're going backwards'. Although change is needed and a good thing in general, sometimes an improvement brings bad outcomes and unexpected results and makes your change programme fail. Change programmes may fail even if they start with success. Fortunately, there is always a way to recover. What's worth keeping in mind is that it is usually one thing that failed, not the whole change programme. And even when change programmes fail, they still do some good.

Diversity as a driving factor for innovative leadership

Imagine you are the Test Team Leader at an organisation that doesn't care much about testing. They have hired you, so testing is now your problem and your responsibility. What's even worse, there is pretty much no communication between the business and the testing team. You are the test lead, so you're expected to know the job. How do you make testing effective, important and appreciated again under such limiting and challenging circumstances? What about the team? How do you keep them motivated and, what's even more important, how do you make them like their job again? Here are some tips that could do the trick in the situation described above:
  • Make learning fun - encourage asking and answering questions using the lollipop bucket (yes, a lollipop bucket). Place a bucket full of lollipops on your desk. Any time anyone has a question, they stick a post-it note to the bucket and get a lolly. Any time someone knows the answer to a question, they do the same and get a lolly. As easy as that, but it works, and it's quite fun at the same time.
  • Make knowledge sharing easy - equip your office space with a big (and I mean big) whiteboard, place it in the centre of the office, make it easily accessible, and let it be a place to exchange ideas, discuss problems, and stick up interesting articles or some funny stuff. You will be surprised by the effects and benefits this simple solution can bring.
  • Add a competition factor - nothing motivates better than a bit of competition. Make a list of improvements and little tasks to accomplish (they should be relatively easy to achieve) that will make the testing process better, improve your workspace, or improve the way your team members communicate with each other. Put a score against each of them. Let your team draw the tasks, so they don't know what the task and the score are beforehand. Summarise the scores for accomplished tasks on a regular basis (e.g. once a quarter) and announce the winner. Make sure there is some little prize to be won. Even though it's a simple approach, it works wonders.
  • Make knowledge easily accessible - an onsite library with up-to-date books on work-specific topics, online training platforms, knowledge sharing sessions: all of these make the learning process easier and more effective. Make sure your team has access to some of them, so they can develop and improve their skills.
  • Make the workspace inviting - to stay productive and prevent creative stagnation, craft your workspace into a place that you and your team will be excited to work in every day.
Traps to avoid in Agile testing

Many teams fail on their first attempt when moving to Agile. There are usually many reasons behind the failure, but very often those reasons are related to testing. Below you will find a list of typical Agile testing traps and some tips on how to avoid them.

Trap #1: Waiting for the builds

Problems:
  • No build available for testing
  • Testing is done in the next Sprint
  • Stories cannot be marked Done
Risks:
  • The mini-waterfall effect
  • Stories aren't tested completely
  • Testers lose credibility
  • Team changes the meaning of Done
Tips:
  • Make Continuous Integration mandatory
    • Understand your CI tool
    • Know your build pipeline
    • Plan for your test infrastructure
  • Collaborate
    • Include testing tasks in team velocity
    • Get your developers used to immediate feedback from testers
    • Test where it makes sense
Trap #2: Testers aren't considered part of the team

Problems:
  • Testing not included in velocity
  • Testers don't understand the stories
  • Testers don't actively participate
Risks:
  • Wrong assumptions are being made
  • Impacts to the system are found late
  • Team members' skills are not used effectively
Tips:
  • Get the dev team used to frequent feedback from QA
  • Involve testers in discussions on the stories
  • Include testing tasks in team velocity
  • Get the devs and the testers seated together
  • Encourage testers to actively participate in all the sprint meetings
Trap #3: Quality Police mindset

Problems:
  • All bugs reported in Defect Tracking System
  • No discussions between testers and devs
  • Testers are a separate team
Risks:
  • Team doesn't try the 'build quality in' concept
  • All the communication is through the DTS only and not face-to-face
  • Devs use testers as a safety net
Tips:
  • DoD includes testing
  • Consulting mindset instead of Quality Police
  • Make testers technology aware
  • Have all team members (not only testers) pick up testing tasks
Trap #4: Trying to test everything manually

Problems:
  • Making excuses not to attend the meetings
  • Regression not being run regularly
  • Spending time re-testing what has already been tested
Risks:
  • Not testing the new functionalities
  • Testers don't contribute to implementation / design decisions
  • Testers are not keeping up with the new stories
Tips:
  • Automate as you go
  • Design for testability
  • Make automation maintainable: tests pass all the time (green runs), coding standards are followed, and you know where to find your tests
Trap #5: Forgetting the big picture

Problems:
  • Testing only the individual stories
  • Testing is based on what Devs code
  • Devs add extra code after the implementation is done
Risks:
  • Integration bugs found late
  • Not seeing or understanding the bigger picture
  • Testing finds requirement-type bugs
Tips:
  • Understand the story and its wider context before you start coding / testing
  • Use diagrams / mind maps for better understanding
  • Use Acceptance Test Driven Development (see the sketch after this list)
  • Define Done on Epic / Feature level
  • Define Done on Story level
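As a small, hedged illustration of the ATDD tip above (the story, the numbers and the class names are all invented; the talk didn't prescribe a particular tool), the acceptance tests are written from the story's acceptance criteria before any production code exists, and the implementation is then grown until they pass:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical story: "a returning customer gets a 10% discount on
// orders over 100 EUR". In ATDD these tests are agreed with the whole
// team and written first; the minimal Order class below is the
// implementation added afterwards to make them pass.
public class ReturningCustomerDiscountTest {

    @Test
    public void returningCustomerGetsDiscountOverThreshold() {
        Order order = new Order(true);   // returning customer
        order.addItem("headphones", 120.00);
        assertEquals(108.00, order.total(), 0.001);
    }

    @Test
    public void newCustomerPaysFullPrice() {
        Order order = new Order(false);  // first-time customer
        order.addItem("headphones", 120.00);
        assertEquals(120.00, order.total(), 0.001);
    }
}

class Order {
    private final boolean returningCustomer;
    private double sum;

    Order(boolean returningCustomer) {
        this.returningCustomer = returningCustomer;
    }

    void addItem(String name, double price) {
        sum += price;
    }

    double total() {
        // 10% off for returning customers on orders over 100 EUR.
        return (returningCustomer && sum > 100.0) ? sum * 0.9 : sum;
    }
}
```

Because the expected behaviour is pinned down at story level before coding starts, the team also gets a concrete, executable definition of Done for the story.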
Summary

The conference was not only about knowledge sharing, but also about networking (we met a lot of interesting people there) and entertainment ;). The attendees were given an opportunity to visit Trinity College, have dinner at Croke Park (the third biggest stadium in Europe, which hosts Gaelic football and hurling games) while attending the EuroSTAR Awards event, or explore the Guinness Storehouse (not to mention the pub crawl through the most famous and well-known Dublin pubs). It was a really fantastic and exciting time that we spent in Dublin, and there were also a number of lessons we learnt there. Thank you, Kainos!

About the author

Marek Weihs