My first year of retirement has passed so quickly and, of course, has been filled with different activities than when I was working part- or full-time. Keeping the tradition going, though, I’m taking the opportunity to reflect on the year that was 2025.
Vital statistics
As I mentioned at the end of 2024 when I essentially retired from the testing industry, I expected my activity on this blog to reduce considerably – and I only published 4 blog posts this year (including this one).
Surprisingly, my total views for 2025 were still around the annual average since I started blogging here in 2014. For the first time in many years, I didn’t critique the World Quality Report and those annual posts had been some of my most popular.
While I still have a presence on Twitter/X (and closed out the year with almost 1,200 followers), I’m no longer posting on X and never visit my feed there. I’m still pretty active on LinkedIn, which is where most of the action around testing now seems to occur anyway.
Writing my first Status Quo book
Many of you probably know that the British rock band Status Quo have been a lifelong passion (and somewhat reflected in my X and WordPress handle, therockertester!). I’ve run one of the most popular Quo websites for more than twenty years – Access All Areas – and maintain the best online gig history for the band there.
I’ve toyed with the idea of writing some kind of Quo book for many years. With more free time in retirement, 2025 became the year when the idea came to fruition! I’ve spent most of the year working on a detailed history of the band’s tours in Australia and New Zealand (of which there were 14 across a period of almost 45 years). It’s been a long process, but very enjoyable. The content is basically finished now and I’m hopeful of going to print early in 2026 (as an A4-sized full-colour hardback tome).
Work life
I haven’t engaged in any testing-related work this year, but have continued some one-on-one mentoring relationships. After a long break, I’m looking forward to sharing my testing knowledge and experience to help a Melbourne organization in an advisory capacity early in 2026 (and I remain open to other gigs like this).
Testing books
I was delighted to be involved in Taking Testing Seriously, the epic new testing book from James Bach and Michael Bolton.
I contributed chapter 20 “From RST to AST” in which I describe my personal experience with the Rapid Software Testing approach and how it led to my deep involvement with the testing community and particularly the Association for Software Testing.
I hope this book reaches the broad audience it deserves – it’s beautifully crafted and treats testing in a way that no other textbook has ever done.
Volunteering for the UK Vegan Society
I continued with my volunteer work for the UK’s Vegan Society by contributing mainly to their web research efforts.
The process of building a completely new website for the Society continued this year and most of my efforts involved testing it. It was good to be “hands on” and to provide value to the organization using my existing skillset. I also tested new versions of the VeGuide mobile app.
I published three new blog posts for the Society in 2025.
I enjoy blogging on veganism – it utilises my writing skills and feels like a good fit in terms of vegan activism for me.
As a result of my travel blogs for the Vegan Society, I was interviewed for the World Vegan Travel podcast. I shared my tips for travelling as a vegan in and around Melbourne in this enjoyable interview with Brighde Reed.
Coffee blog
With more spare time in retirement, I started another blog – this time about coffee! I enjoy a great oat latte and my blog’s name reflects this, In Search of the Perfect Oat Latte. I blog about every new coffee place I try and I also post similar content on Instagram @theperfectoatlatte.
Reading
I’ve enjoyed using my extra free time to read a lot in 2025, again largely thanks to the great service from Geelong Regional Libraries.
Looking back on my review of 2024, I mentioned that Rolf Dobelli’s plea in his excellent book “Stop Reading The News” was one of the most impactful reads of that year. I’m happy to say that I have successfully broken my addiction to following the news and would strongly recommend doing the same – there really is no downside.
I read 33 non-fiction books and two works of fiction (the latter by a local author who literally lives a few doors down on our street!). In terms of themes, I found myself heavily into AI, veganism, vaccines and the COVID pandemic response.
My most impactful reads were pretty diverse this year.
Will Guidara’s excellent Unreasonable Hospitality was really inspiring. His approach to leading people and organizations is so refreshing and, while it’s a story based around running restaurants in New York, his ideas are of great value to anyone who’s tasked with creating a great place to work. (I blogged about Will’s book here.)
The Age of Surveillance Capitalism (by Shoshana Zuboff) was insightful and a welcome reminder to keep pushing back on the encroachment of tech into more and more aspects of my life. And, yes, I’m one of those people who still uses cash whenever I can – central bank digital currencies are just around the corner if we don’t resist, so ask yourself whether you really want programmable money (when you’re not the one writing the programs…).
Dissolving Illusions (by Suzanne Humphries and Roman Bystrianyk) is an important book, challenging the historical narrative around vaccines. It’s well worth a read for people new to the medical freedom/vaccine space, especially those open-minded enough to accept the possibility that they’ve been lied to by the medical profession.
Controligarchs (by Seamus Bruner) was a great read. While I consider myself well-informed when it comes to many of the topics covered by Seamus, he constructs a compelling narrative and backs it up with a lot of research (and amazingly deep “following the money” threads). The book serves as a good wake-up call for anyone who still thinks “philanthropists” and worldwide organizations (WEF, WHO, etc) are actually trying to help us.
My reading for the year is detailed below:
Non-fiction
Very Bad People (Patrick Alley)
McMafia (Misha Glenny)
Toxic (Richard Flanagan)
Code Dependent (Madhumita Murgia)
Chill & Prosper (Denise Duffield-Thomas)
The Internet Is Not What You Think It Is (Justin E H Smith)
The Coming Wave (Michael Bhaskar and Mustafa Suleyman)
Braving The Wilderness (Brene Brown)
Dissolving Illusions (Suzanne Humphries and Roman Bystrianyk)
Unreasonable Hospitality (Will Guidara)
Futureproof (Kevin Roose)
AI Needs You (Verity Harding)
Follow The Science (Sharyl Attkisson)
The Age of Surveillance Capitalism (Shoshana Zuboff)
Australia’s Pandemic Exceptionalism (Richard Holden and Steven Hamilton)
The Art of Bleisure (Emma Lovell)
AI 2041 (Chen Qiufan and Kai-Fu Lee)
Techno Feudalism (Yanis Varoufakis)
Upstream (Dan Heath)
Brave New World (Aldous Huxley)
When The Body Says No (Gabor Mate)
Ikigai (Francesc Miralles and Hector Garcia)
You Look Like a Thing and I Love You (Janelle Shane)
Controligarchs (Seamus Bruner)
Data Grab (Ulises A Mejias and Nick Couldry)
How to Argue with a Meat Eater (and Win Every Time) (Ed Winters)
Four Thousand Weeks (Oliver Burkeman)
Eat To Live (Joel Fuhrman)
Hanging By A Thread (Erin Deering)
Eat For The Planet (Zachariah and Stone)
Main Street Vegan (Victoria Moran)
All In (Mike Michalowicz)
Tools of Titans (Tim Ferriss)
The Golden Years (Jamie Nemtsas and Drew Meredith)
Fiction
The Maw of the Beast (Rick Wilkinson)
Poppy Day (Rick Wilkinson)
In closing
My first year of retirement has opened up time for following other passions this year and I’ve thoroughly enjoyed working on my Status Quo book while continuing to volunteer within the vegan community.
I was inspired to put virtual pen to virtual paper again by a LinkedIn post from my good mate, Paul Seaman, lamenting his experience of spending nine months looking for a new testing role in Melbourne (Australia):
During 9 months of job searching it was hard not to notice that the job market for software testers is broken. Not just a little broken, a lot broken…
…we have the job ads that ask for a million different things and tools. I was told by a recruiter that, in a market like the current one, it’s a form of filtering. We both agreed it’s a particularly poor filter. I suspect it’s more fundamental. Many companies seeking a tester do not know what they need so they resort to a “wish list”.
Paul asks how the testing industry got to be this way, and that got me thinking. When you look at a system that seems completely broken or makes no sense, it’s worth considering how it could make perfect sense just the way it is. The US “healthcare” system is a perfect example: it’s not broken for those who architected it to be the way it is – far from it!
We know how systems become the way they are thanks to these sage words from Jerry Weinberg:
Things are the way they are because they got that way
So, what lens can we use to look at the current testing market and see it making sense? Who benefits from the way it is? Who decided it got this way?
I’m aware that many other folks in the testing community have charted the history of software testing in various ways. What follows is my take on how historical events (and not just within testing itself) have led us to the current state – you may agree or disagree with my analysis and I invite further debate on the topic.
In my opinion, the testing industry has been shaped into its form today by the following factors (presented in somewhat chronological order). I will discuss them individually but, as will become clear, they’re intertwined and exert forces between each other as well as on the industry as a whole.
The Agile & DevOps movements
ISTQB certification
Commodification
SDETs
The “testing is dead” narrative
Keyword-driven recruiting
Surveillance capitalism
(No, I haven’t forgotten about AI, I’ll come to that in my closing remarks.)
The Agile & DevOps movements
The early 2000s saw the agile movement starting to gain traction, with DevOps coming into the mix towards the end of the first decade of the new millennium. I’m covering both of these movements together as their impacts have amplified each other in many ways, I think.
Both movements talk about faster feedback loops and don’t formally acknowledge the idea of testing being a speciality in terms of role. As both of these movements have become the dominant paradigms for modern software development (despite their adoption often not adhering to their foundational practices – yes, I’m looking at you, organizations with a “DevOps team”), it’s no surprise that testers have been devalued.
Organizations have institutionalized the utopian vision of machines rapidly & cheaply checking their software products instead of “slow & costly humans” critically evaluating them (and the conflation of human testing and “automated testing” is a consequence of the widespread organizational ignorance around testing).
Both of these movements have been very well-resourced and popular certification programmes further their financial clout, so there has been no shortage of high-profile coverage of the benefits of both Agile & DevOps in major IT and business conferences, industry publications and so on. You only need to look at the strong focus on these movements in CapGemini’s “World Quality Report” to understand their reach into the testing and quality management arenas. (I’ve critiqued these reports in previous blog posts: 2018/19, 2020/21, 2022/23, 2023/24 and 2024/25.)
It was entirely predictable that organizational decision-makers would go “all in” on these approaches, adopting them as the de facto way in which software development teams now operate across their organizations.
ISTQB certification
It’s over twenty years since the ISTQB was founded and they have issued over a million certifications in over 130 countries (according to their own data from May 2025). The lack of other software testing certification schemes created the perfect environment for the ISTQB’s offerings to flourish and they were highly successful in marketing their certifications as the “industry standard”, especially in the 2005-2015 period (based on my own experience). Though they had no genuine authority, they created the “ISTQB as industry standard” narrative. While skilled practitioners questioned the value of these certifications, they provided an opportunity for candidate filtering that was too good to waste and were subsequently viewed as mandatory for many testing positions for a long time.
The simplicity of obtaining the Foundation certification helped to create the illusion that testing is easy and, as such, anyone can be quickly trained to be competent. Treating testing in such simplistic terms inevitably helped it become seen as a commodity service (more on that later).
The ISTQB and its local boards actively promote the idea that they are non-profit organizations, but the accredited training providers associated with them are generally not – and are often owned or serviced by members of the boards (which would seem to be a conflict of interest). The market value of the certifications themselves along with the training courses around them is in the order of millions of dollars per year. This significant financial clout has been used to influence decision-makers especially in larger organizations, with a trickle-down effect on the industry more generally.
Commodification
With testing being seen as easy and capable of being performed by machines – via the forces of agile, DevOps, easy certifications, etc. – testing skill became conflated with deft operation of the machines or tools, rather than with the creative intellectual evaluation and exploration of the software.
It was then an inevitable “race to the bottom” for the humans left behind. This industrial revolution of testing resulted in competition only on price, with outsourcing to low-cost locations becoming more and more common.
SDETs
The SDET (Software Development Engineer in Test) role originated in the early 2000s and was popularized by Microsoft, who made a lot of noise about the fact that they no longer had testers, only SDETs.
Like sheep, other big players quickly followed suit, including Google with their version, the Software Engineer in Test (SET). As the big names talked up this new approach, many other organizations latched onto the idea and human testers all over the world found themselves out of favour (and often out of work).
The need for engineers who could write both production code and the automated tests for it arose out of the agile and DevOps movements, but the move to SDETs critically missed where human testing added value (or ignored it in the interests of speed, automation, commoditization, etc.). The terrible user experience of Windows Vista, released during the height of the SDET frenzy, should have been taken as a sign that removing the human elements of testing was probably a bad idea.
SDETs, in practice, were likely to be much better developers than testers, and the role seems to have fallen from favour in the last decade. It’s now common to see agile teams with developers and no SDETs or testers, based on the theory that developers can do all the testing, whether that be coding automated checks or performing human testing. Again, I see this notion as being based on other influences – such as the devaluing of testing skill promoted by easy certifications or the perceived need to increase the speed of delivery – rather than on facts.
The “Testing is dead” narrative (c.2011)
At the large STARWest testing conference in 2011, James Whittaker (then at Google) announced that “testing is dead”, with testers no longer being required in a world of automated checks and automatic updates. A high-profile name from a high-profile company like Google guaranteed that the message would reach far and wide. It was music to the ears of the SDET fanboys and proof positive that human testers were a historical relic while the new, faster, better software development world marched on.
The death of (human) testing has been proclaimed so many times in my 25-odd years in the industry (for various reasons), yet human testers still exist in many software development teams. It’s almost as though the humans bring something to the table that the machines cannot, although some organizations are steadfast in their refusal to admit it.
Keyword-driven recruiting
This will probably feel alien to younger folks, but back when I first started work (and for some time afterwards), job ads were largely focused on broad capabilities like “problem-solving,” “communication” or “managerial experience”. Tools were often learned on the job and there was more on-the-job training, so it was uncommon for particular tools to be part of job ads.
With the internet boom in the 2000s, online job boards normalized searchable skill and tool keywords. Employers started to assume that general skills were not enough and applicants had to be “ready to go” with experience in the right tools for the particular job.
Over time, tools became more closely tied to workflows, so experience with them was viewed even more favourably – a new starter could “hit the ground running” in a Jira shop, for example. Companies that make such tools also push for credentialing and adoption, which filters into hiring norms.
With digital transformation in full swing, tools then became more central, and it was a perfect storm once Applicant Tracking Systems scanning for exact terms in resumes were employed en masse by recruiters – the age of “keyword-driven recruiting” was upon us.
Long laundry lists of tools rapidly became a feature of most job ads for testers and lazy recruiting practices were at least partly to blame. Smart testers learned how to manipulate the system, by using text like this as suggested by Michael Bolton:
I do not have an ISEB or ISTQB certification, and I would be pleased to explain why
But too many simply fell into the trap of focusing on toolsmithing rather than becoming excellent testers, just to feed the filtering beasts.
This keyword-based approach excludes many great candidates who are perfectly capable of picking up and learning new tools as required, but who haven’t used the exact tools needed to pass through the automated filtering process. It also overemphasises tools over core competencies, and yet it is these more fundamental skills of the craft that are much more durable and essential to completing testing missions with credibility.
This move towards keyword-based recruiting has negatively impacted the hiring process for genuinely good testers, in my opinion.
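To make the exact-match failure mode concrete, here’s a toy sketch in Python. The required keywords, the scoring scheme and the threshold are all my own invention for illustration – real Applicant Tracking Systems are more sophisticated than this – but the basic exclusion mechanism is the same:

```python
# Toy sketch of exact-keyword resume filtering, loosely in the style of
# an Applicant Tracking System. Keywords and threshold are hypothetical,
# purely for illustration.

REQUIRED_KEYWORDS = {"selenium", "jira", "cucumber", "jenkins", "postman"}

def keyword_score(resume_text: str) -> float:
    """Fraction of required keywords appearing verbatim in the resume."""
    words = set(resume_text.lower().split())
    return len(REQUIRED_KEYWORDS & words) / len(REQUIRED_KEYWORDS)

def passes_filter(resume_text: str, threshold: float = 0.8) -> bool:
    """Exact-match filter: no credit for equivalent tools or broader skill."""
    return keyword_score(resume_text) >= threshold

# A skilled tester with equivalent (but differently named) experience
# scores zero, while a keyword-stuffed resume sails through.
skilled = "exploratory testing, risk analysis, Playwright, Azure DevOps"
toolsmith = "Selenium Jira Cucumber Jenkins Postman scripting"

print(passes_filter(skilled))    # False
print(passes_filter(toolsmith))  # True
```

The capable candidate is rejected not for lacking ability but for lacking the exact strings the filter expects – the “particularly poor filter” Paul and his recruiter agreed on.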
Surveillance capitalism
“Surveillance capitalism” is a term used to describe a new economic system centered around the extraction, analysis, and commercialization of personal data. It was popularized by Shoshana Zuboff in her excellent book, The Age of Surveillance Capitalism.
One of the most obvious characteristics of surveillance capitalism is the commodification of the human experience. Human behaviour becomes a raw material: your clicks, likes, movements, conversations and even emotions are turned into products. These raw materials are the fuel used to predict and modify behaviour for the benefit of the platforms’ actual customers (not their users, who are merely seen as the sources of these raw materials).
The dehumanizing impact of surveillance capitalism is clear. Attempting to track, monitor, instrument, analyze, predict and modify every aspect of our world – from our virtual interactions (e.g. tracking our searches) out into the real world (e.g. tracking our movements via GPS and mapping services) and right into our very beings (e.g. wearables and facial image recognition) – has become an accepted part of modern life. In doing so, these approaches feel less alien than they should and so removing humans from the picture in other aspects of life becomes normalized too. The move away from skilled human testers towards toolsmiths and machine operators thus seems completely natural to the current generation of software development professionals.
What about AI?
In the discussion above, I deliberately left AI out of the list of factors I think have contributed to the current state of testing. The factors I’ve identified have all played their part in my opinion, some more significantly than others. The impact of AI, though, is only just starting to hit our industry – and I fear that it will make all of these factors look very minor by comparison. I realise that I’m writing this in the middle of a huge hype cycle around AI, but the “loading up” on all things AI is important to analyze, both from the viewpoint of the testing industry and across software development & IT more generally.
The stage really has already been set for dehumanization as I’ve outlined above, so I’m not surprised that I don’t see too much resistance to the idea of “AI” replacing testers and other IT professionals. I don’t believe that skilled testers can be replaced by current AI systems, so I urge testers to navigate this time by focusing on being more human and not trying to behave more like the machines that look set to replace them. Being aware of the benefits and limitations of AI is important, as is seeing these systems as assistants or tools to help you do better or different testing, but not replacements for your humanity.
Who is the current system working well for?
Looking through the lens of testing tool vendors, the current state of the testing industry is looking good. More and more organizations are using more and more toolsets to assist with testing and agile & digital transformation projects tend to result in a move towards more tooling and less human testing. These vendors have deep pockets and can influence the testing space through their advertising, sponsorship of testing conferences and so on.
The AI vendors will also see the testing industry as being in a sweet spot for exploitation, with the stage set by years of talking about “testing is dead”, automating away the humans and surveillance capitalism’s normalizing of a dystopian world.
Recruiters seem to love keyword-based filtering, whittling the massive number of applications (for fewer and fewer pure testing roles) down to more manageable stacks to follow up.
For human testers, though, the moulding of the industry into its current form hasn’t been beneficial and, frankly, is likely to become even worse as AI hooks into more and more aspects of the development game.
So what?
The testing industry is what it is, shaped by many different forces over decades. For human testers, the time to be vocal about the value you offer is now – before it’s too late. The tidal wave of AI is heading your way and you can’t make a snorkel long enough to breathe through it – instead, head for higher ground where you can watch the wave crash in, and bring your distinctly human skills to the table at those organizations that still value them.
There’s a big role to be played by professional organizations representing testing as a craft, such as the Association for Software Testing. These kinds of voices carry weight and are less easily silenced by the lobbying and financial clout of the players looking to dehumanize our craft.
Focus on being more human, not more like the machines, and build communities of like-minded folks. The industrial revolution transformed manufacturing with its factory model and many people were (and are) content to buy factory-produced low-cost goods. But there are also plenty of other people who want a more artisan, hand-made, craftsperson experience behind their purchases. Excellent human testing is the same and there will, I believe, always be a market for the true craftspeople – go find it… or help to create it!
Now that work doesn’t consume any of my days, I’ve been finding even more time to enjoy reading (making extensive use of the amazing service provided by Geelong Regional Libraries).
I’ve just finished Will Guidara’s excellent Unreasonable Hospitality and I found it really inspiring. His approach to leading people and organizations is so refreshing and, while it’s a story based around running restaurants in New York, his ideas are of great value to anyone who’s tasked with creating a great place to work.
The book centres on the fine dining restaurant, Eleven Madison Park – interestingly, it transitioned to being vegan in 2021 after Will’s time there and is one of the few such restaurants to achieve Michelin stars. I’ve been lucky enough to enjoy dining at one of the first vegan Michelin-starred restaurants, Kajitsu (also in New York, back in 2014; it sadly closed in 2022), but haven’t had the good fortune to get to Eleven Madison Park – yet!
One line from his book stood out to me as a great takeaway in how to reframe bad experiences (in his case, talking about his terrible treatment by a chef in the early days of his restaurant career):
…while it was a terrible experience, it was also a privileged peek at a mistake I never wanted to make
I can relate to his message, both in my personal and professional life. While I’ve been incredibly fortunate to generally have supportive managers during my career, there have still been occasions where I’ve witnessed behaviour – directed both towards me and towards others – that offered me that privileged peek into ways I never want to relate to others.
I hope I model this behaviour and pass it on to my mentees so that they too can help shape better workplaces in the future. (I’m still open to taking on mentees, by the way.)
PS: While I’m spending more of my time reading these days, I’m investing much more in my first retirement writing project… but that’s a story for another day!
My fifth opportunity to work with McGill University undergraduate students saw me being interviewed by Rob Sabourin again during one of his sessions of the ECSE-428-001 – Software Engineering Practice course for his Winter 2025 cohort.
I began my involvement with McGill back in February 2016 when I took part in a small group interview with some students. Since then, I’ve done interviews with Rob for his software engineering classes in April 2022, November 2022 and March 2024.
My latest interview was on the same topic as my March 2024 interview, viz. the Scaled Agile Framework (SAFe) based on my recent experience of being involved in a large project in the Australian federal government using this framework. Rob introduced the general ideas around SAFe and I spoke to my specific experience of using the framework in a government project context. While there are still many criticisms of this framework from the agile community, we discussed some of the beneficial aspects of it in the context of very large programs of work (and agilist Ron Jeffries has some interesting things to say about it in his post, SAFe – Good But Not Good Enough).
I enjoy being interviewed by someone as experienced and well-informed as Rob and I hope that this next generation of software engineers gained something valuable by hearing about my real-life experiences to bolster their learnings around the theory of software engineering.
I’m always willing to share my knowledge and experience; it’s very rewarding personally as well as providing an opportunity to give back – I’ve had a lot of help and encouragement along my career journey (including from Rob himself) and remain incredibly grateful for it.
Another year flies by, so I’m again taking the opportunity to review the year that was 2024.
Vital statistics
I only published 7 blog posts this year (including this one), again not meeting my personal target cadence of a post every month. I haven’t been finding much in the way of inspiration to post, but 2024 has still set a record for total views which I find quite amazing after 10 years of blogging! The traffic is slightly skewed by the fact that my most popular posts – viz. my critiques of the World Quality Report – appeared twice during 2024, one a belated review of the 2023 report and the other for 2024’s effort.
I’m still on Twitter/X and closed out the year with just over 1,200 followers, slightly down from last year. I’m no longer posting on X and it appears most of the interesting testers I used to follow have already left the platform. I’m seeing almost all of my engagement coming from my posts on LinkedIn now.
Work life
I spent the year working part-time for SSW in my role as Test Practice Lead, all of it with the same government agency I started working with in 2023. I had a great experience in this agency and my tenures there and at SSW have both now come to an end.
In my own business, Dr Lee Consulting, I again focused on my Mentoring offering and have found one-on-one mentoring very rewarding.
I’m not intending to look for new opportunities at this stage of life, but maybe an interesting project or two could tempt me away from more pleasurable pursuits in the years ahead!
Testing-related events
For the first time in maybe 15 years, I didn’t attend any virtual or in-person testing conferences or meetups during 2024, nor did I give any presentations. It’s perhaps a sign of my deliberate choice to wind down that I opted out of opportunities during the year after a long stint of contributing to the testing community.
Testing books
I wrapped up the content for the free AST e-book, Navigating the World as a Context-Driven Tester. This book provides responses to common questions and statements about testing from a context-driven perspective, with its content being crowdsourced from the membership of the AST and the broader testing community. The final version of the book contains 28 responses and continues to be freely available from the AST’s GitHub.
I didn’t publish an updated version of my book An Exploration of Testers during 2024 and the current version is likely to be the last. There were more purchases of the book, though, so I was happy to be able to make another donation to the Association for Software Testing’s excellent Grants program.
Reading
My strong reading habit continued during 2024, thanks to the great service from Geelong Regional Libraries. I again added a little fiction into the mix.
Of the 37 books I read this year, the most impactful were two very different reads. Firstly, Rolf Dobelli’s plea to “Stop Reading The News” was just what I needed to break my addiction to following the news cycle and I went cold turkey early in 2024, never to go back. I don’t feel like I’m missing out on anything, apart from mainstream media’s propaganda. Secondly, snooker player Ronnie O’Sullivan’s biography “Unbreakable” was an inspiring read. Although I’ve followed his entire career, his many challenges and work on his mindset were interesting to read about – and he remains at the very top of the sport despite his age.
My reading is detailed below:
Non-fiction
The Myth of Normal (Gabor Mate)
The Courage to Face COVID-19 (John Leake and Peter A. McCullough)
Stop Reading The News (Rolf Dobelli)
The Paradox of Choice (Barry Schwartz)
Life As We Knew It (Aisha Dow and Melissa Cunningham)
How Innovation Works (Matt Ridley)
The Locked-up Country (Shahar Hameiri and Tom Chodor)
Ageless Soul (Thomas Moore)
The Upside of Stress (Kelly McGonigal)
Viral (Alina Chan and Matt Ridley)
Superforecasting (Dan Gardner and Philip E. Tetlock)
Same As Ever (Morgan Housel)
Ultra-Processed People (Chris van Tulleken)
Cobalt Red (Siddharth Kara)
Deadly Medicines and Organised Crime (Peter C. Gøtzsche)
Living Plantfully (Lindsey Harrad)
Read Write Own (Chris Dixon)
Oxygen (Patrick McKeown)
Breaking The Habit of Being Yourself (Joe Dispenza)
Shoe Dog (Phil Knight)
Slow Productivity (Cal Newport)
Lies My Government Told Me (Robert Malone)
Making A Killing (Bob Torres)
The Influencer Industry (Emily Hund)
The Psychology of Money (Morgan Housel)
The Way of Integrity (Martha Beck)
Unbreakable (Ronnie O’Sullivan)
The Violence of the Green Revolution (Vandana Shiva)
The Bodies of Others (Naomi Wolf)
Die With Zero (Bill Perkins)
The New Confessions of an Economic Hitman (John Perkins)
Unsettled (Steven E. Koonin)
How Not To Lose $1 million (John Addis)
Fiction
Changing Places (David Lodge)
Apples Never Fall (Liane Moriarty)
The Truth Teller (Angela Elwell Hunt)
I Know My Love (Catherine Gaskin)
Volunteering for the UK Vegan Society
I continued with my volunteer work for the UK’s Vegan Society by contributing to their web research efforts (and I didn’t tackle any proofreading jobs this year).
I came up with recommendations for changes to the website’s “Key Facts” page after reviewing other sites to define what a modern layout and content should look like for the page.
The process of building a completely new website for the Society continued this year and most of my efforts involved testing it. It was good to be “hands on” and to provide value to the organization using my existing skillset.
I didn’t publish any new blogs for the Society in 2024, but started a couple of posts and expect those to be finalized early in 2025. I really enjoy blogging on veganism, both to flex my writing muscles and also to more deeply engage with vegan content.
Working with The Vegan Society continues to be a joy; I’m blessed to work with great people there who really appreciate my efforts. I expect to contribute more fully in 2025 now that my time in the testing game has come to an end.
In closing
I remain grateful for the attention & support from the readers of my blog and also my followers on other platforms. I wish you all a Happy New Year!
As I move on from the testing industry, this blog might not see too many more posts but I may be inspired to write again once in a while…
I’ve reviewed this annual epic from Capgemini for the last four years and will do so again here, albeit for the last time (see my reviews of the 2018/19, 2020/21, 2022/23 and 2023/24 reports in previous blog posts).
This 16th edition is titled “New futures in focus”, only slightly different from last year’s “The future up close”.
I’ve taken a similar approach to reviewing this year’s effort, comparing and contrasting it with the previous reports where appropriate. This review is again lengthy, but you’ll still save plenty of time by reading my summary compared to devouring the 106 pages of the report itself!
TL;DR
It was a bad omen when the first thing I noticed on opening the PDF report in Chrome was a mistake in the report’s title meta-data – “Brochure Potrait” appeared in the document header and in the name of the browser tab. Moving right along…
The 16th edition of the “World Quality Report” is the fifth consecutive report I’ve reviewed in detail on this blog. It’s a hefty report, yet consistently fails to produce the genuine insights I’d expect following such a mammoth undertaking (including about 900 hours of interviews with senior representatives at large organizations). In some ways, I’d argue it’s a report “by CTOs, for CTOs”.
The previous year’s report made bold claims about AI and these continue in the current report. While the authors conclude that uptake of Gen AI is huge and revolutionary, I’m not convinced that the data supports this view and neither does my experience in the industry.
“Quality Engineering” is not clearly differentiated from testing again and many other terms are used without clear definition (e.g. Green IT) so there’s significant potential for confusion, both in interpreting the results and for respondents answering the questions.
I still think it would be valuable to include some questions around human testing to help us to understand what’s going on in these large organizations, an observation I’ve made in all of my previous reviews of these reports.
Once again, the focus areas of the report changed almost completely, from eight last year to six (largely different) areas this year, making cross-report comparisons difficult or impossible. The sample set of organizations appears to be the same as last year (maybe in the interests of year-on-year comparison), so I really don’t understand why the report doesn’t stick to a standard set of focus areas year on year.
There continue to be many problems with this report. The lack of responses from smaller organizations means that the results remain heavily skewed to very large corporate environments. There are numerous errors that really should have been picked up in proofing, and the report is poorly copyedited, resulting in a report that feels like a bunch of disparate sections stuck together with no consistent voice. It’s good to see that the data visualizations this year are much simpler and more consistent than last year, though, making the results much easier to interpret.
While these reports blow with the wind of trends, I’d recommend that you keep an eye on trends in testing but don’t get prematurely attached to them. Focusing on building excellent foundations in the craft of testing will likely be time better spent, enabling you to navigate the winds of change throughout your career. Big name industry reports like this one carry substantial weight – regardless of their content – so stay mindful of the hype and adopt a critical thinking mindset when it comes to the conclusions made in reports like this.
About the survey (p96-101)
This report maintains its trend of becoming longer every year, running to 106 pages – up from 96 pages last year and “just” 80 pages the year before that.
I find it useful to look at the “About the survey” section of the report first to understand where the data came from to build it and support its recommendations and conclusions.
Notably new in the study design this year is that “…advanced AI-driven tools were integrated into the research process to enhance data quality”, so this needs to be kept in mind when interpreting the report’s findings.
The survey size was 1,775, up slightly from the previous report’s 1,750. Given the very similar survey size, it’s surprising that the resulting report is longer.
The organizations taking part again all had more than 1,000 employees, with the largest share (34% of responses) coming from organizations of over 10,000 employees. The response breakdown by organizational size was essentially the same as that of the previous four reports, strongly suggesting that it’s the same organizations contributing every time. I’d love to see the responses from those in smaller organizations where the technological and organizational context is likely to be very different.
While responses came from 33 countries (up from 32 in the previous report), they were heavily skewed to North America and Western Europe, with the US alone contributing 16% and then France with 8%. Industry sector spread was similar to past reports, with “Hi-Tech” (19%), “Financial Services” (15%) and “Public Sector/Government” (11%) topping the list (and all exactly the same percentages as the previous report, reinforcing my hypothesis that the same organizations are reporting each time).
The types of people who provided survey responses this year were also very similar to previous reports, with CIOs at the top (24% again), followed by QA Testing Managers and IT Directors. These three roles comprised over half (59%) of all responses – again exactly the same breakdown as the previous report.
Introduction (p4-5)
The introduction sets the scene for a big focus on AI (or “Gen AI”) in the report, so no surprises here. This finding troubles me, given the large organization focus of this report’s underlying data:
The other striking result is that there is more acknowledgment of the importance of quality (or rather the risk and impact of insufficient quality) – but organizations still need to work to give true strategic attention to the topic of quality.
Executive summary (p6-7)
The high-level summary of the report also makes a big deal about AI, but still manages to make some noteworthy observations.
On the model for where QE (“Quality Engineering”, a term that’s used throughout, never defined and sometimes used interchangeably with testing) fits into teams:
First, the integration of quality engineers within agile teams has become a standard practice, with 40% of organizations now embedding these experts directly into their agile processes. Second, we are witnessing a rise in organizations that not only integrate quality engineers into agile teams but also maintain dedicated Quality Engineering roles operating independently to ensure comprehensive coverage and oversight.
There’s a refreshing honesty around the actual use cases for AI:
The debate on which Quality Engineering & Testing activities will benefit most from Gen AI remains unresolved. This year’s survey highlights a growing focus on leveraging Gen AI for test reporting and data generation over test case creation.
While I noted in my review of last year’s report that the expectations of “QE experts” were getting very high (specifically including skills in coding, BDD and TDD alongside their idea of the typical QE skillset), this year they ramp it up even more:
Over the past decade, the shift to Agile methodologies, cloud computing, and smarter technologies has transformed quality engineers into SDETs (Software Development Engineers in Test) and, further, into full-stack test engineers. The skill set requirements for quality engineers have now expanded even further, encompassing data proficiency, AI expertise, Gen AI capabilities, and product engineering skills. However, this evolution does not diminish the fundamental need for risk-based test strategies, human collaboration, and deep business expertise—elements that remain crucial for ensuring comprehensive and effective Quality Engineering.
This unicorn “full-stack test engineer” nonsense really needs to stop. In my recent experience, it’s hard enough to find good testers without all these additional expectations on individuals. Many of the skills they’re throwing onto the “full-stack” are genuine specialties in their own right and we should treat them as such.
Key recommendations (p8-9)
This year’s recommendations span six areas (down from eight last year). It’s not obvious which areas carry over, though “Intelligent Products Validation” appears similar to the previous “Intelligent product testing” and “Quality Engineering in Sustainability” probably relates to the previous “Quality & Sustainability”. The areas are as follows:
Quality Engineering in Agile
Quality Engineering Automation
Quality Engineering and AI
Intelligent Products Validation
Quality Engineering in Sustainability
Data Quality
Changing the areas and categories every year is very unhelpful and makes cross-report comparison too difficult. If I were being cynical, I would argue that this could be a deliberate move to make it hard or impossible to see how well their predictions pan out over time. There are three to five recommendations made in each area, so let’s dig into the details, area by area.
Current trends in Quality Engineering & Testing (p10-61)
Half of the report is focused on current trends, broken down into the six areas detailed in the previous section. As usual, the most interesting content is to be found in this part of the report. I’ve broken down my analysis into the same sections as the report. Sitting comfortably?
Quality Engineering in Agile
The introductory spiel for this section has an unusual tone and I particularly dislike the “Great reset” language (triggering inevitable World Economic Forum flashbacks to the atrocities of the pandemic response for me). This “reset” mantra appears again later in this section of the report.
The first set of remarkable data is around the “top 5 skills for your Quality Engineering associates” with an astonishing 70% saying “Quality Engineering skills”!:
There is a data error in this first chart: note that “AI/ML and Gen AI skills” are repeated, with one showing 66% and the other 57%. Putting two and two together from the commentary on this data, it seems that the 57% stat is for “coding skills”.
The next set of data relates to organizational structure:
I find the options on offer here quite difficult to differentiate and I can imagine respondents having difficulty in choosing the right options for their organization’s context. On this, the authors say:
A notable change is the decrease in the use of traditional Testing Centers of Excellence (TCoE), with only 27% of respondents reporting their continued use. This marks a substantial drop from the 70% who relied on TCoEs last year. While the results from last year’s survey seem high, likely due to conflicting interpretations of “TCoE”, the decrease this year is clear and consistent with other survey responses. Concurrently, 40% of respondents now have quality engineers embedded within Agile teams, highlighting a trend towards embedding Quality Engineering into Agile workflows.
The 70% claiming to use TCoEs in the last report always looked wrong (as I pointed out in my review last time) so it’s no surprise that a very different response was found this time. This also shows some confirmation bias on the part of the authors and they are acknowledging that their poor (or, more accurately, complete lack of) definition of the terms they use is likely impacting on the validity of the results.
The supporting commentary for figures 3 and 6 appears to be the wrong way around, making for a confusing read! Moving on to looking at challenges for QE adoption:
As development skills have become less critical and the focus has intensified on Gen AI and core Quality Engineering competencies, it appears that the broader value of Quality Engineering is not being fully recognized. The core problem may not lie in the alignment with development teams, but rather in demonstrating tangible value. Despite an increase in the use of advanced technologies like Gen AI and expanded automation coverage, the perceived value of Quality Engineering remains underwhelming.
This is quite a remarkable observation and is a sad indictment of trend following, if that is indeed what these large organizations are trying to do. The WQR has been championing the move to QE – and away from what I’d call testing – for many years, but this change of focus appears to be failing to achieve good outcomes for anyone.
Turning to their recommendations in this area (remember, it’s “Quality Engineering in Agile”), these two stood out to me:
Integrate quality engineers directly into product teams to ensure their work is closely connected with product development and outcomes.
Maintain the independence of testing. As systems continue to increase in complexity with multiple technologies and hosting locations, the benefit of an independent testing team will pay dividends.
These recommendations seem to contradict each other, unless “quality engineers” are not being thought of as providing testing services. This murky distinction between QE and testing in this report (and commonly in the broader industry) is leading to a lot of confusion, pointless rebranding and problems for those of us focused on testing as a craft in its own right.
Quality Engineering Automation
This section of the report leads with AI, of course, asking about the extent to which organizations are using it to enhance the “maturity of test automation”:
My read of this data is that, at best, 29% of the respondents are really doing anything meaningful with Gen AI today. The authors choose to put a more positive spin on uptake:
In 2023, 69% of organizations were experimenting with innovative automation solutions like Low Code/No Code or AI-based automation frameworks. Fast forward to 2024, and the landscape has shifted by leaps and bounds. New futures are in focus – as 29% of organizations have fully integrated Gen AI into their test automation processes, while 42% are actively exploring its potential.
Further to this, when asked about the benefits of Gen AI in enhancing test automation, there were no surprises – “Faster automation” came out top (72%) and “Reduce testing effort/resources” was close behind at 62%.
The following claim doesn’t appear to be supported by any of the data in this report:
The survey results reveal that the global average level of test automation has now gradually increased to 44%
It’s hard to know what this really means – I can imagine different organizations measuring their “level of test automation” quite differently (especially as this is a difficult thing to measure meaningfully), so this average level is probably not indicative of anything – 44% might be great, might be terrible or might mean nothing at all.
The data around business outcomes achieved through test automation caught my eye:
So “Over half of the respondents highlighted that automation reduces manual effort” while the most popular response suggests that there is “improved testing coverage” which increases “confidence in IT”. I find this interesting as those business stakeholders gaining confidence from seeing increases in reported test coverage likely have no idea what’s being measured or the quality of the automation that’s been built.
Turning inevitably to “cost benefits of automation”:
It’s pleasing to see that a large proportion of respondents don’t use cost benefits as the primary driver for test automation, but the authors still claim “One of the key benefits of automation is reducing operating costs”. There appears to be another error here: I assume the last bar should be labelled “Decreased operational costs due to additional tooling” (since increasing these costs wouldn’t generally be seen as beneficial).
Looking at the “talent requirements” around automation next:
I think it’s a big ask that “31% of respondents identified the need for full-stack engineers – quality professionals with additional expertise across the technology stack, including infrastructure, cloud, performance, resiliency, and reliability.” It’s good to see such a low percentage of respondents believing “developers can do all forms of testing so separate testing is not required”, but I struggle to believe that “Manual testing is still prevalent due to specific application architectures” in only 10% of organizations (and I don’t see why the latter part of that response was included, or how it’s relevant). If we are to believe the previous claim that the “global average level of test automation” is only 44%, what testing activities are covering the rest if only 10% is “manual testing”?
The recommendations are not very exciting, though the emphasis on increasing the use of AI stuck out to me:
Harness the potential of Gen AI to enhance and accelerate test automation. Gen AI goes far beyond the generation of automated test scripts and helps with the realization of self-adaptive test automation system, driving efficiency and effectiveness.
I didn’t see any data in this section of the report to support the efficiency or effectiveness claims made in this recommendation.
Quality Engineering and AI
Given the authors’ overwhelming focus on AI, I was surprised to see this being one of the shorter sections of the report. The opening gambit makes a bold claim:
The results from this year’s survey indicate what we believe is the new future – Gen AI-augmented Quality Engineering. We found that 68% of respondents have moved beyond the experimentation phase and have adopted Gen AI platforms to improve their overall IT efficiency and accelerate their speed to market.
The 68% stat is a little misleading, as it’s based on these results:
The authors have combined the first two responses to produce their 68% stat, but half of these respondents are not actively using Gen AI solutions yet. The next set of data is quite interesting, looking at the testing-related use cases for Gen AI:
The authors note that their client experiences don’t match this data, so they continue to recommend focusing on the bottom two use cases as “there are greater gains to be made in those areas”. I’m encouraged by this statement, though:
Gen AI isn’t about replacing the human touch or magically improving testing quality on its own. Instead, it’s a game-changer for boosting the productivity of quality engineers.
This section closes with a big call to action:
Although the sheer volume of data may feel overwhelming, one thing is clear: Gen AI will revolutionize Quality Engineering. Whether you jump in headfirst or just dip your toes in, you need to start your adoption now!
The jury is still out on the use of Gen AI for most of the organizations contributing to this report, so this call to action seems too strong – and makes me wonder whether the authors are hallucinating just like the Gen AIs they’re discussing.
Intelligent Products Validation
The first two data sets in this section of the report are closely related, essentially asking about the importance of different types of testing for “intelligent products” (a term that isn’t clearly defined in the report):
These results seem contradictory to me. For example, looking at “Security”, 60% rate the security “test phase” as being very important (and the highest of all the phases), yet only 23% said that security was the most important aspect of validating an intelligent product (a worrying stat in itself!). These contradictory results don’t seem to worry the authors, though, who conclude:
When it comes to testing intelligent products, the emphasis on different test phases directly mirrors what respondents consider crucial for validation
I didn’t spot much else of interest in this section, with the recommendations unsurprisingly focusing on increasing the use of AI.
Quality Engineering in Sustainability
Turning to sustainability and “Green IT” (another term used in the report that is not clearly defined), the opening data relates to the prioritization of sustainability:
The authors conclude that “… a whopping total of 98% of organizations acknowledge that sustainability is extremely crucial to them!” while, in reality, the first three options are worded in such a way that I’d fully expect every organization to pick one of them (especially given the senior folks being interviewed). There’s another data error in this chart too, with the response percentages adding up to 102%.
The next set of data looks at focus areas to validate to drive Green IT:
I don’t understand how some of these choices are related to sustainability or Green IT, e.g. “the ability of IT systems, devices or software to work together for seamless functionality” doesn’t seem to me to have anything to do with efficient use of resources, sustainability, etc.
I find the data in the next chart impossible to believe:
The claim that almost half (43%) of the surveyed organizations are monitoring the environmental impact of every type of testing beggars belief. I have no idea how you would even go about doing that in any meaningful way. This data seems to feed into the recommendation to “Practice sustainable testing”, whatever that means.
Data Quality
There are few revelations in the section of the report relating to the quality of test data. When it comes to provisioning test data, it appears more organizations are turning to AI (unsurprisingly):
There is a data/proofing error in the commentary on this data, saying “More organizations are turning to AI-generated test data (49%)”.
Looking at the issue of bias in test data next:
Again, AI is being seen as a silver bullet here, but kudos to the authors for acknowledging the potential issues with this (especially given the prevalence of using AI to also generate the data!):
Almost a third of organizations rely on AI to check data quality and remove biases (34%), but this approach often lacks transparency and context, which can unintentionally reinforce existing biases.
In most ways, this section of the report feels like the same old, same old with a sprinkling of AI while not suggesting any genuine improvements in the area of test data management, an observation also made by the authors in conclusion:
For over 15 years, we have been asking questions about data and its importance in the World Quality Report. Each year, organizations talk about focusing more on this, yet the same perceptions persist about the quality and importance of data. Despite the critical role data plays in AI and organizational success, many organizations still do not give it the focus it deserves.
Sector analysis (p62-95)
The sector analysis has generally not been as interesting as the trends section in previous reports and the pattern continues this year. The authors identify the same eight sectors as the previous year’s report (albeit presented in a different order for some reason), viz.
Automotive
Manufacturing
Consumer products, retail and distribution
Healthcare and life sciences
Public sector
Financial services
Telco, media and technology
Energy, utilities, natural resources and chemicals
A few things caught my attention in the sector analysis section:
In the Retail sector, 31% “Believe that an environmental impact is monitored for each testing activity.”
In the Public sector, 34% “Prefer to use developers to perform automated testing rather than dedicated SDETs” and this is the first time I’ve seen the conflation of AI and “shift-left”, I think: “The AI revolution, particularly the shift-left approach, is driving the need for faster, cheaper, and more predictable solutions.”
In the Financial Services sector, “Approximately 50% of financial institutions are now looking to reduce their dependency on IT services from India and the US, instead opting to relocate resources to Latin America (LATAM)…. For example, Mexico has become the fourth largest IT market, with a growing pool of IT professionals and STEM graduates.” This is an interesting development and makes sense from a timezone perspective for US organizations, so it’ll be fascinating to see how the established outsourcing players respond to this geographical shift. A remarkable 75% “Responded that Quality Engineering skills are considered the most critical for Quality Engineering associates in the financial sector”!
In the Telco sector, “… the need for critical testing is still new, and while organizations are slowly understanding the implications of overlooking it, the impact will be increasingly apparent in the future.”
(There are more basic proofing errors in this section of the report with repeated text in both the Retail (in the “Shifts in consumer buying channels” section) and Public (in the “Managing complexity in Cloud-based environments”) sectors.)
Geography-specific reports
The main World Quality Report was again supplemented by a number of short reports for specific locales. I only reviewed the Australia/New Zealand report and spotted a couple of interesting observations.
Firstly:
In ANZ, there’s been a shift from a scaled Agile approach to a more balanced methodology. Organizations are reassessing the value of dedicated testing teams versus integrating testing within engineering squads. While early testing and shift-left practices are gaining momentum, the specialized expertise in functional assurance, integration, and continuous testing is being reaffirmed.
I’m not sure how to interpret this: are they saying there’s a move towards dedicated testing teams, or away from them and towards testers embedded in teams? This is another example of poor wording in the report that should have been caught by decent copy editing.
The comment around automation is pretty stunning:
Automation … continues to be both an opportunity and a challenge. While some organizations are early adopters, many still struggle with the complexities of automating business-as-usual (BAU) projects.
Calling anyone undertaking automation today an “early adopter” seems absurd to me when we’ve had automation in various forms available to us in testing for decades. This very report in previous years has indicated the growing adoption and importance of automation.
Continuing the trend of “AI everywhere” in this report, it says this about using AI specifically for “hyper-automating test design”:
Gen AI is rapidly becoming a hot topic in ANZ, with organizations exploring its potential to address efficiency pain points within the testing cycle. While there is growing interest in leveraging Gen AI for hyper-automating test design, there is also a cautious approach towards trusting its outcomes. Some organizations have paused their experimentation to centralize control and ensure reliability.
I read this as saying “we tried to automate test case creation using AI and it didn’t work very well”!
I’ve enjoyed a long and fruitful relationship with the Association for Software Testing (AST), both as a member and through the CAST conferences (where I’ve been an organizer for Australian conferences and a speaker & delegate at US conferences).
When I was approached about the idea of creating an e-book to provide responses to some common questions and statements about testing from a context-driven perspective, I was keen to be involved. The idea was to crowdsource the content and then for me to collate responses for the e-book from this content. The e-book was designed to act as an FAQ for the day-to-day situations a tester may find themselves in and how to approach them from a context-driven perspective.
The Navigating the World as a Context-Driven Tester e-book project kicked off early in 2021 and the final edit has just been made after some 28 requests for contributions. These requests were made across multiple platforms, including Twitter (X), LinkedIn, Slack (viz. in the AST and Rapid Software Testing channels), Mastodon and the AST mailing list.
My experience of putting the book together
I loved the concept for this e-book and was excited to make the first few requests for contributions to see if there was similar interest and excitement in the project from the broader testing community. I was pleased to see lots of early engagement and my job in collating a response for the book was often a difficult one thanks to the sheer number and diversity of responses received.
It was interesting to see which requests got the most interest and I was often surprised by which ones received many responses. It was great to see that responses continued to come in even as the project entered its third year.
The best channels to elicit responses from requests for contributions to the e-book changed over time.
The AST and RST Slack channels provided by far the largest numbers of responses across the project, perhaps reflecting the community of more seasoned practitioners active on these platforms. Twitter was a good source at the start of the project, but faded quickly as many testers moved off the X platform. LinkedIn was fairly consistent throughout, but never a huge source of responses. The inclusion of Mastodon for the last year or so of the project resulted in only a very small number of responses and the AST newsletter was similarly ineffective in generating responses.
This e-book has been compiled from the collective wisdom of many excellent testing practitioners and I feel that the book provides a lot of value, especially to less experienced testers. It is my hope that the e-book will be a handy reference for testers and I look forward to hearing stories of how it’s proved to be useful.
A few stats about the book and the process of creating it:
72 contributors helped to shape the content of the e-book (all of whom are attributed).
292 responses were received from the 28 requests for contributions.
The request that drew the most interest was “Testing is a bottleneck”, with 28 responses.
James Thomas provided responses to all 28 requests via a separate blog post for each request. Amit Wertheimer and Frances Turnbull chipped in more than 20 responses each.
7 requests for contributions were made in 2021, 9 in 2022, 7 in 2023 and 5 in 2024.
The 28 requests consist of 7 in the “Testing” category, 6 in “Testers”, 4 in “Automation”, 3 in “Context-driven testing”, 3 in “Project Scheduling”, 2 in “Testing Status”, 2 in “Career” and 1 in “Scripts/test cases”.
The requests that drew the least interest were “What’s the best format for a test plan?” and “Pair and ensemble testing look like a waste of time and resources to me. What do you think?”, with just 4 responses each.
With thanks
Thanks to the AST for trusting me with the curation of this project and also to the various AST board members who reviewed my collated responses before publication and supported the project in numerous ways.
I’m so grateful to the 72 folks who made the effort to contribute responses – this book wouldn’t exist without you!
The questions/statements
The questions/statements that formed the 28 requests for contributions are listed below:
| Question/statement | Request made | # responses | Response published |
| --- | --- | --- | --- |
| We test to make sure it works | 07/04/21 | 21 | 21/06/21 |
| Let’s just automate the testing | 03/05/21 | 9 | 21/06/21 |
| Isn’t all testing context-driven? | 14/06/21 | 15 | 19/07/21 |
| Do more test cases mean better test coverage? | 27/07/21 | 13 | 29/08/21 |
| What percentage of our test cases are automated? | 08/09/21 | 9 | 03/10/21 |
| Stop saying “it depends” when I ask you a question | 11/10/21 | 13 | 01/11/21 |
| Testing is a bottleneck | 06/12/21 | 28 | 10/01/22 |
| What’s the difference between context-driven testing and exploratory testing? | 16/01/22 | 6 | 12/02/22 |
| Will the testing be done by Friday? | 11/02/22 | 8 | 07/03/22 |
| We need some productivity metrics from testers | 12/03/22 | 9 | 16/04/22 |
| There are no best practices, really? | 01/05/22 | 5 | 26/05/22 |
| Why didn’t you find those issues before we shipped? | 05/06/22 | 5 | 02/07/22 |
| For your annual review, I’ll need to see evidence of what you produced this year | 01/08/22 | 12 | 23/08/22 |
| What’s the right ratio of developers to testers? | 27/08/22 | 10 | 27/09/22 |
| What’s the best testing metric? | 03/10/22 | 8 | 07/10/22 |
| Testing is just to make sure the requirements are met | 03/11/22 | 15 | 04/12/22 |
| Whenever possible, you should hire testers with testing certifications | 11/01/23 | 16 | 06/02/23 |
| Developers can’t find bugs in their own code | 12/02/23 | 10 | 13/03/23 |
| Stop answering my questions with questions | 18/03/23 | 9 | 16/04/23 |
| When is the best time to test? | 29/04/23 | 10 | 22/05/23 |
| Testers are the gatekeepers of quality | 29/05/23 | 15 | 12/07/23 |
| If testers can’t code, they’re of no use to us | 18/07/23 | 9 | 15/08/23 |
| Is observability and monitoring part of testing? | 11/11/23 | 9 | 06/12/23 |
| When the build is green, the product is of sufficient quality to release | 08/01/24 | 7 | 07/02/24 |
| What’s the best format for a test plan? | 22/02/24 | 4 | 21/03/24 |
| Why don’t we replace the testers with AI? | 07/04/24 | 7 | 29/04/24 |
| How can I possibly test “all the stuff” every iteration? | 01/06/24 | 6 | 08/07/24 |
| Pair and ensemble testing look like a waste of time and resources to me. What do you think? | 23/07/24 | 4 | 16/08/24 |
(The featured image for this post was inspired by my recent travels to Lisbon, Portugal, and its famous number 28 tram – thanks to Victoria Emerson on Pexels.com)
We attended the Animal Rights Forum 2024 at Melbourne Town Hall over the weekend of 24th and 25th February. This was the first in-person forum since the 2019 event that we also attended in Melbourne and it was great to see the event sold out on its return with over 300 attendees!
TL;DR
I appreciate that most of my readers are looking to me to provide testing/IT content (rather than veganism or animal rights material), so the only tech-related talk at this Forum was around AI, surprise surprise! Thankfully, it was a great session and a neat application of AI and automation in the not-for-profit sector by Kyle Behrend of NFPS.AI. I immediately saw how this niche use of these technologies can be of huge benefit to not-for-profits to free up their valuable and limited resources. More details of Kyle’s talk can be found under the Sunday section of this blog post.
Read on if you have any interest in the event in more detail.
There were many awesome organizations in the animal rights space represented at the Forum and the two days were packed with track sessions, representing incredible value for around $100 (a far cry from the cost of many tech conferences!).
Saturday
We missed the opening sessions on Saturday while travelling up to Melbourne and I kicked off the day by attending author MC Ronen‘s talk, “How I use my passion for writing to create a better future for animals (and humans!) And what you can learn from it”. MC’s talk was interesting and her use of fiction to spread an animal rights message is novel (no pun intended!). It made me think about my own passion for writing (both in this blog and elsewhere) and how I can put it to use in this area. Not long into MC’s talk, the Forum was sadly interrupted by pro-Palestine protesters who took issue with some of MC’s public commentary on this subject. The protesters were very vocal and physically abusive to volunteers and venue security, making for a very uncomfortable ten minutes or so before the police arrived. The volunteers need to be more prepared for disruptions – especially as the event is likely to be the target of animal agriculture interests one day – so hopefully the training they need is offered before the next forum. (The minimal Town Hall security staff also seemed very unprepared, which was more surprising.)
In the same timeslot, my wife enjoyed a remote presentation by US lawyer/activist Wayne Hsiung (from The Simple Heart) on “Making repression backfire”. Well known for his work with Direct Action Everywhere and the open rescue movement, Wayne shared his latest developments, and his stints in prison don’t seem to have weakened his resolve to fight for the animals!
The brief lunch break (just 45 minutes) didn’t give us much time but trusty Gopals was just across the road and fed us well as always, before we quickly made our way back to the Town Hall to commence the afternoon sessions at 1pm. The protesters had gathered en masse outside the Town Hall so we had to be escorted in and out of the building by police – a bizarre turn of events for a gathering with such peaceful motivations!
We both opted for the same session to open the afternoon, with Dean Rees-Evans (from Three Principles Training and Consultancy) on “Accessing a peaceful mind in the face of animal suffering”. Dean is a psychological wellbeing practitioner and his messaging around mindfulness and its benefits in dealing with the realities of animal suffering that we witness within the animal rights movement (and, of course, in all aspects of daily life where animals are exploited all around us) was OK, if a little incoherent. Many in the audience seemed to find his talk more confusing than helpful and their basic questions tended not to receive actionable answers. Dean generally offers longer workshops and it felt like he struggled to distill his message with useful takeaways into such a short talk.
We went our separate ways for the next session. My wife opted for Alex Vince (from Animal Liberation NSW) talking about “Poisons and Pesticides: An Animal Welfare Crisis”, while I attended a remote presentation by Jenna Riedi (from Faunalytics, US) on “Using research and data in animal advocacy”. Alex is very vocal in the movement to ban the awful 1080 poison in Australia (noting it’s been banned in many countries for many years) and his talk resonated well with his audience. I’d hoped to learn more about the detail of some of Faunalytics’ work as I was already familiar with the organisation and its approach. The talk was more of an introduction to the organisation, though, so I didn’t get too much out of it (but I still recommend their site as a great resource for data around so many different aspects of the movement).
We both then enjoyed Sandra Kyle talking on “An Extraordinary Time To Be Living Through – My Story Arc”. An older activist, Sandra told her story beautifully, reading eloquently and gently from a script (no slide deck here!) while also showing her obvious passion and continuing desire to grow old disgracefully! This simple talk was a highlight of the Forum for us.
A short afternoon tea break preceded the final talks of the day and we both went to the same set of group reviews, featuring Vegan Australia (represented by their CEO, Dr Heidi Nicholl), The Captain Paul Watson Foundation (represented by Haans Siver) and the Coalition for the Protection of Greyhounds. It was good to hear the new Vegan Australia CEO talking about their current initiatives and Haans did a great job of introducing the Paul Watson Foundation (founded as a result of Paul Watson’s departure from Sea Shepherd, an organisation we continue to strongly support). It was sad to hear about the continuing plight of greyhounds in the Australian racing industry but also inspiring to know that there are so many passionate activists helping to hold the evil protagonists to account (given that the government and regulators seem to be unable or unwilling to do so).
We caught the introduction to the Animal Justice Awards but unfortunately couldn’t stay for the award announcements as we needed to cross town to make our ferry back home.
Sunday
We again couldn’t make it to the Town Hall for the first sessions on Sunday, so we decided to aim for the first session after morning tea, giving us enough time to pop into our old favourite, Union Kiosk, for a morning coffee and cake. The little café was very busy with many others from the Forum there too.
We opted for different sessions to kick off the day, with my wife attending Matthew Lynch‘s talk on “Cancel Culture – An Open Forum”, while I headed to see Kyle Behrend (from NFPS.AI) and his talk on “The Power of AI & Automations”. Matthew’s talk was timely given the narrative around so many topics in Australia currently and his experience working with initiatives such as Dominion and the Farm Transparency Project made his insights very powerful.
I didn’t expect to see an AI-related talk on the programme for this Forum, despite AI chatter infiltrating everywhere I look at the moment. We’ve known Kyle for a long time through his work with Edgar’s Mission, a farm animal sanctuary we’ve visited and financially support on an ongoing basis. After leaving the mission, Kyle set up NFPS.AI and now works with not-for-profit organisations to help them leverage AI and automation technologies. I wasn’t sure what to expect from this talk, given so much of the hype and nonsense spouted by so-called AI experts. Thankfully, Kyle presented a very pragmatic approach to leveraging AI to help often time- and resource-poor not-for-profit organisations. He presented some interesting case studies, including one in which automation was used to categorise incoming emails based on their content, file them into different folders and then generate draft replies using AI, ready for human review. This process was saving the NFP many hours of basic email processing, freeing up valuable staff time for more value-adding activities. This is a very niche use of AI and Kyle is passionate about both the sector and the technology, so I wish him well (and have offered to help).
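The email workflow Kyle described can be sketched in a few lines. This is purely my illustrative reconstruction – the category keywords, folder names and templated reply below are my own inventions, not NFPS.AI’s actual implementation – with a simple keyword matcher standing in for the content categorisation and a canned template standing in for the LLM call that drafts the reply:

```python
# A minimal sketch of the email-triage pattern: categorise incoming
# mail by content, file it, and queue a draft reply for human review.
# The categories, keywords and templates are hypothetical examples.

CATEGORY_KEYWORDS = {
    "donations": ["donate", "donation", "receipt"],
    "volunteering": ["volunteer", "helping out"],
    "general": [],  # fallback folder when nothing matches
}

def categorise(subject: str, body: str) -> str:
    """Pick the first category whose keywords appear in the email text."""
    text = f"{subject} {body}".lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "general"

def draft_reply(category: str, sender: str) -> str:
    """Stand-in for an LLM call that drafts a reply for human review."""
    templates = {
        "donations": "Thank you for supporting us! Your receipt is attached.",
        "volunteering": "Thanks for offering to help - here's how to get started.",
        "general": "Thanks for getting in touch; we'll reply shortly.",
    }
    return f"Dear {sender},\n\n{templates[category]}\n\n[DRAFT - review before sending]"

# Example: an incoming email is filed and a draft queued for review.
category = categorise("Monthly donation", "Please send my donation receipt.")
print(category)                      # donations
print(draft_reply(category, "Sam"))
```

The crucial design choice, and what made Kyle’s approach pragmatic, is that the AI only ever produces a *draft* – a human reviews everything before it’s sent.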
In the brief lunch break, we quickly headed back to Union Kiosk and managed to order and get a table before the masses arrived from the Forum. A tasty jaffle and another nice coffee with the good vibes of the friendly crowd made for an enjoyable break.
For the first session of the afternoon, we again split up, with my wife opting for Abigail Boyd MP (from The Australian Greens) with “Animal Cruelty Under Capitalism” while I went to Paul Bevan‘s (from Magic Valley) talk on “Cultivated Meat – The Future of Food”. Abigail was impressive and passionate, especially for a politician, and illustrated by example just how much of our current economic system sadly has animal exploitation as its foundation. Meanwhile, Paul Bevan did an excellent job of explaining how his company, Magic Valley, is proceeding towards production with its first range of cultivated meats (including the world’s first cultivated lamb, very Aussie!). This is an interesting space, and a controversial one among vegans due to its use of animal cells, but Paul did a good job of explaining how his (patent-pending) process works and fielded a broad range of questions very well. Paul is certainly an impressive CEO with a passion for removing the need to kill animals for those who still want to eat “meat”.
The next session saw us both with Athena (from Animal-Free Science Advocacy) on “The Power of Story: Reframing False Narratives in Animal Exploitation Industries” (my wife was meant to attend a talk from a vegan interior designer in this timeslot, but it was cancelled). The message of this talk was excellent in explaining how to turn the narrative around from the way the animal exploitation industries would like to talk about things to a more accurate story from which to expose their obvious cruelty. Given Athena’s role in marketing, the delivery of the presentation surprisingly let down the message a little, but the content was still excellent.
Next up, my wife headed to Gary Hall (from Sheep Advocate Australia) and his talk “Sheep Crisis”, while I went to Kimberley Oxley (from Animals Australia) with “The C Words: Using Social Media to drive change for animals through compassion, connection and content”. Gary operates a sheep rescue close to where we live and his passion in advocating for sheep was palpable (as was his frustration with the slow progress to achieve even small improvements in their lives in the industry). As supporters of Animals Australia for many years, it was great to hear such an excellent presentation from Kimberley Oxley. The material was targeted perfectly for this audience and delivered very well, a highly impactful talk from one of the most professional animal rights organisations in the world (I’d encourage you to check out the quality of their content if you’re unfamiliar with their work).
Afternoon tea was a catered affair with two nice choices of vegan cakes as well as vegan samosas, before the final session kicked off. These final sessions were all group reviews and we split up for coverage, so my wife saw presentations by World Animal Protection, GREY2K USA Worldwide (represented by Carey M. Theil) and Action for Dolphins (represented by Hannah Tait), while I got Animals Australia (Kim Oxley again) and the Animal Justice Party (the Australian Alliance for Animals was also meant to present but didn’t). Hannah was particularly impressive, a young activist leading an organisation doing great work for dolphins. Kim Oxley did a good job again outlining the current priorities for Animals Australia and the Animal Justice Party (AJP) representatives did a decent job of introducing the party and its values. (Disclosure: we were both paid up members of the AJP until they sided with the Labor government in Victoria during the pandemic, unbelievably supporting policies that infringed on the most basic human rights and certainly not in alignment with the claimed values of the party.)
All too soon, it was time for everyone to come back together for a short wrap-up, thanking the volunteers and so on.
It was a great event and kudos to the volunteers for getting it up and running again in difficult circumstances. There was a broad range of organisations and diversity in the speaker line-up (again in contrast to many tech conferences).
We both came away from the Forum feeling inspired and keen to help out some of the organisations (as well as continuing our financial support for some of them).
Another opportunity to work with McGill University undergraduate students came my way when Rob Sabourin again asked me to be interviewed during one of his sessions of the ECSE-428-001 – Software Engineering Practice course for his Winter 2024 cohort.
My latest interview was different, in that the focus was on the Scaled Agile Framework (SAFe) based on my recent experience of being involved in a large project using this framework. We discussed the general ideas around SAFe as well as talking specifically about where and how testing fits in. While there are many criticisms of this framework from the agile community, we discussed some of the beneficial aspects of it in the context of very large programs of work.
It was great to have so much engagement from the students physically in the room with Rob and I hope that by hearing about real-life experiences they get a more well-rounded perspective of not only the theory but also the practice of software engineering.
I’m always willing to share my knowledge and experience; it’s very rewarding personally as well as providing an opportunity to give back – I’ve had a lot of help and encouragement along my career journey (including from Rob himself) and remain incredibly grateful for it.
I missed the announcement of the publication of the 2023-24 World Quality Report so I’m catching up here to maintain my record of reviewing this annual epic from Capgemini (I reviewed the 2018/19, 2020/21 and 2022/23 reports in previous blog posts).
I’ve taken a similar approach to reviewing this year’s effort, comparing and contrasting it with the previous reports where appropriate. This review is a lengthy post but, even if you read it in its entirety, I’m still doing you a favour compared to reading the 96 pages of the actual report!
TL;DR
The 15th edition of the “World Quality Report” is the fourth I’ve reviewed in detail and this is the thickest one yet (in every sense?).
As expected, AI is front and centre this year, which is somewhat ironic given that AI & ML barely got a mention in last year’s report. The hype is real, and some of the claims made by respondents about their use of AI and its expected benefits strike me as being very optimistic indeed. Their faith in the accuracy of their training data is particularly concerning.
Realizing value from automation clearly remains a huge challenge and I found the data around automation in this year’s report more depressing than usual. I got the feeling that AI is seen as the silver bullet in solving automation woes and I think a lot of organizations are about to be very disappointed.
It would have been nice to see some questions around human testing to help us to understand what’s going on in these large organizations but, alas, there was nothing in the report to enlighten us in this area. The prevalence of Testing Centres of Excellence (CoEs) is another hot topic, despite previous reports suggesting that such CoEs were becoming less common as the movement towards agility marched on.
There continue to be many problems with this report – from the sources of its data, to the presentation of findings, and through to the conclusions drawn from the data. The lack of responses from smaller organizations means that the results remain heavily skewed to very large corporate environments, which perhaps goes some way to explaining why my lived reality working with organizations to improve their testing and quality practices is quite different to that described in this report.
“Quality Engineering” is not clearly delineated from testing, with the concepts often being used interchangeably – this is confusing at best and potentially misleading.
The focus areas of the report changed almost completely, from “six pillars of QE” last year to eight (largely different) areas this year, making comparisons from one report to the next difficult or impossible. The sample set of organizations is the same as last year in the interests of year-on-year comparison, so why change the focus areas, making the value of any such comparisons highly questionable? Is this a deliberate ploy or just poor study design?
Unlike the content of these reports, my advice remains steadfast – don’t believe the hype, do your own critical thinking and don’t take the conclusions from such surveys and reports at face value. While I think it’s worth keeping an interested eye on trends in our industry, don’t get too attached to them – the important ones will surface and then you can consider them more deeply. Instead, focus on building excellent foundations in the craft of testing that will serve you well no matter what the technology du jour happens to be.
The survey (pages 87-91)
This year’s report runs to 96 pages, the longest since I’ve been reviewing them (up from a mere 80 pages last year). I again looked at the “About the study” section of the report first as it’s important to get a picture of where the data came from to build the report and support its recommendations and conclusions.
The survey size was again 1750, the same number as for the 2022/23 report.
The organizations taking part were again all of over 1000 employees, with the largest number (35% of responses) coming from organizations of over 10,000 employees. The response breakdown by organizational size was the same as that of the previous three reports, with the same organizations contributing every time. While this makes cross-report comparisons perhaps more valid, the lack of input from smaller organizations unfortunately continues and inevitably means that the report is heavily biased & unrepresentative of the testing industry as a whole.
While responses came from 32 countries (as per the 2022/23 report), they were heavily skewed to North America and Western Europe, with the US alone contributing 16% and then France with 9%. Industry sector spread was similar to past reports, with “Hi-Tech” (19%), “Financial Services” (15%) and “Public Sector/Government” (11%) topping the list.
The types of people who provided survey responses this year were also very similar to previous reports, with CIOs at the top (24% again), followed by QA Testing Managers and IT Directors. These three roles comprised over half (59%) of all responses.
Introduction (pages 4-5)
The introduction is the usual jargon-laden opening to the report, saying little of any value. But there’s no surprise when it comes to the focus of this year’s epic:
…the emergence of a true game changer in the field of software and quality engineering: Generative AI adoption to augment our engineering skills, accelerated like never before. The lack of focus on quality seen in the last few years is becoming more visible now and has brought back the emphasis on the Hybrid Testing Center of Excellence (TCoE) model, indicating somewhat of a reversal trend.
Do the survey’s findings reflect the game-changing nature of generative AI around quality engineering? What’s with the “lack of focus on quality seen in the last few years” when previous reports have been glowing about QE and its importance in the last couple of years? And what exactly is a “Hybrid Testing Center of Excellence”? Let’s delve in to find out.
Executive Summary (pages 6-7)
While the Executive Summary is – as you’d expect – a fairly high level summary of the report’s findings, a couple of points are worth highlighting from it. Firstly:
…almost all organizations have transitioned from conventional testing to agile quality management. Evidently, they understand the necessity of adapting to the fast-paced digital world. An agile quality culture is permeating organizations, albeit often at an individual level rather than at a holistic program level. Many organizations are adopting a hybrid mode of Agile. In fact, 70% of organizations still see value in having a traditional Testing Center of Excellence (TCoE), indicating somewhat of a reversal trend.
I’m intrigued by what the authors mean by “conventional testing” and “agile quality management”, as well as the fact that the majority of organizations still adopt a “traditional” TCoE. Secondly:
What is clear is the extended knowledge and skills that are required from the QE experts who operate in agile teams. Coding skills in particular (C#, Java, SQL, Python), and business-driven development (BDD) and test-driven development (TDD) competencies, are in demand.
The idea that “QE experts” need coding skills and competency in BDD and TDD strikes me as unrealistic. I’m not sure whether the authors are referring to expert testers with some development skills, expert developers with some testing skills or some superhuman combination of tester, developer and BA (remembering that, of course, BDD is neither a development nor a testing skill, per se).
With all the “game-changing” talk around AI, there’s a nod to reality:
A significant percentage (31%) remains skeptical about the value of AI in QA, emphasizing the importance of an incremental approach.
Key recommendations (pages 8-9)
The “six pillars of QE” from the last report (viz. “Agile quality orchestration”, “Quality automation”, “Quality infrastructure testing and provisioning”, “Test data provisioning and data validation”, “The right quality indicators” and “Increasing skill levels”) no longer warrant a mention, with this year’s recommendations being in these eight areas instead:
Business assurance
Agile quality management
QE lifecycle automation
AI (the future of QE)
Quality ecosystem
Digital core reliability
Intelligent product testing
Quality & sustainability
The recommendations are generally of the cookie cutter variety and could have been made regardless of the survey results in many cases. A couple of them stood out, though. Firstly, under “Digital core reliability”:
Use newer approaches like test isolation, contract testing etc., to drive more segmentation and higher automated test execution.
The idea that test isolation and contract testing are leading edge approaches in the automation space is indicative of how far behind many large organizations must be. Secondly, under “Intelligent product testing”:
Invest in AI solutions for test prioritization and test case selection to drive maximum value from intelligent testing.
I was wondering what “intelligent product testing” was referring to, so maybe the authors are suggesting a delegation of the intelligence aspect of testing to AI? I’m aware of a number of tools that claim to prioritize tests and make selections from a library of such test cases. But I’m also aware that good test prioritization relies on a lot of contextual inputs, so I’m dubious about any AI’s ability to do this “to drive maximum value” from (intelligent) testing.
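To make concrete why I’m dubious, here’s a deliberately naive sketch of what such test-prioritization tooling boils down to at its core: score each test case on a couple of measurable signals (here, recent failure rate and churn in the code it covers, with weights I’ve picked arbitrarily for illustration) and run the riskiest first. Real tools are more elaborate, but the contextual inputs a good tester weighs – business risk, recent incidents, stakeholder concerns – are exactly the ones that don’t reduce neatly to such a score:

```python
# Naive, illustrative test prioritization: rank tests by a weighted
# risk score. The weights and the test data are invented examples.

def priority(test: dict) -> float:
    """Higher score = run earlier. Weights are arbitrary illustrations."""
    return 0.7 * test["recent_failure_rate"] + 0.3 * test["covered_code_churn"]

tests = [
    {"name": "login_flow",    "recent_failure_rate": 0.4, "covered_code_churn": 0.9},
    {"name": "report_export", "recent_failure_rate": 0.1, "covered_code_churn": 0.2},
    {"name": "checkout",      "recent_failure_rate": 0.8, "covered_code_churn": 0.5},
]

# Run order: riskiest first.
for test in sorted(tests, key=priority, reverse=True):
    print(test["name"])
# checkout, login_flow, report_export
```

Whether a scheme like this “drives maximum value” clearly depends entirely on whether the chosen signals and weights reflect what actually matters in context – which is precisely the human judgment being delegated away.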
Current trends in Quality Engineering & Testing (p11-59)
Exactly half of the report is focused on current trends, broken down into the eight areas detailed in the previous section. Some of the most revealing content is to be found in this part of the report. I’ve broken down my analysis into the same sections as the report. Strap yourself in.
Business assurance
This area was not part of the previous year’s report. There’s something of a definition of what “business assurance” is, even if I don’t feel much wiser based on it:
Business assurance encompasses a systematic approach to determining business risks and focusing on what really matters the most. It consists of a comprehensive plan, methodologies, and approaches to ensure that business operation processes and outcomes are aligned with business standards and objectives.
One point of focus in this section is value stream mapping (which featured in the previous report) but the authors’ conclusions about its widespread adoption appear at odds with the data:
Apparently many organizations are running pilots, but the authors’ statement that “Many businesses are now shifting from mere output to a results-driven mindset with value stream mapping (VSM)” seems to go too far.
Moving on to how QE is involved in “Business Assurance testing” (which is not defined and I’m unclear as to what it actually is):
This data was from a small sample (314 of the 1750 respondents) and their conclusion that “both business stakeholders and testers are working together during the UAT process to drive value” (from the second bar) is hardly a revelation.
I remain unclear why this area is included in this report as it doesn’t seem related to quality or testing in a strong way. The first recommendation in this section of the report is “Leverage the testing function to deliver business outcomes” – isn’t this what good testing always helps us to do?
Agile quality management
This area wasn’t in the previous year’s report (the closest subject matter was under “Quality Orchestration in Agile Enterprises”) and the authors remind us that “last year, we noticed a paradigm shift in AQM [Agile Quality Management] through the emphasis placed on embracing agile principles rather than merely implementing agile methodologies”. The opening paragraph of this section sets the scene, sigh (emphasis is mine):
The concept of agile organizations has been a part of boardroom conversation over the past decade. Businesses continue to pursue the goal of remaining relevant in volatile, uncertain, complex, and ambivalent environments. Over time, this concept has evolved into much more than a mere development methodology. It has become a way of thinking and a mindset that emphasizes continuous improvement, adaptability, and customer-centricity. This evolution is clearly represented in the trends shaping agile quality management (AQM) practices today.
I’m going to assume “ambivalent” is a typo (or ChatGPT error). Much more worrisome to me is the idea that agile has evolved from being a development methodology to “a way of thinking and a mindset that emphasizes continuous improvement, adaptability, and customer-centricity” – this is exactly the opposite of what I’ve seen! The great promise shown by the early agilists has been hijacked, certified, confused, process-ized, packaged and watered down, moving away from being a way of thinking to a commodity process/methodology. Either the authors’ knowledge of the history of the agile movement is lacking or they’re confused (or both). Having said that, if they genuinely have evidence that there’s a move away from “agile as methodology” (aka “agile by numbers”) to “agile as mindset”, then I think that would be a positive thing – but I failed to spot any such evidence from the data they share.
The first data looks at how QE is organized to support agile projects, with a whopping 70% still relying on a “Traditional Testing Center of Excellence” and essentially all the rest using “product aligned QE” (which I think basically means having QE folks embedded with the agile teams).
Turning to skills:
Worryingly, there is no mention here of actual testing skills, but the authors remark:
Organizations are now prioritizing development skills over traditional testing skills as the most critical skills for quality engineers. Development-focused skills like C#/Java/SQL/Python and CI/CD are all ranked in the top 5, while traditional testing skills like automation and performance tooling ranked at the bottom of the results.
Looking at this skills data, they also say:
We feel this is in alignment with the industry’s continued focus on speed to market; as quality engineers continue to adopt more of a developer mindset, their ability to introduce automation earlier in the lifecycle increases as is their ability to troubleshoot and even remediate defects on a limited basis.
I find the lack of any commentary around human testing skills (outside of coding, automation, DevOps, etc.) deeply concerning. These skills are not made redundant by agile teams/”methodologies” and the lack of humans experiencing software before it’s released is not a trend that’s improving quality, in my opinion.
Turning to the challenges faced by quality engineers in Agile projects:
This is a truly awful presentation of the data: the lack of contrast in the colours used on the various bars makes it almost unreadable. The authors highlight one data point, the 77% of UK organizations for which “lack of knowledge of Agile techniques” was the most common challenge, saying:
This could indicate the speed at which these organizations moved into a product-aligned QE model while still trying to utilize their traditional testers.
They slip in and out of terminology such as "Agile techniques" throughout the report (which is unhelpful). They appear to be claiming that "traditional testers" moving into an Agile "product-aligned QE model" lack knowledge – maybe they're referring to good human testers being asked to become (QE) developers, in which case this lack of knowledge is to be expected.
The following text directly contradicts the previous data on the very high prevalence of “traditional” testing CoEs:
Many quality engineering organizations predicted the evolution and took a more proactive stance by shifting from a traditional ‘Testing Center of Excellence’ approach to a product-based quality engineering setup; however, most organizations are still in the process of truly integrating their quality engineering teams into an agile-centric model. As shown in the diagram… only 4% of respondents report that more than 50% of their quality engineering teams are operating in a more agile-centric pod-based model.
In closing out this section, the authors say (emphasis is mine):
…quality engineering teams are asked to take on a more development-focused persona and build utilities for the rest of the organization to leverage rather than focusing solely on safeguarding quality. The evolution of agile practices, the integration of AI and ML, and the synergy between DevOps and agile are transforming quality engineering in infinitely futuristic ways. The question is, when will organizations embrace these changes en-masse, adopt proactive strategies, and make them a norm? Next year’s report will probably unveil the answer.
I find it highly unlikely that the answer to this question will be revealed in the next report, as such questions have rarely – if ever – been answered in the past. Given how difficult it’s been for large organizations to move towards agile ways of working (despite 20+ years of trying), I’d suggest that en masse movement towards QE is unlikely to eventuate before the next big thing distracts these trend followers from realizing whatever benefits the QE model was designed to bring.
QE lifecycle automation
This section of the report is on all things automation, now curiously being referred to as “QE lifecycle automation”. It kicks off with the benefits the respondents are looking for from automation:
I’ll make no comment on these expectations apart from mentioning that, in my own experience of implementing many automation initiatives, I haven’t generally seen "reduced testing effort" or "improved test efficiency" (though I’m not sure what they mean by that here).
Moving into where organizations intend to focus their automation efforts in the coming year:
The authors say:
We were expecting AI to steal the limelight this year since it is the buzzword in the boardroom these days. We think that AI is the new tide that needs to be ridden with caution, which means quality engineering and testing (QE&T) teams need to understand how AI-based tools work for them, how they can do their jobs better, and bring better outcomes for their customers.
More than 50% of the respondents were eager to see testing AI with automation, which is well ahead of the other top focus areas like security and mobile.
Interestingly, we found that functional (18% of respondents) and requirements (20% of respondents) automation were of the least focus for organizations, presumably because of the challenges in automating and regular updating involved in these areas. Perhaps, this is an area where we can expect to see AI tools becoming a key part of automation toolkits.
Surprise, surprise – it’s all about AI! It’s time to ride the tide of AI, folks. What does "Testing AI" mean? Is it testing systems that have some aspect of AI within them, or using AI to assist with testing? Whatever it is, it’s apparently the top priority.
I also don’t understand what "Functional" means here and how the other automation focus areas relate to it. For example, if I implement a UI-driven test on a mobile device using automation, does that come under "Functional" or "Mobile" (or both)? It’s hard for me to fathom how respondents answered such questions without understanding these distinctions.
The last part of the quote above is illustrative of the state of our industry – building robust and maintainable automation is just too hard, so we’ll deprioritize that and get distracted by this shiny AI thing instead.
The data around integration of automated tests into pipelines makes for shocking reading:
The authors note:
Overall, the survey revealed that only 3% of respondents’ organizations have more than half of their suites integrated which may be due to diminishing returns or lack of QE access to orchestration instances. This comes as a surprise since most of the organizations now have some automated test suites integrated into their pipelines for smoke and regression testing.
I agree that this low percentage of organizations actually getting their automated test suites into pipelines is surprising! Maybe they shouldn’t be so worried about trying to make use of AI, but rather focus on making use of what they already have in a more meaningful and efficient way. The responses to the next question, "What are the most impactful challenges preventing your Quality Engineering organization from integrating into the DevOps/DevSecOps pipeline?", revealed that a whopping 58% (the top answer) said "Quality Engineering team doesn’t have access to enterprise CI instance" – I rest my case. With no sense of irony about the report’s findings more generally, the authors say:
What was worth noting was that the more senior respondents (those furthest from the actual automation) reported higher levels of integration than those solely responsible for the work.
The much-vaunted dominance of low/no-code & AI automation solutions is blown away by this data, with not much going on outside of pilots:
The data in this section of the report, as per similar sections in previous reports, only goes to show how difficult it is for large organizations to successfully realize benefits from automation. Maybe “AI” will solve this problem, but I very much doubt it as the problems are generally not around the technology/tools/toys.
AI (the future of QE)
With AI being such a strong focus of this year’s report, this section is where we get into the more detailed data. Firstly, it’s no surprise that organizations see AI as the next big thing in achieving “higher productivity”:
Looking at this data, it’s all about speed, with any consideration around improving quality coming well down the list ("lesser defects" – which should of course be "fewer defects" – comes last). Expressing their surprise at this lack of focus on using AI to reduce defects, the authors say (emphasis is mine):
With Agile and DevOps practices being adopted across organizations, there is more continuous testing with multiple philosophies like “fail fast” and “perpetual beta” increasing the tolerance for defects being found, as long as they can be fixed quickly and efficiently.
I find the response around training data in this question quite disturbing:
The authors are quite bullish on this, saying:
…the trust in training data for AI solutions is very high, which in turn reflects the robust infrastructure and processes that organizations have developed over the years to collect continuous telemetry from all parts of the quality engineering process.
I’m less than convinced this is the reason for such confidence and it feels to me like organizations want to feel confident but have little evidence to back up that confidence. I really hope I’m wrong about that, since AI solutions are clearly being seen as useful ways to drive significant decisions, in particular around testing.
The next question is where the reality of AI usage is revealed:
The use cases seem very poorly thought out, though. For example, does performance testing belong under "Performance testing or Test prioritization" or "Performance testing/engineering"? And why would performance testing be lumped together with test prioritization when, to me at least, they’re such different use cases?
I’ll wrap up my observations on this AI section with this data:
I’m suspicious that all of these statements get almost the same level of agreement.
Quality ecosystem
This section (which is similar in content to what was known as “Quality infrastructure testing and provisioning” in the previous report) kicks off by looking at “cloud testing”:
…82% of the survey respondents highlighted cloud testing as mandatory for applications on cloud. This highlights a positive and decisive shift in the testing strategy that organizations are taking on cloud and infrastructure testing. It also demonstrates how important it is to test cloud-related features for functional and non-functional aspects of applications. This change in thinking is a result of organizations realizing that movement to cloud alone does not make the system available and reliable.
The authors consider this data to be very positive, but my initial thought was: why isn’t this number 100%?! If your app is in/on the cloud, where else would you test it?
The rest of this section was more cloud stuff, SRE, chaos engineering, etc. and I didn’t find anything noteworthy here.
Digital core reliability
In this section of the report (which had no obvious corresponding section in last year’s report), the focus is on foundational digital (“core”) technology and how QE manages it. In terms of “ensuring quality”:
About half of the respondents say that a dedicated QA team is responsible for “ensuring the quality” but there’s also a high percentage still using CoEs or having testers embedded into product/feature teams according to this data.
Turning to automation specifically for digital core testing:
The authors go on to discuss an “enigma” around the low level of automation revealed by this data (emphasis is mine):
This clearly is due to the same top challenges around testing digital core solutions – the complexity of the environment owing to the mix of tools, dependencies related to environment and data availability. When it is hard to test, it is even harder to automate.
There’s also another contradiction we need to address – while 31% of organizations feel the pressure to keep up with the pace of development teams developing digital core solutions, 69% of organizations do not feel the pressure. With <40% of automation coverage and digital core solutions becoming more SaaS and less customized, it is a bit of an enigma to unravel. Why don’t organizations feel the pressure to keep up? Is that because they have large QA teams rushing to complete all testing manually? Or are teams stressed by the number and frequency of code drops coming in for testing?
It’s good to see the authors acknowledging that the data is contradictory in this area. There are other possible causes of this “enigma”. Maybe the respondents didn’t answer honestly (or, more generously, misunderstood what they were being asked in some of the survey questions). Maybe they don’t feel the pressure to keep up because they’ve lowered their bar in terms of what they test (per the previous commentary on the “increasing tolerance for defects”).
The following is one of the more sensible suggestions in the entire report:
When it comes to test automation, replicating what functional testers do manually step by step through a tool might not be the best possible approach. Trying to break the end-to-end tests into more manageable pieces with a level of contract testing could be a more sustainable approach for building automated tests.
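To make the contract-testing suggestion above a little more concrete, here’s a minimal sketch of the idea in Python. This is my own illustrative example (the order service, its fields and the helper names are all hypothetical, not taken from the report): instead of replaying a full end-to-end flow through the UI, the consumer pins down just the shape of the provider response it actually depends on, which stays cheap to run and maintain.

```python
# Minimal consumer-side "contract" check (illustrative only).
# The provider response and field names are hypothetical examples.

def fake_provider_response() -> dict:
    # Stand-in for a real call to the provider service; in practice
    # the same contract would also be verified on the provider's side
    # in its own pipeline (e.g. with a tool like Pact).
    return {"order_id": "A-123", "status": "PAID", "total_cents": 4999}

# The consumer's expectations, expressed as field -> required type.
ORDER_CONTRACT = {
    "order_id": str,
    "status": str,
    "total_cents": int,
}

def contract_violations(response: dict, contract: dict) -> list:
    """Return a list of contract violations (empty means the contract holds)."""
    problems = []
    for field, expected_type in contract.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(response[field]).__name__}"
            )
    return problems

if __name__ == "__main__":
    violations = contract_violations(fake_provider_response(), ORDER_CONTRACT)
    assert violations == [], violations
    print("order contract holds")
```

A handful of small checks like this, run against each service boundary, can cover much of what a brittle end-to-end suite was trying to prove – which is, I think, the sustainability the quote is getting at.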
Wrapping up this section, the focus turns to skills required by testers when testing “digital core”:
The top answer was "Quality Assurance" – is that a testing skillset? It seems awfully broad as an answer to the question, in any case. The report’s tendency for confirmation bias rears its ugly head again, with the authors making their own judgements that are unsupported by the data (emphasis is mine):
When we looked at the data, quality assurance skills were rated as the most important skillset over domain or platform skills required for testing digital core solutions. This datapoint feels like bit of an outlier when compared to two other datapoints – 35% of organizations still utilize business users in validating digital core solutions and 33% of organizations pointed to gaps in domain expertise as a challenge to overcome when testing digital core solutions. The truth is while solid testing and quality assurance skills are sought after due to the nature of digital core solutions, domain expertise remains invaluable.
Intelligent product testing
This section (again not obviously close to the content of a section from last year’s report) looks to answer this bizarre question:
What really goes into creating the perfect intelligent product testing system ecosystem?
The first data in this section refers to different test activities and how important they are considered to be:
The presentation of this data doesn’t make sense to me, since the respondents were asked to rate the importance of various activities while the results are expressed as percentages. Most of the test activities get very similar ratings anyway, with the authors noting that 36% seems low for “E2E Testing”:
…we were surprised to learn that only 36% thought end-to-end (E2E) testing was necessary. It has been established that the value of connected products is distributed throughout the entire chain, and its resilience depends on the weakest link in the chain, and yet, the overall value of E2E testing has not been fully recognized. One reason we suspect could be that it requires significant investment in test benches, hardware, and cloud infrastructure for E2E testing.
Note how the analysis of the data changes again here – they refer to the necessity of a test activity (E2E testing in this case), not a rating of its importance. I had to chuckle at their conclusion here, which essentially says “E2E testing is too hard so we don’t do it”.
I see no compelling evidence in the report’s data to support the opening claim in the following:
One of the significant trends observed in the WQR 2023 survey is the expectation for improving the test definition. The increasing complexity of products, along with hyper-personalization to create a unique user experience, necessitates passing millions of test cases to achieve the perfect user experience. Realistically speaking, it’s not possible to determine all possible combinations.
The next question has the same rating vs. percentage data issue as the one discussed above, but somewhat supports their claim I suppose:
I again don’t see the evidence to support the following statement (nor do I agree with it, based on my experience):
For product testers, the latest findings suggest that automation toolchain skills and programming languages are now considered mainstream and no longer considered core competencies.
Quality & sustainability
It’s good to see some focus on sustainability in the IT industry, especially when “reports show that data centers and cloud contribute more to greenhouse emissions than the aviation sector”.
This section of the report looks specifically at the role that “quality engineering play in reducing the environmental impact of IT programs and systems”, which seems a little odd to me. I don’t really understand how anyone could answer the following question (and what it even means to do so):
Although the data in this next chart doesn’t excite me, I had to include it here as it takes the gong (amongst a lot of competition) for the worst way of presenting data in the report:
The final chart I’ll share from this section of the report again suffers from a data issue, in that the question it relates to (which itself seems poorly posed and too broad) is one of frequency while the data is shown as a percentage (of what?):
Perhaps I’m reading this wrong, but it seems that almost none of the organizations surveyed are really doing very much to test for sustainability (which is exactly what I’d expect).
Sector analysis (p60-85)
As usual, I didn’t find the sector analysis to be as interesting as the trends section. The authors identify the same eight sectors as the previous year’s report, viz.
Automotive
Consumer products, retail and distribution
Energy, utilities, natural resources and chemicals
Financial services
Healthcare and life sciences
Manufacturing
Public sector
Technology, media and telecoms
Last year’s report provided the same four metrics for each sector, but this year a different selection of metrics was presented by way of summary for each. A selection of some of the more surprising metrics follows.
In the Consumer sector, 50% of respondents say only 1%-25% of their organization has adopted Quality Engineering.
In the Consumer sector, 76% of organizations are using traditional testing centres of excellence to support agile projects.
In the Financial Services sector, 75% of respondents said they are using or plan to use advanced automation solutions like No Code/Low Code or AI-based automation frameworks.
In the Financial Services sector, 82% of organizations support agile projects through a traditional Testing Center of Excellence.
In Healthcare, 81% of organizations have confidence in the accuracy of data used to train AI platforms.
In the Public sector, 60% of organizations said the top-most issue in agile adoption was lack of time to test.
Geography-specific reports
The main World Quality Report was supplemented by a number of short reports for specific locales. I only reviewed the Australia/New Zealand one and didn’t find much of interest there, with the focus of course being on AI, leading the authors to suggest:
We think that prompt engineering skills are the need of the hour from a technical perspective.