<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Qase Blog | Articles about our product, software testing and the QA community.]]></title><description><![CDATA[News and articles from Qase; about our product, the software world, and the software testing / DevOps community.]]></description><link>https://qase.io/blog/</link><image><url>https://qase.io/blog/favicon.png</url><title>Qase Blog | Articles about our product, software testing and the QA community.</title><link>https://qase.io/blog/</link></image><generator>Ghost 5.74</generator><lastBuildDate>Sun, 19 Apr 2026 00:19:46 GMT</lastBuildDate><atom:link href="https://qase.io/blog/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Quality is not a separate Team. It's an Engineering discipline. And it's failing!]]></title><description><![CDATA[Software defects cost the U.S. economy an estimated $59.5 billion every year, roughly 0.6% of GDP. Adjusted for the complexity of modern distributed systems, microservices, and continuous deployment pipelines, the real number today is almost certainly far larger.]]></description><link>https://qase.io/blog/quality-is-engineering/</link><guid isPermaLink="false">69e0fe07913d8800010fd40d</guid><category><![CDATA[Quality Assurance]]></category><dc:creator><![CDATA[Nitin Deshmukh]]></dc:creator><pubDate>Thu, 16 Apr 2026 15:23:07 GMT</pubDate><media:content url="https://d36r73waboa44k.cloudfront.net/2026/04/quality_is_eng.png" medium="image"/><content:encoded><![CDATA[<img src="https://d36r73waboa44k.cloudfront.net/2026/04/quality_is_eng.png" alt="Quality is not a separate Team. It&apos;s an Engineering discipline. And it&apos;s failing!"><p>Software defects cost the U.S. economy an estimated $59.5 billion every year, roughly 0.6% of GDP. That figure comes from a landmark study by the National Institute of Standards and Technology (NIST), &quot;The Economic Impacts of Inadequate Infrastructure for Software Testing.&quot;[1] Over $22.2 billion of that cost could be eliminated by catching bugs earlier in the development lifecycle.</p><p>That study was published in 2002. Adjusted for the complexity of modern distributed systems, microservices, and continuous deployment pipelines, the real number today is almost certainly far larger.</p><p>And yet quality assurance professionals, the people who carry the heaviest share of that burden, are burning out at a rate of 75%. This is happening not because they are failing, but because organizations have made quality someone else&apos;s problem instead of everyone&apos;s responsibility.</p><h2 id="the-gap-between-what-teams-know-and-what-they-can-cover"><strong>The gap between what teams know and what they can cover</strong></h2><p>Every engineering organization has felt this tension. You know your test coverage is insufficient. Developers know it. Product managers know it. The people writing tests definitely know it. But there is no realistic path to closing the gap by adding QA headcount or asking the existing team to work harder.</p><p>Some industry observers have started calling this the &quot;Anxiety Gap&quot;: the distance between the coverage an engineering organization knows it needs and the coverage it can realistically achieve through traditional methods.</p><p>The downstream effects hit the entire team. 
Escaped defects pull developers into firefighting. Regression cycles balloon sprint timelines for everyone. Release confidence drops, and the whole organization slows down. Meanwhile, the people with the deepest quality expertise spend their cognitive energy on repetitive maintenance instead of the strategic work that actually requires their judgment.</p><h2 id="why-more-features-made-the-problem-worse"><strong>Why more features made the problem worse</strong></h2><p>For the better part of a decade, test management platforms competed on feature count. More dashboards. More configuration options. More integrations. The assumption was that teams needed more capability. What they actually needed was less overhead.</p><p>Every new feature that ships without reducing operational burden adds to the maintenance surface. The platforms designed to support quality ended up contributing to the very problem they were supposed to solve, and the burden fell disproportionately on the QA function while the rest of the engineering team moved on to the next sprint.</p><p>The correction is underway. Engineering teams are no longer asking &quot;what can this tool do?&quot; They are asking &quot;how much of our team&apos;s time does this tool give back?&quot;</p><h2 id="the-agentic-shift-is-real-but-unevenly-distributed"><strong>The agentic shift is real, but unevenly distributed</strong></h2><p>The testing industry is repositioning rapidly around AI and autonomous agents. Major platforms are rebranding around the word &quot;agentic.&quot; The underlying demand is genuine: engineering teams need testing capabilities that can operate with minimal human supervision on the repetitive, high-volume work that currently consumes disproportionate time; whether that work is done by QA professionals, developers writing tests, or SDETs maintaining automation frameworks.</p><p>But there is a meaningful difference between platforms positioning around the concept and platforms that have shipped working agentic capabilities into production.</p><h2 id="what-we-built-with-aidenand-why-it-matters-now"><strong>What we built with AIDEN - and why it matters now</strong></h2><p>At Qase, we shipped <a href="https://qase.io/blog/qase-product-updates-january-2026/#:~:text=Pillar%201%3A%20Intent%2DBased%20Engineering"><u>AIDEN&apos;s Agentic Mode</u></a> in January 2026; not as a response to the current positioning wave, but as the result of a deliberate product strategy focused on reducing cognitive load across the entire engineering workflow.</p><p><a href="https://qase.io/ai-software-testing?ref=qase.io"><u>AIDEN</u></a> is designed around a specific philosophy: quality expertise (wherever it lives in the team) should be the input, not the bottleneck. A QA engineer, a developer, or a product manager knows exactly how a feature should behave. The gap has always been the translation cost:- converting that thinking into selectors, wait statements, and boilerplate scripts that only a subset of the team can write.</p><p>AIDEN collapses that translation layer. Anyone on the team can describe what needs to be verified in plain language. AIDEN breaks it into actionable steps, navigates the application, captures visual evidence, and generates executable test code. When the application changes mid-run, AIDEN detects the failure, analyzes the change, and self-corrects.</p><p>This is not about replacing quality professionals. 
It is about making quality accessible to the entire engineering team while reclaiming the time currently spent on work that does not require deep expertise. QA specialists operate as strategic quality architects. Developers contribute to test coverage without context-switching into a different skill set. The whole team moves faster because quality stops being a &#x201C;bottleneck&#x201D; and starts being shared infrastructure.</p><h2 id="what-engineering-teams-should-be-evaluating"><strong>What engineering teams should be evaluating</strong></h2><p>The &quot;Anxiety Gap&quot; will not be solved by any single tool. It will be solved by a shift in how organizations think about quality capacity; not as the output of one team, but as a capability the entire engineering organization owns.</p><p><strong>Consolidate before you add.</strong> Every additional tool introduces integration overhead and maintenance burden. One platform that developers, QA, and product can all use is worth more than three best-of-breed tools that serve different audiences and do not talk to each other.</p><p><strong>Automate the repetitive, not the strategic.</strong> The goal of autonomous agents is to remove humans from the parts of the loop that do not benefit from human judgment. Test generation, regression execution, and self-healing scripts are pattern-driven tasks. Exploratory testing, risk assessment, and coverage strategy are where human expertise creates irreplaceable value.</p><p><strong>Measure time reclaimed, not features shipped.</strong> The metric that matters is how many hours per week your engineering team gets back for strategic work. Developer time lost to flaky tests matters just as much as QA time lost to manual regression.</p><h2 id="the-path-forward-from-bug-finding-to-reliability-engineering"><strong>The path forward: from bug-finding to reliability engineering</strong></h2><p>Better tooling alone will not close the gap. The deeper shift is organizational.</p><p>The quality crisis will not resolve until organizations stop treating quality assurance as tactical bug-finding owned by a single team and start recognizing it for what it actually is: strategic reliability engineering that the entire engineering organization is accountable for. <a href="https://qase.io/blog/shift-left-testing/"><u>Quality is not a phase at the end of the sprint</u></a>. It is not a gate that one team owns. It is the shared discipline that prevents the disasters that never show up on a P&amp;L statement: the outages that do not happen, the regressions that never reach production, the security gaps that never get exploited.</p><p>Until executives understand this (until quality is treated as a first-class engineering function rather than a team you staff up when things break) organizations will continue collapsing under impossible expectations while the $60 billion defect problem persists.</p><p>The 75% burnout rate among QA professionals is not inevitable. It is the predictable result of making quality one team&apos;s problem instead of everyone&apos;s responsibility, and then asking that team to do the work manually while undervaluing the strategic judgment that only humans can provide. The technology to change the first half of that equation exists today. The leadership to change the second half is overdue.</p><hr><p><em>Sources:</em></p><ul><li><em>[1] National Institute of Standards and Technology (NIST), &quot;The Economic Impacts of Inadequate Infrastructure for Software Testing,&quot; 2002. 
Total annual cost of software defects: ~$59.5 billion. Avoidable costs through earlier detection: $22.2 billion.</em></li></ul>]]></content:encoded></item><item><title><![CDATA[How to prioritize risks for testing?]]></title><description><![CDATA[<p>This article takes the risk statements from the <a href="https://qase.io/blog/how-to-identify-risks/" rel="noreferrer">previous one</a> and turns them into a ranked list you can act on. Each risk gets a score: exposure = likelihood &#xD7; impact. The result is a prioritized table with traceable rationale, and explicit decisions about what not to invest in.</p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/sudmIeiVe_w?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="How to Prioritize Risks Which Need Testing?"></iframe></figure><p>At the</p>]]></description><link>https://qase.io/blog/how-to-prioritize-risks-for-testing/</link><guid isPermaLink="false">69d648196d96c2000166593c</guid><category><![CDATA[Testing Strategy]]></category><category><![CDATA[Test Management]]></category><dc:creator><![CDATA[Vitaly Sharovatov]]></dc:creator><pubDate>Wed, 08 Apr 2026 12:22:10 GMT</pubDate><content:encoded><![CDATA[<p>This article takes the risk statements from the <a href="https://qase.io/blog/how-to-identify-risks/" rel="noreferrer">previous one</a> and turns them into a ranked list you can act on. Each risk gets a score: exposure = likelihood &#xD7; impact. The result is a prioritized table with traceable rationale, and explicit decisions about what not to invest in.</p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/sudmIeiVe_w?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="How to Prioritize Risks Which Need Testing?"></iframe></figure><p>At the end of the process described in the <a href="https://qase.io/blog/how-to-identify-risks/" rel="noreferrer">previous article</a> you have a list of explicit risk statements, each with a quantified impact estimate. What you don&apos;t yet have is a ranking, and without one you&apos;re still making decisions by instinct: whoever is loudest in the room, or whatever risk feels most dramatic, gets the attention.</p><h2 id="why-ranking-matters">Why ranking matters</h2><p>We don&apos;t do health checkups every month. Not because health doesn&apos;t matter, but because the return drops: if last month&apos;s checkup found nothing, the odds of catching something new this month are low. The cost of the next checkup exceeds what it&apos;s likely to find. That&apos;s diminishing returns.</p><p>Same with testing. Whether you&apos;re starting from zero or already have a suite, the question is the same: which risks get investment first, and when is enough enough? You can&apos;t test everything. You need to know which risks matter most, invest there first, and have a reason for where you stop.</p><p>Without a ranking, that decision gets made by whoever argues the hardest.</p><h2 id="exposure-likelihood-%C3%97-impact">Exposure: likelihood &#xD7; impact</h2><p>To rank risks, you need a number to compare them by. The simplest one that works: exposure = likelihood &#xD7; impact, both scored 0 to 10. 
Multiply, and you have a single number you can sort by.</p><p>Likelihood is how often you expect this to actually happen. If it&apos;s already happening every sprint, that&apos;s a 9. If it&apos;s rare and you have controls in place, that&apos;s a 2. I&apos;d suggest to use any information you can find: incident history, expert gut feel, past experience with similar systems.</p><p>Impact is always financial in the end, but the path to the money differs. In SaaS it&apos;s revenue loss. In safety-critical it&apos;s lawsuits and liability. In regulated industries it&apos;s fines and legal exposure. The question to ask your stakeholders: what does failure actually cost us here?</p><h2 id="a-concrete-walkthrough">A concrete walkthrough</h2><p>Here&apos;s how this works with two concrete risks.</p><p>Take an engineer&apos;s fear like &quot;codebase too complex&quot;. You gathered data and converted this fear to the risk statement: &quot;If the codebase becomes too complex, then new features take 3x longer to implement, resulting in delayed releases and $200K in contract penalties&quot;. Likelihood: this is already happening every sprint, score 9. Impact: $200K in contract penalties, score 8. Exposure: 72.</p><p>Now take the fear &quot;security keeps me up at night&quot; from the previous article: &quot;If user input is not properly sanitized, then SQL injection attacks may occur, resulting in a data breach and $2M+ in fines and churn&quot;. Likelihood: rare, some controls already in place, score 2. Impact: $2M+, score 10. Exposure: 20.</p><p>Code complexity: 72. Security vulnerability: 20, a 3.5&#xD7; difference.</p><p>When I first saw a result like this, I was amazed: the chronic risk outranks the scary, catastrophic one. But the managers agreed, even though before they had been dismissing the engineers&apos; complaints about code complexity for months.</p><h2 id="building-the-full-ranked-list">Building the full ranked list</h2><p>Once you&apos;ve scored every risk from your list, you end up with a table you can sort by exposure:</p>
<!--kg-card-begin: html-->
<table><thead><tr><th>Risk</th><th>Exposure</th></tr></thead><tbody><tr><td>Code complexity</td><td>72</td></tr><tr><td>Payment regression</td><td>56</td></tr><tr><td>Performance degradation</td><td>45</td></tr><tr><td>Security vulnerability</td><td>20</td></tr></tbody></table>
<!--kg-card-end: html-->
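<p>If it helps to see the mechanics end to end, here is a minimal sketch in plain Python that scores each risk, ranks the list by exposure, and applies acceptance thresholds like the ones discussed below. The likelihood and impact splits for the two middle rows are assumed for illustration (only their exposures appear in the table above), and the threshold values are examples to adapt, not rules.</p><pre><code class="language-python"># Minimal sketch: score, rank, and classify risks by exposure.
# Scores mirror the example table above; the splits for the two
# middle risks and the thresholds are assumed for illustration.

risks = [
    {"name": "Code complexity",         "likelihood": 9, "impact": 8},
    {"name": "Payment regression",      "likelihood": 7, "impact": 8},   # assumed split, exposure 56
    {"name": "Performance degradation", "likelihood": 5, "impact": 9},   # assumed split, exposure 45
    {"name": "Security vulnerability",  "likelihood": 2, "impact": 10},
]

def classify(exposure):
    # Example thresholds; tune them to your own risk appetite.
    if exposure &gt; 50:
        return "unacceptable: reduce through controls and testing"
    if exposure &gt;= 20:
        return "acceptable with controls in place"
    return "acceptable: no additional investment needed"

for risk in risks:
    risk["exposure"] = risk["likelihood"] * risk["impact"]

# Sort by exposure, highest first, and print the ranked list.
for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{risk["name"]}: exposure {risk["exposure"]} ({classify(risk["exposure"])})')
</code></pre>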
<p>One thing I&apos;ve learned: don&apos;t get into &quot;7.2 vs 7.4&quot; debates. The goal isn&apos;t precise measurement, it&apos;s a consistent ranking with a defensible rationale. Small score differences rarely matter, what matters is that you can explain why something is ranked where it is, and revisit it when conditions change.</p><h2 id="one-more-dimension-lifecycle-stage">One more dimension: lifecycle stage</h2><p>This ranking isn&apos;t static, because priorities shift as a product matures. In an early prototype, functional suitability and usability dominate: the product needs to work and be usable before anything else. In production, reliability, security, and performance efficiency all rise in importance.</p><p>This means the same risk list can produce different rankings at different stages. That&apos;s not inconsistency, it&apos;s the model working correctly. I&apos;d recommend revisiting your rankings whenever the lifecycle stage changes, or whenever there&apos;s a significant shift in usage, architecture, or context.</p><h2 id="not-everything-gets-testing">Not everything gets testing</h2><p>Some risks won&apos;t justify additional investment, and that&apos;s a decision, not an oversight.</p><p>I find it helpful to have thresholds ready when you&apos;re looking at the ranked list:</p><ul><li><strong>Unacceptable</strong>&#xA0;(exposure &gt; 50): must reduce through controls and testing</li><li><strong>Acceptable with controls</strong>&#xA0;(exposure 20&#x2013;50): can proceed if adequate controls are in place</li><li><strong>Acceptable</strong>&#xA0;(exposure &lt; 20): no additional investment needed</li></ul><p>In our example, the security vulnerability scores 20, right on the boundary. In a regulated context with lower thresholds, that might require explicit sign-off. In a commercial SaaS context, it might be documented as acceptable with existing controls and reviewed quarterly.</p><p>The important thing is that &quot;we&apos;re choosing not to invest here&quot; is a conscious decision, not something that happened by default. In my experience, documenting the rationale, assigning an owner, and setting a review date makes the difference between a deliberate acceptance and a forgotten risk.</p><h2 id="what-you-have-at-the-end">What you have at the end</h2><p>At this point you have a prioritized list of risks, ranked by exposure with explicit rationale, and documented acceptance decisions for anything you&apos;re not acting on now.</p><p>In the next article, we&apos;ll take this ranked list and map risks to quality characteristics, then decide where, how, and how much to test. That&apos;s where the portfolio gets built.</p><p>The full research behind this series is at&#xA0;<a href="https://github.com/BeyondQuality/beyondquality/blob/main/research/testing_economics/step2.md?ref=qase.io">BeyondQuality</a>.</p>]]></content:encoded></item><item><title><![CDATA[How to identify risks which matter most for testing]]></title><description><![CDATA[<p>This post is devoted to identifying and quantifying risks before deciding what to test. That means collecting stakeholder fears, converting them into explicit risk statements, and estimating their impact. 
The output is a prioritized list of risks, not a test plan.</p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/DooZD44oi84?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="How to Identify Risks That Actually Matter Most For Testing"></iframe></figure><p>In most of the companies I worked for, testing</p>]]></description><link>https://qase.io/blog/how-to-identify-risks/</link><guid isPermaLink="false">69d646cd6d96c2000166592d</guid><category><![CDATA[Testing Strategy]]></category><category><![CDATA[Test Management]]></category><dc:creator><![CDATA[Vitaly Sharovatov]]></dc:creator><pubDate>Wed, 08 Apr 2026 12:17:10 GMT</pubDate><content:encoded><![CDATA[<p>This post is devoted to identifying and quantifying risks before deciding what to test. That means collecting stakeholder fears, converting them into explicit risk statements, and estimating their impact. The output is a prioritized list of risks, not a test plan.</p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/DooZD44oi84?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="How to Identify Risks That Actually Matter Most For Testing"></iframe></figure><p>In most of the companies I worked for, testing efforts usually started quite simply: someone opens a ticket tracker or a test management tool and starts writing test cases. What system behaviour to verify, which flows to cover, which edge cases to include.</p><p>However, the more important part to discuss and align on first is: what could go wrong, and what would it cost?</p><p>Without figuring that out, you&apos;re designing a testing portfolio without knowing what you&apos;re investing against. You might end up testing what&apos;s easy to test rather than what matters most.</p><h2 id="two-purposes-not-one">Two purposes, not one</h2><p>So how do you figure out what could go wrong? You ask people. And what you get back isn&apos;t structured risk data: it&apos;s fears and concerns. Managers worry about revenue. Engineers worry about fragile integrations. Support worries about the same complaints every sprint. These fears are unstructured, emotional, sometimes vague, but they signal real uncertainty about real risks.</p><p>Gathering fears at the start of the process does two things.</p><p>It gives you signals for further analysis: once you know what stakeholders are worried about, you can convert those fears into explicit risks, quantify impact, and justify testing investments in concrete terms.</p><p>It also produces alignment. Managers, engineers, product owners, and operations often have completely different fears, and they often don&apos;t know each other&apos;s concerns. Surfacing fears in the same room forces a shared picture of what matters. That shared picture is worth building before you write a single test case.</p><h2 id="who-to-ask-and-what-to-ask">Who to ask, and what to ask</h2><p>Different stakeholders carry different fears because they sit in different places relative to failure.</p><p>Managers worry about business impact: revenue loss, regulatory penalties, customer retention, reputation. 
Engineers worry about technical failure: system complexity, integration brittleness, performance under load, security vulnerabilities. Operations worries about deployment stability and incident recovery. Support teams know the failure patterns that actually reach users.</p><p>You need all of these perspectives. The most common mistake here is only consulting technical teams and missing the business side entirely, or only consulting managers and missing the failure modes engineers already know about.</p><p>How you collect fears depends on your team and culture. In some organizations, structured cross-functional workshops work well: they surface assumptions and produce shared vocabulary quickly. In others, only one-on-ones get people to open up. In most, a combination works best: workshops for the group picture, one-on-ones for the things people won&apos;t say in a room, retrospectives and incident reports for recurring patterns. You know your company and your people better than any framework does.</p><p>One pitfall: rushing identification because it feels like preliminary work, not &quot;the real work&quot;. It is the real work, and skipping it means you&apos;re estimating without a map.</p><h2 id="from-fear-to-risk-statement">From fear to risk statement</h2><p>Fears are raw material. A fear like &quot;I&apos;m worried about performance&quot; or &quot;security keeps me up at night&quot; is a signal, not an input you can act on.</p><p>To make a fear actionable, convert it into an explicit risk statement:</p><blockquote>&quot;If [condition], then [consequence] may occur, resulting in [impact].&quot;</blockquote><p>The template forces specificity. Let&apos;s take two concrete examples.</p><p>Take our insulin pump from Post 1. &quot;I&apos;m worried the dosing calculation could be wrong&quot; is a fear. The risk statement is: &quot;If a defect in the dosing algorithm produces an incorrect insulin dose, then a patient may receive a dangerous dose, resulting in serious harm or death&quot;. That&apos;s a reliability and functional suitability risk. The consequence is named. The impact is named. There&apos;s nothing vague left.</p><p>Now the internal CMS. &quot;I&apos;m worried content won&apos;t save properly&quot; becomes &quot;If a user submits a draft and the save operation fails silently, then content may be lost without a clear error message, resulting in wasted time and editor frustration&quot;. That&apos;s a usability risk. Consequence named. Impact named.</p><p>Same structure, but completely different stakes. Which, in my opinion, is exactly the point.</p><h2 id="quality-check-before-moving-on">Quality check before moving on</h2><p>Not every risk statement is ready for quantification. In my experience, having a simple checklist helps catch the weak ones early.</p><ul><li>Is the trigger observable? (&quot;Peak load during checkout&quot; is. &quot;The system is under stress&quot; is not.)</li><li>Is the consequence measurable? (&quot;Response times exceed 2 seconds&quot; is. &quot;Degraded performance&quot; is not.)</li><li>Is the impacted quality characteristic named?</li><li>Is there an owner?</li><li>Is there a detection signal: what would tell you the risk is materializing?</li><li>Is it actionable: if its score rises, what decision does that drive?</li></ul><p>If a risk statement fails any of these, refine it before moving to quantification. 
A risk that can&apos;t be detected and can&apos;t be acted on isn&apos;t a risk statement, it&apos;s still a fear.</p><h2 id="quantify-the-impact">Quantify the impact</h2><p>Once risks are explicit and quality-checked, we can proceed and quantify their potential impact.</p><p>Everyone involved brings a different piece of the picture from their expertise. Engineers know how a risk would disrupt development: would it block the team, slow down deployments, eat into capacity for new features. Managers know the financial exposure: revenue loss, regulatory consequences, customer churn. Product owners know the user impact. Ops knows the operational fallout. The point is to get all of this into one conversation, in the same language, about the same risks.</p><p>Back to the insulin pump. Engineers flag that the production line hasn&apos;t been tested in extreme heat, and summers are getting hotter, so who knows, maybe insulin concentrations will start varying across batches. Clinical staff point out that incorrect dosing could lead to hypoglycemia, hospitalization, or death. Regulatory warns that a field defect means mandatory FDA reporting, possible recall, criminal liability. Business sees lawsuits, lost certifications, destroyed market trust.</p><p>Now the CMS. Engineers mention the editor has quirks with certain browsers. Users say formatting sometimes gets lost on save. Business shrugs: some people get annoyed, maybe they stop using it.</p><p>Same process, same questions. Completely different scale of consequences.</p><p>I find three-point estimation works much better than a single number here: optimistic, most likely, pessimistic. It makes uncertainty visible instead of hiding it behind a false-precision guess.</p><p>In my experience, two things help keep this step honest.</p><p>First, don&apos;t stop at &quot;we don&apos;t know&quot;. You always have enough information to produce a rough range. Historical incidents, comparable failures at similar companies, domain expert judgment. A rough range with named uncertainty is more useful than silence.</p><p>Second, get everyone&apos;s expertise into the same conversation. If only engineers contribute, you miss the business picture. If only managers contribute, you miss what engineers already know about where the system is fragile.</p><h2 id="what-you-have-at-the-end">What you have at the end</h2><p>By the end of this process you have a list of explicit risk statements, not vague fears, each with a three-point impact estimate and documented stakeholder agreement on which risks matter most and why. To me, this is the most important outcome: you now have something concrete you can make decisions with, argue about, and revisit when things change. And you have people across the company who collaborated on building it and now better understand the risks the whole organization faces.</p><p>In the next article, we will categorize these risks by ISO 25010 quality characteristic and prioritize by risk exposure, so you know where to invest first.</p><p>The full research behind this series is at&#xA0;<a href="https://github.com/BeyondQuality/beyondquality/blob/main/research/testing_economics/step1.md?ref=qase.io">BeyondQuality</a>.</p>]]></content:encoded></item><item><title><![CDATA[Q1 2026: Qase Product Updates]]></title><description><![CDATA[See what Qase shipped in Q1 2026. Tools designed to reduce maintenance overhead and amplify the impact of quality engineering teams.
]]></description><link>https://qase.io/blog/q1-2026-qase-product-updates/</link><guid isPermaLink="false">69ce394b6d96c200016658f8</guid><category><![CDATA[News & Updates]]></category><dc:creator><![CDATA[Glen Holmes]]></dc:creator><pubDate>Thu, 02 Apr 2026 10:37:43 GMT</pubDate><media:content url="https://d36r73waboa44k.cloudfront.net/2026/04/Product-update-Q1-2026.png" medium="image"/><content:encoded><![CDATA[<img src="https://d36r73waboa44k.cloudfront.net/2026/04/Product-update-Q1-2026.png" alt="Q1 2026: Qase Product Updates"><p>There&apos;s a defining moment in every engineering organization when the QA team&apos;s work stops being a checkpoint at the end of the sprint and starts being the infrastructure the whole company runs on.</p><p>You&apos;re not just catching bugs anymore. You&apos;re designing the quality systems that let your organization ship with confidence, maintain velocity as the codebase grows, and give leadership a clear picture of release health without someone having to manually compile a report. That is a different, harder, and far more valuable job than what most people outside QA understand.</p><p>We spend a lot of time talking with you, and one thing is abundantly clear: your expertise has never mattered more. But as your product scales, so does the operational weight: more test cases, more shared steps to maintain, more dashboards, more pipelines, more teams depending on the systems you built. The work compounds faster than the headcount does.</p><p>For Q1 2026, we focused entirely on clearing that weight. We spent the last three months building tools designed to amplify what you already do best, so you can spend your time on the strategic, high-leverage quality engineering that actually moves the needle, not on the maintenance and repetition that surrounds it.</p><p>Here is how we&apos;re supporting your growth.</p><h2 id="pillar-1-test-automation-minus-the-boilerplate"><strong>Pillar 1: Test Automation, Minus the Boilerplate</strong></h2><p>The hardest part about scaling test automation isn&apos;t finding the time to write tests. It&apos;s the translation overhead. You know exactly how a feature should behave, that knowledge exists in your head in clear, plain language. The work is converting it into selectors, wait statements, and boilerplate scripts that actually run.</p><p>In Q1, we shipped a set of AIDEN capabilities designed to collapse that gap at every stage of your automation workflow.</p><p><a href="https://docs.qase.io/en/articles/11012497-aiden-qa-architect?ref=qase.io#:~:text=within%20the%20application.-,Define%20Your%20Steps%3A,-AIDEN%20will%20break" rel="noreferrer"><strong>Agentic Mode.</strong></a> You state the goal: &quot;Verify a user can purchase shoes on our marketplace,&quot; and AIDEN figures out the path. It breaks your natural language into actionable steps, captures screenshots for visual feedback, and generates test code. 
You can stay high-level or get specific: &quot;Click &apos;Get a Demo&apos; and fill the email field as &apos;apidemo@qase.io&apos;.&quot; AIDEN finds the field and handles the input.</p><figure class="kg-card kg-image-card"><img src="https://downloads.intercomcdn.com/i/o/wsaz8vex/2109557718/c4eeb783ca83286e1f18a2983182/image.png?expires=1775196000&amp;signature=1d630ce76fdbac9aa8858a8c076a7c7cab59c5b89289c06aa7b59771f4688500&amp;req=diEnH8x7moZeUfMW3nq%2BgWk4nmuQ9%2BXrvjC7o7xgmlhk9Ot9iARFdnCNHbbF%0AwBnpHiqBB0Eakzv5CjxZfCvCWeU%3D%0A" class="kg-image" alt="Q1 2026: Qase Product Updates" loading="lazy"></figure><p><a href="https://docs.qase.io/en/articles/11981696-aiden-test-advisor?ref=qase.io" rel="noreferrer"><strong>Test Advisor in Folder View.</strong></a> Test Advisor has always helped you identify which test cases are ready for automation before you start converting. What&apos;s new this quarter is that it&apos;s now available in Folder view, and you can bulk-run it.</p><p>Select all your test cases from a folder, hit &apos;Run Advisor&apos;, and AIDEN analyses them in one go. Results come back color-coded: green cases are ready to automate with minimal oversight; yellow cases have good potential but are missing detail; red cases lack the steps or specificity that automation requires. Fix what needs fixing, re-analyse, then convert. No more discovering mid-generation that a test case wasn&apos;t ready.</p><p>The insight is yours, and it&apos;s also something you can share with any developer on the team who wants to contribute to test automation but isn&apos;t sure where to start.</p><p><a href="https://docs.qase.io/en/articles/11012497-aiden-qa-architect?ref=qase.io#h_d3aeaf4c0b" rel="noreferrer"><strong>Parameterized Test Conversion.</strong></a> If you&apos;ve ever maintained a data-driven test suite, you know the problem: one test scenario, twelve input combinations, twelve separate test cases. Keeping them consistent when something changes is the kind of work that makes automation feel like more maintenance than it&apos;s worth.</p><p>AIDEN now handles this natively. When you convert a manual test that includes multiple input combinations, such as different usernames, passwords, and expected results, AIDEN identifies the parameterized structure, extracts the data, generates one reusable test flow, and creates individual execution instances for each combination automatically. One test. Multiple runs. All your combinations covered.</p><figure class="kg-card kg-image-card"><img src="https://downloads.intercomcdn.com/i/o/wsaz8vex/2210579801/40f08f122d326ca4778c81548d95/image.png?expires=1775196000&amp;signature=8c07a1af448be620414ee41e18c9e994204c1af5a846574a6fec8bc88aefabf0&amp;req=diImFsx5lIlfWPMW3nq%2BgcrCHQgnoWBhf75K9nc2vvbORe3pi8uvC6TNhqFq%0Anvpu3du4LdW%2BgOVRvnxs8H37S3g%3D%0A" class="kg-image" alt="Q1 2026: Qase Product Updates" loading="lazy"></figure><p><a href="https://docs.qase.io/en/articles/11012497-aiden-qa-architect?ref=qase.io#:~:text=copied%20and%20downloaded.-,API%20Requests,-AIDEN%20supports%20executing" rel="noreferrer"><strong>API Testing and Random Data.</strong></a> Tests aren&apos;t just front-ends. AIDEN can execute GET and POST API requests via CURL right in your flow, so a single test can validate both UI behaviour and the API calls underneath. And to keep your environments clean, the Random Data Generator creates unique emails, usernames, and passwords on the fly. 
No more hardcoded test_user_123 polluting your pre-production data.</p><p><a href="https://docs.qase.io/en/articles/11012497-aiden-qa-architect?ref=qase.io#h_48efd646fd:~:text=clear%2C%20atomic%20steps.-,Self%20corrected%20Step%20Generation%20in%20Agent%20mode,-When%20using%20Agent" rel="noreferrer"><strong>Self-Correcting Generation.</strong></a> When your application changes mid-generation and a step breaks, AIDEN no longer stops and hands the problem back to you. It detects the failure, analyses what changed, and re-attempts with a corrected approach. The generation process keeps moving.</p><p><a href="https://docs.qase.io/en/articles/11012497-aiden-qa-architect?ref=qase.io#h_25fc566ec7:~:text=difficult%20to%20test.-,Canvas%20Testing,-Canvas%2Dbased%20applications" rel="noreferrer"><strong>Canvas Support.</strong></a> Heavy &lt;canvas&gt; elements, tools like Miro or Figma, have always been automation dead zones. AIDEN now interprets visual elements within a canvas just like standard DOM elements. If your product includes a canvas-based UI, that coverage gap is closed.</p><h2 id="pillar-2-architecting-the-enterprise-test-library"><strong>Pillar 2: Architecting the Enterprise Test Library</strong></h2><p>When you build a mature test suite, shared steps transition from a convenience into a foundational dependency. Treat thousands of them as a flat list and you end up with the kind of codebase where nobody&apos;s quite sure what anything does, changes break things unexpectedly, and onboarding a new engineer takes two hours of tribal knowledge transfer.</p><p>We&apos;ve fundamentally upgraded how you manage these assets.</p><p><a href="https://docs.qase.io/en/articles/5563709-shared-steps?ref=qase.io#h_ca606e4fbc" rel="noreferrer"><strong>Folder Structures for Shared Steps</strong></a><strong>.</strong> Shared steps can now be organized into folders and subfolders by domain, like Billing, Auth, or Compliance, mirroring your actual product structure. Bulk move, clone, and rename entire directories. The shared step library now works the way a codebase should: navigable, structured, and maintainable as it grows.</p><figure class="kg-card kg-image-card"><img src="https://downloads.intercomcdn.com/i/o/wsaz8vex/2107236794/f774502fb75fbcb8c3d66bb43082/34547.png?expires=1775127600&amp;signature=fddce4b8aecaa437ed51b501ab51c612ac454127eacdb883003af89495d4cc08&amp;req=diEnEct9m4ZWXfMW1HO4zUVj11uCOZmf6kzSV13rC596x8MaW17easOMcJN9%0Aa1zbEN869ZLBO3v22VM%3D%0A" class="kg-image" alt="Q1 2026: Qase Product Updates" loading="lazy"></figure><p><a href="https://docs.qase.io/en/articles/5563709-shared-steps?ref=qase.io#h_25d3079a10" rel="noreferrer"><strong>Child Steps in Shared Steps.</strong></a> Complex systems demand modularity. You can now nest child steps directly inside shared steps, building sophisticated reusable workflows. A Checkout Flow shared step can contain the governed logic for payment, address validation, and inventory checks nested within it. 
Reuse at the macro level, visibility at the micro level.</p><figure class="kg-card kg-image-card"><img src="https://downloads.intercomcdn.com/i/o/wsaz8vex/2100307183/f2fa19a2f817f3dec511f37a6f4f/5056.png?expires=1775127600&amp;signature=f7e35ba1f0f726b3b3a91538b36cc039dc279f50bbbaad1a26ad95002f5bb8cc&amp;req=diEnFsp%2BmoBXWvMW1HO4zfeG8IxmSdJpKZhtnzNxxVV5XGDtQiXNsuMi5SXW%0A4q%2FdxMypV4pStNHH7uU%3D%0A" class="kg-image" alt="Q1 2026: Qase Product Updates" loading="lazy"></figure><p><a href="https://docs.qase.io/en/articles/5563709-shared-steps?ref=qase.io#h_ee7d0d7d1e" rel="noreferrer"><strong>Bulk Conversion and Safety.</strong></a> Select multiple local steps and promote them to Global Shared Steps in a single click. For cleanup, bulk delete with options to either cascade the deletion across all associated test cases or convert them back to local steps first, so you don&apos;t silently break historical data while refactoring.</p><h2 id="pillar-3-data-that-makes-the-whole-team-look-good"><strong>Pillar 3: Data That Makes the Whole Team Look Good</strong></h2><p>Test data tells the story of your product&apos;s health. The challenge is that it usually tells that story in a way only the person who wrote the query can fully interpret. We want QA leads, engineering managers, developers, and product owners to be able to read it too.</p><p><a href="https://docs.qase.io/en/articles/13342243-qql-widgets?ref=qase.io" rel="noreferrer"><strong>Bar Charts and Donut Charts in QQL.</strong> </a>You can now switch any QQL result directly to a Bar Chart or Donut Chart view, save it as the default, and add it as a widget on your dashboard. Bar charts for distribution and trends; donut charts for proportion: automation coverage by business unit, test pass rates by team, coverage gaps at a glance. The work you&apos;ve done to build this data deserves to be understood by the people you&apos;re presenting it to. Now it will be.</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/04/Bar-chart.png" class="kg-image" alt="Q1 2026: Qase Product Updates" loading="lazy" width="1862" height="899"></figure><p><a href="https://docs.qase.io/en/articles/6417205-saved-queries?ref=qase.io#h_f7cfe24266" rel="noreferrer"><strong>Private Queries.</strong></a> When you&apos;re deep in an investigation, every exploratory query you save to the public list degrades the signal for everyone else. Queries are now private by default, visible only to you and promotable to public when they prove useful to the team, but not demotable once shared. A one-way door that keeps the shared workspace clean.</p><h2 id="pillar-4-powering-the-complex-enterprise"><strong>Pillar 4: Powering the Complex Enterprise</strong></h2><p>The modern engineering org doesn&apos;t map neatly to &quot;one project, one repo.&quot; Monorepos, microservices, distributed systems, multiple teams working in multiple languages: your tooling needs to reflect that reality, not ask you to simplify down to it.</p><p><a href="https://docs.qase.io/en/articles/9123660-requirements-traceability-report?ref=qase.io#:~:text=of%20testing%20efforts.-,Overview,-Requirement%20Traceability%20Report" rel="noreferrer"><strong>GitLab Requirement Traceability.</strong></a> For teams that need to demonstrate that specific requirements have been validated by specific tests, whether in regulated environments, compliance-heavy orgs, or anywhere auditors ask for evidence, we&apos;ve extended full requirements traceability support to GitLab. 
Link test cases to GitLab requirements, track which requirements have test coverage, and see which open failures are blocking a requirement from being marked satisfied. GitHub teams have had this for a while. GitLab teams now have it too.</p><p><strong>Multi-Project Routing.</strong> Our JavaScript and Python reporters now handle monorepos natively. A single CI run that touches three product lines routes those results to the correct Qase projects simultaneously, with no custom pipelines, no post-processing, no manual splitting after the fact.</p><p><strong>Java Reporter Performance.</strong> Result processing time in our Java reporters has been cut by 40% on large projects. If you&apos;re running a significant Java suite, this shows up directly in your pipeline times.</p><p><strong>Network Profiler in JS Reporters.</strong> JS reporters now capture network activity alongside test results. When a test fails because of a request that timed out or returned the wrong status code, that evidence is in the report, not something you have to reconstruct from server logs after the fact.</p><p><strong>Interactive Test Reports.</strong> We published Qase Report, a CLI tool that generates a fully interactive, standalone HTML report from your test results in a single command. No server, no dashboard login required. Share it with a stakeholder, open it in a browser, done. Full write-up: <em>Interactive Test Reports in a Single Command</em>.</p><p><strong>Expanded Framework Support.</strong> We added native reporters for MSTest and xUnit v3, and open-sourced a reporter for Pest (PHP). For mobile teams, NUnit and JUnit 4 for Android. Behave now supports step-level attachments; CucumberJS gets parameter and suite annotation support; Mocha, Vitest, and Jest support expected results and input data at the step level. Jira Data Center 10.x is now officially supported, whether you&apos;re on the latest version or holding stable.</p><h2 id="the-finishing-touches"><strong>The Finishing Touches</strong></h2><p>Alongside the major pillars, we squashed 10 bugs across the platform this quarter. Not every improvement ships as a feature announcement. Sometimes it&apos;s fixing what&apos;s been quietly bothering you for three months.</p><p>We also moved our community feedback and roadmap to <strong>roadmap.qase.io</strong>. Submit requests, vote on what you want built, and track ideas as they move from submitted to in progress to shipped. If you used Canny before, your data migrates automatically. The roadmap is public, the conversation is open, and what you vote for directly shapes what we prioritise.</p><p>We&apos;re proud of what shipped this quarter, and we&apos;re even more proud of the work you do. You are the architects of software quality, and our goal is to build a platform that feels like a true partner in that work. Everything in this post is live today.</p><p>We can&apos;t wait to see what you build next.</p><p>Happy testing!</p><hr><p>Don&apos;t miss out on future updates and valuable content! Connect with us on<strong>&#xA0;</strong><a href="https://www.linkedin.com/company/qaseio/?ref=qase.io"><strong><u>LinkedIn</u></strong></a><strong>&#xA0;</strong>to stay informed about all things Qase.</p>]]></content:encoded></item><item><title><![CDATA[How much testing do we need? 
Economics of testing]]></title><description><![CDATA[This post introduces the Cost of Quality framework, portfolio thinking, and a four-step process for making testing decisions grounded in risk.]]></description><link>https://qase.io/blog/how-much-testing-do-we-need/</link><guid isPermaLink="false">69c550cd6d96c200016658d5</guid><category><![CDATA[Test Management]]></category><dc:creator><![CDATA[Vitaly Sharovatov]]></dc:creator><pubDate>Thu, 26 Mar 2026 15:32:41 GMT</pubDate><media:content url="https://d36r73waboa44k.cloudfront.net/2026/03/economics-of-test-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://d36r73waboa44k.cloudfront.net/2026/03/economics-of-test-1.png" alt="How much testing do we need? Economics of testing"><p>Testing is an economic investment, not a cost to minimize. The question &quot;how much testing do we need?&quot; is broken: it should be &quot;which risks justify which investments?&quot;. This post introduces the Cost of Quality framework, portfolio thinking, and a four-step process for making testing decisions grounded in risk.</p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/Wv6no4j68tE?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="How Much Testing Does Your Product Need?"></iframe></figure><p>I read arguments and battles on &quot;what&apos;s the proper coverage&quot; on the internet a lot. This saddens me.</p><p>To me, this question is completely broken: it frames testing as a quantity problem. And quantity problems get quantity answers: &quot;80% code coverage&quot;. Arbitrary numbers that feel responsible but aren&apos;t grounded in anything real.</p><p>To me, the right question is: which risks justify which investments?</p><h2 id="a-story-about-cutting-coverage-and-improving-quality">A story about cutting coverage and improving quality</h2><p>A few years ago I worked on a project sitting at 70% code coverage. In reports, this looked nice. In practice, the suite was full of flaky tests which failed intermittently for reasons unrelated to actual defects.</p><p>The costs were layered. The direct cost was obvious: the team was spending up to two man-days a week investigating failures that turned out to be noise. But the planning cost was worse: because the spikes were unpredictable, some weeks nothing, some weeks the whole team was blocked, sprint planning became guesswork. And then there was the trust cost, which was the most damaging: even when tests passed, nobody was confident they meant anything. The suite had lost its credibility. There was no way to tell &quot;we&apos;re fine&quot; from &quot;the tests just aren&apos;t catching anything&quot;.</p><p>I ran the numbers: the effort going into maintaining and investigating those flaky tests was not producing expected value. So we cut them. Coverage dropped to 40%. We filled the gaps with manual testing while we slowly rebuilt, writing reliable tests for the things that actually mattered.</p><p>Quality improved, and not because we tested less, but because the evidence that remained was trustworthy. When the suite passed, it meant something. 
When it flagged a problem, we acted on it instead of adding it to the &quot;probably flaky&quot; pile.</p><p>&quot;Less testing&quot; was actually &quot;better investment&quot;.</p><h2 id="same-question-completely-different-answers">Same question, completely different answers</h2><p>I can think of two very different systems. An insulin pump: a defect can kill a patient. The software is FDA-regulated, subject to formal verification, extensive documentation, and extensive testing at every level. A corporate internal CMS used by 50 people to publish blog posts: a defect means wrong formatting, or a post doesn&apos;t save on the first try.</p><p>Ask both teams &quot;how much testing do you need?&quot; and you should get completely different answers. Not because one team is more professional or more serious than the other, but because the risks are different. The stakes are different. The consequences of failure are different.</p><p>Testing strategy follows from risk, not from a generic idea of &quot;the right amount&quot;.</p><h2 id="what-testing-actually-does">What testing actually does</h2><p>Testing is a learning activity; you explore a system, build understanding of how it behaves, and discover where it breaks. That learning produces evidence: about defects, failure modes, risks. In economic terms, this is appraisal: generating information that supports decisions.</p><p>It does not reduce risk on its own.</p><p>Risk is reduced when findings lead to action: a fix, a mitigation, a control, a rollback plan, a guardrail. &quot;We tested it&quot; does not mean &quot;it&apos;s safe&quot;. &quot;We tested it, found X, and did Y about it&quot; &#x2014; that&apos;s risk reduction.</p><p>This is what made the flaky test situation so corrosive. The tests were running, but the evidence they produced couldn&apos;t be trusted. So no risk was actually being reduced. The suite looked like an investment. It was a cost.</p><h2 id="testing-as-investment">Testing as investment</h2><p>Investment, in the economic sense: an asset obtained at cost, with an expectation that the future value it creates will exceed that cost.</p><p>This is exactly what testing is. You spend time, money, and attention today to reduce the probability and impact of costly failures tomorrow. The return isn&apos;t guaranteed &#x2014; you&apos;re estimating expected value under uncertainty, not booking a fixed outcome. But that&apos;s true of any investment.</p><p>Same with any insurance, actually. You pay premiums expecting never to file a claim. If you never claim, that isn&apos;t waste &#x2014; the protection was real. The decision to insure is based on risk exposure, not on the assumption you will definitely suffer a loss. Nobody buys scuba diving insurance for their daily commute. Construction sites invest heavily in safety equipment because the risks are real and the consequences of getting it wrong are severe.</p><p>Preventive healthcare works the same way. Catching a problem early is cheaper than treating it late. That&apos;s not a guarantee; it&apos;s a reason to invest.</p><p>But in my case, 70% coverage was a bad investment, more like a cost actually. The effort wasn&apos;t producing the future value we needed, namely reliable evidence about risk.</p><h2 id="the-cost-of-quality-framework">The Cost of Quality framework</h2><p>There&apos;s a useful vocabulary for where quality-related money goes. 
<a href="https://asq.org/quality-resources/cost-of-quality?srsltid=AfmBOooCrc2jboLSxLuPfXedZubghx7pSfS2XkM6kKZ_taPH9g4x0axj&amp;ref=qase.io" rel="noreferrer">The Cost of Quality framework</a> breaks it into four categories.</p><p><strong>Prevention</strong>: training, standards, design reviews, threat modeling &#x2014; activities that stop defects from being introduced.</p><p><strong>Appraisal</strong>: testing, code reviews, static analysis, audits &#x2014; activities that detect defects that exist.</p><p><strong>Internal failure</strong>: rework, debugging, re-testing, delayed releases &#x2014; costs incurred when defects are found before they reach users.</p><p><strong>External failure</strong>: incidents, support load, lost revenue, fines, reputation damage &#x2014; costs incurred when defects reach users.</p><p>External failure is the most expensive category, because it combines remediation cost with customer impact, operational disruption, and often lasting reputational damage.</p><p>The economic goal is to invest in prevention and appraisal at a level that reduces the more expensive failure costs. The two man-days a week we were spending on flaky test investigation was internal failure cost, rework caused by unreliable appraisal. We were paying the failure tax without getting the appraisal benefit.</p><h2 id="diminishing-returns-and-portfolio-thinking">Diminishing returns and portfolio thinking</h2><p>So the framework helps you see where your money goes. But even when you&apos;re investing in the right category &#x2014; appraisal &#x2014; not every dollar spent there delivers equal value.</p><p>The first tests you write tend to catch the most obvious, highest-impact problems. Each additional test catches progressively rarer or lower-impact issues. At some point, the cost of the next test exceeds the expected value of the risk it covers.</p><p>So you can&apos;t just &quot;add more&quot; &#x2014; you need to choose what kind of investment you need for which risks. That&apos;s portfolio thinking: selecting a mix of complementary activities to achieve maximum risk reduction per resource spent. Static analysis catches things integration tests don&apos;t. Exploratory testing finds what scripted tests miss. The combination matters.</p><p>To build a portfolio, you need to know what you&apos;re investing against.</p><h2 id="start-from-risks">Start from risks</h2><p>Investing against what, exactly? Risks to specific qualities your system needs to have. ISO 25010 gives a useful vocabulary for those: eight product quality characteristics that map to the categories of risk your system may carry. Functional suitability, performance efficiency, compatibility, usability, reliability, security, maintainability, portability.</p><p>For an insulin pump, the dominant risks sit in reliability, functional suitability, and security. For an internal CMS, they&apos;re mostly usability and maybe functional suitability. Different risk profiles mean different portfolios &#x2014; different mixes of testing activity, different intensity, different coverage targets.</p><p>When investment decisions are grounded in risks, they become traceable and reviewable. You can explain why you&apos;re testing what you&apos;re testing. 
You can revisit the decision when conditions change.</p><h2 id="what-comes-next">What comes next</h2><p>What we did on that project &#x2014; cutting coverage from 70% to 40%, adding manual testing, rebuilding selectively &#x2014; was an instinctive version of a more systematic process: calculate the cost, cut what isn&apos;t delivering, rebuild deliberately.</p><p>That process has four steps: identify and quantify risks, categorize and prioritize by exposure, decide where and how to invest in testing, then review and rebalance. It&apos;s a continuous cycle, not a one-time plan.</p><p>Next post in this series will cover the first step: how to identify and quantify the risks you&apos;re actually managing.</p><p>The full research behind this series is at&#xA0;<a href="https://github.com/BeyondQuality/beyondquality/blob/main/research/testing_economics/testing_economics.md?ref=qase.io">BeyondQuality</a>.</p>]]></content:encoded></item><item><title><![CDATA[Qase MCP: Bringing AI-Generated Testing Into the Real SDLC]]></title><description><![CDATA[The Qase MCP Server lets AI tools like Claude and Cursor create and manage tests directly inside Qase, keeping AI-generated testing structured, visible, and actionable.]]></description><link>https://qase.io/blog/qase-mcp-bringing-ai-generated-testing-into-the-real-sdlc/</link><guid isPermaLink="false">69c2b4206d96c200016658c1</guid><dc:creator><![CDATA[Berna Agar]]></dc:creator><pubDate>Tue, 24 Mar 2026 15:58:49 GMT</pubDate><media:content url="https://d36r73waboa44k.cloudfront.net/2026/03/1774358525797.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://d36r73waboa44k.cloudfront.net/2026/03/1774358525797.jpeg" alt="Qase MCP: Bringing AI-Generated Testing Into the Real SDLC"><p>If you work in tech, or a tech adjacent industry, you&#x2019;ll know that AI has accelerated everything, including software development. Engineers now write, refactor, and ship code faster than ever using tools like Claude, Cursor, and Copilot.</p><p>Development speed has increased, but testing workflows often struggle to keep up. Engineering teams face more features, more commits, and more releases than usual testing cycles were designed to handle.</p><p>AI is already helping close that gap. Teams increasingly use AI to generate test cases from feature descriptions, analyze code changes for regression scenarios, or explore edge cases that might otherwise be missed.</p><p>But generating tests is only part of the problem.</p><p>Organizations still need a way to manage those tests, track coverage, coordinate teams, and maintain visibility into quality over time.</p><p>This is where AIDEN and the Qase MCP Server work together.</p><h2 id="ai-can-generate-tests-teams-still-need-to-manage-them">AI can generate tests. Teams still need to manage them.</h2><p>AI assistants are excellent at producing test ideas.</p><p>Developers can ask tools like Claude or Cursor to generate test cases from a specification. AI can analyze code changes and suggest additional coverage. 
Within seconds, teams can produce dozens of potential tests.</p><p>But AI tools alone don&#x2019;t provide the structure organizations need to manage testing effectively.</p><p>Teams still need a system that provides:</p><ul><li>Traceability between requirements, tests, and defects</li><li>Coverage visibility across features and releases</li><li>Team coordination for QA workflows</li><li>Audit trails and governance required by enterprises</li><li>Human oversight to validate and evolve test strategies</li></ul><p>Without a structured testing system, AI-generated tests remain scattered across conversations, documents, or temp files.</p><p>To fully benefit from AI-generated testing, those artifacts need to live inside a system designed to manage quality.</p><p>That system is Qase.</p><h2 id="aiden-ai-built-specifically-for-testing">AIDEN: AI built specifically for testing</h2><p>AIDEN is Qase&#x2019;s AI testing expert, designed to help teams create and manage tests faster.</p><p>It allows teams to determine which tests are most suitable, as well as generate test cases directly inside Qase in multiple ways, including:</p><ul><li>Generating tests from requirements or feature descriptions</li><li>Using natural language prompts to create or update test cases</li><li>Expanding test coverage with AI-generated scenarios</li></ul><p>AIDEN is an AI purpose-built for testing workflows inside Qase.</p><p>Instead of drafting tests in external tools and manually transferring them into a test management system, AIDEN generates tests directly where they will ultimately be stored, reviewed, and executed.</p><p>This helps engineering teams reduce backlog and accelerate test creation while keeping testing artifacts structured and visible to the entire team.</p><p>But modern engineering teams don&#x2019;t rely on just one AI tool.</p><p>Developers already work with AI inside their IDEs, documentation systems, and collaboration tools. 
That&#x2019;s where the Qase MCP Server comes in.</p><h2 id="what-mcp-is-and-why-it-matters">What MCP is and why it matters</h2><p>Model Context Protocol (MCP) is an open standard that allows AI assistants to interact with external systems through structured operations.</p><p>Instead of AI tools working in isolation, MCP enables them to connect to real systems like version control platforms, documentation tools, and test management systems.</p><p>You can think of MCP as the bridge that allows AI assistants to interact with the tools teams already use.</p><p>For testing workflows, this means AI tools can do more than suggest tests, they can interact directly with the system where tests are managed.</p><h2 id="the-qase-mcp-server-connecting-ai-tools-to-your-testing-system">The Qase MCP Server: connecting AI tools to your testing system</h2><p>The Qase MCP Server allows external AI assistants to interact directly with Qase.</p><p>Teams can continue using AI tools like Claude or Cursor, but instead of generating test artifacts that remain in chats or notes, those tools can create and manage testing artifacts directly inside Qase.</p><p>Through MCP, AI assistants can:</p><ul><li>Retrieve structured testing data such as projects, suites, runs, and results</li><li>Search testing data using QQL</li><li>Create and update test cases</li><li>Access execution history and environment data</li></ul><p>This allows teams to combine the flexibility of AI assistants with the structure of a dedicated test management system.</p><p>In practice, it means AI tools can participate in real testing workflows rather than producing disconnected suggestions.</p><h2 id="a-practical-example">A practical example</h2><p>Consider a typical development workflow.</p><p>After a developer opens a pull request, a QA engineer can use an AI assistant to analyze the changes and generate relevant test cases and regression scenarios. 
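</p><p>Under the hood, an MCP interaction is a structured tool call rather than free-form text. The sketch below uses the open-source MCP Python SDK to show that shape; the server command, tool name, and arguments are illustrative placeholders, not the actual interface of the Qase MCP Server, so check the Qase documentation for the real tools and configuration.</p><pre><code class="language-python"># Shape of an MCP tool call from a client's point of view.
# The command, tool name, and arguments below are placeholders, not Qase's real API.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    server = StdioServerParameters(command="qase-mcp-server")  # placeholder command
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover the structured operations on offer
            print([tool.name for tool in tools.tools])
            await session.call_tool(            # hypothetical tool that files a test case
                "create_test_case",
                {"project": "DEMO", "title": "Checkout rejects expired cards"},
            )

asyncio.run(main())
</code></pre><p>In practice the assistant, not a script, issues these calls, but the contract is the same: named tools with structured arguments instead of copy-pasted text.</p><p>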
Through the Qase MCP Server, those tests can be created directly in Qase, where the QA team can review them, refine the scenarios if needed, and execute them as part of the testing workflow.</p><p>Instead of documenting suggested tests elsewhere or leaving them in AI chat threads, the assistant adds them directly to the appropriate test suites in Qase.</p><p>These workflows allow teams to keep the speed of AI-assisted development while ensuring that testing artifacts remain structured, visible, and manageable across the organization.</p><h2 id="why-structured-test-management-still-matters">Why structured test management still matters</h2><p>AI can accelerate test creation, but organizations still need a system that manages testing as part of the broader SDLC.</p><p>Testing systems provide the visibility teams rely on to answer important questions:</p><ul><li>What coverage exists for this feature?</li><li>What failed in the last release?</li><li>Where are defects clustering?</li></ul><p>More importantly, they help teams answer the question that ultimately matters most: are we ready to ship?</p><p>They also provide the governance and traceability required by enterprise environments.</p><p>Qase acts as the system of record for quality, capturing the full lifecycle of testing activity across projects and releases.</p><p>By combining AIDEN&#x2019;s AI-driven test creation with MCP&#x2019;s ability to connect external AI tools, Qase helps teams ensure that AI-generated testing becomes part of a structured, collaborative process.</p><h2 id="bringing-testing-into-the-ai-augmented-sdlc">Bringing testing into the AI-augmented SDLC</h2><p>The software development lifecycle is being reshaped by AI.</p><p>AI now participates in writing code, reviewing changes, generating documentation, and suggesting improvements. Testing is becoming part of that same AI-assisted workflow.</p><p>Instead of a linear process where testing happens at the end, teams are moving toward a continuous feedback loop: Code, testing, and analysis now happen simultaneously, feeding improvements directly back into the development process.</p><p>For this process to work effectively, testing data and workflows must be accessible to the AI tools teams already use</p><p>AIDEN accelerates test creation directly within Qase, while the Qase MCP Server connects external AI assistants to the same system.</p><p>Together, these tools make it possible for testing to operate as a continuous part of the development loop, not a separate phase that slows it down.</p><h2 id="takeaways">Takeaways&#xA0;</h2><p>AI is accelerating how software is built. Developers can generate code and tests faster than ever before using tools like Claude, Cursor, and other AI assistants. But speed alone isn&#x2019;t enough. Without structure, visibility, and coordination, AI-generated tests remain fragmented and difficult to manage at scale.</p><p>That&#x2019;s where Qase comes in. AIDEN enables teams to generate and expand test cases directly inside Qase, ensuring that test creation happens within a system designed for collaboration and execution. At the same time, the Qase MCP Server connects external AI tools to that same system, allowing teams to continue using their preferred AI workflows while ensuring the results are captured, organized, and actionable.</p><p>Together, AIDEN and MCP bridge the gap between AI-enabled speed and structured quality management. 
Teams don&#x2019;t have to choose between moving fast and maintaining control because they can do both by ensuring AI-generated tests are created, managed, and tracked within Qase.</p>]]></content:encoded></item><item><title><![CDATA[March 2026 Quality Engineering meetup in Berlin]]></title><description><![CDATA[<figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/03/IMG_0352.jpg" class="kg-image" alt loading="lazy" width="4032" height="3024"></figure><p>Our Berlin Quality Engineering community came together for another evening, this time at a new venue, <a href="https://w3hub.berlin/?ref=qase.io" rel="noreferrer">W3.Hub</a>, which turned out to be a fantastic spot for the event. Around 70 people showed up, and the energy was as good as ever: three very different talks, great pizza, and the</p>]]></description><link>https://qase.io/blog/march-2026-quality-engineering-meetup-berlin/</link><guid isPermaLink="false">69aabef76d96c20001665891</guid><category><![CDATA[Community]]></category><dc:creator><![CDATA[Vitaly Sharovatov]]></dc:creator><pubDate>Mon, 16 Mar 2026 04:57:15 GMT</pubDate><media:content url="https://d36r73waboa44k.cloudfront.net/2026/03/Qase-meetup-Berlin-june12th.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/03/IMG_0352.jpg" class="kg-image" alt="March 2026 Quality Engineering meetup in Berlin" loading="lazy" width="4032" height="3024"></figure><img src="https://d36r73waboa44k.cloudfront.net/2026/03/Qase-meetup-Berlin-june12th.png" alt="March 2026 Quality Engineering meetup in Berlin"><p>Our Berlin Quality Engineering community came together for another evening, this time at a new venue, <a href="https://w3hub.berlin/?ref=qase.io" rel="noreferrer">W3.Hub</a>, which turned out to be a fantastic spot for the event. Around 70 people showed up, and the energy was as good as ever: three very different talks, great pizza, and the kind of discussions that keep going well past the official end.</p><p>It is a true pleasure to have people coming up to me and saying &quot;thank you for an amazing set of talks&quot; or &quot;you must run a conference&quot;. This means a lot, and maybe partly it works so well because I intentionally try to maintain a diversity of topics: a human-centric angle, a scientific one, and a day-to-day QA one. All about quality, of course, but from very different perspectives I believe we must always have.</p><h2 id="food-qa-testing-tasting">Food QA: Testing Tasting</h2><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/x0Ju-guJqiE?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Food QA: Testing Tasting | Quality Engineering meetup"></iframe></figure><p>Mitesh Patel opened the evening with an awesome talk on how big food corporations test food.</p><p>Mitesh introduced the concept of &quot;wetware QA&quot;: our brains are essentially running quality checks every time we taste food. 
When you bite into something and your brain decides whether it is good, it is running a testing algorithm &#x2014; dissolution of chemicals in saliva, activation of taste receptors, conversion to nerve impulses, and recognition of flavours by the cortex.</p><p>So it is very easy for one person to &quot;test&quot; what they are eating and give a verdict: high or low quality. But everyone is different, and tasting preferences change even within one person depending on the time of day or their mood. So what would it take for a food producer to test the food they produce? Ask all their customers to taste it?</p><p>Human taste testing does not scale. The food industry tries to solve this with Sensory Evaluation Panels &#x2014; groups of trained tasters &#x2014; but these are expensive, slow, subject to biopsychosocial bias, and take months to set up. So maybe it would be possible to skip humans altogether? Teach a computer what food <em>tastes</em>?</p><p>Computers do not have taste receptors or saliva &#x2014; they simply cannot run the same algorithm our brains do. But they can <em>see</em> flavours. Through near-infrared spectroscopy, food chemicals reveal unique light absorption signatures &#x2014; what Mitesh called &quot;seeing the invisible colours of food.&quot;</p><p>In software development, some systems are also too complex to fully analyse from the inside &#x2014; large language models being the obvious example right now. So we do the same thing Mitesh described: find a proxy. Black-box testing, RAGAS evals, observability-based testing &#x2014; all ways of <em>seeing</em> what we cannot directly <em>taste</em>.</p><h2 id="earn-trust-keep-your-job">Earn Trust, Keep Your Job</h2><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/gZhOlljhqUY?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="EARN TRUST,  KEEP YOUR JOB |  Quality Engineering meetup"></iframe></figure><p>Asya Isakova gave the second talk about trust at work, grounded in organizational psychology research.</p><p>You got the job, now how do you keep it? Your colleagues and managers decide whether to trust you not by reading your CV, but by watching what you do in the first weeks and months. Trust is defined as &quot;a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behaviour of another&quot;. So how do you build it? How do you help people desire to be open or even vulnerable to you?</p><p>Asya identified three pillars: <strong>integrity</strong> (do the right thing consistently &#x2014; be honest, fair, hold high standards), <strong>ability</strong> (show you can do the job well &#x2014; skills plus good judgement), and <strong>benevolence</strong> (show you care about others&apos; success, not just your own).</p><p>To show ability: make &quot;done&quot; explicit and testable, deliver something small early, map workflows, keep a blocker log, and communicate progress predictably &#x2014; Done, Next, Blocked, Ask. To show integrity: make your work traceable, close loops, link your outputs to goals, respect team norms, escalate risks early, and own your mistakes. 
To show benevolence: ask stakeholders what success looks like from their side, show respect for people&apos;s time, share credit, offer small help early, avoid blame language (describe problems as system issues, not personal failures), and close the onboarding loop by sharing an &quot;onboarding gaps and fixes&quot; document at week four to six, so the next person has it easier.</p><p>This talk was just the tip of the iceberg. Trust has a much broader impact on how teams work &#x2014; and how much waste a lack of it creates. Asya is currently doing research on the &quot;trust tax&quot; topic at BeyondQuality: how missing trust translates into extra process, slower decisions, and unnecessary controls. You can follow and contribute to <a href="https://github.com/BeyondQuality/beyondquality/discussions/26?ref=qase.io">the discussion on GitHub</a>.</p><h2 id="qa-myths-busting-a-practical-guide-to-higher-quality">QA Myths Busting: A Practical Guide to Higher Quality</h2><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/m30y750UAhg?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Myths Busting: A Practical Guide to Higher Quality  | Quality Engineering meetup"></iframe></figure><p>I gave the third talk about four QA myths that are unfortunately still prevalent in the industry. Each one has a detailed companion article:</p><ul><li><a href="https://qase.io/blog/qa-myth-busting-quality-is-the-testers-responsibility/">Quality is testers&apos; responsibility</a></li><li><a href="https://qase.io/blog/qa-myth-busting-more-testing-means-better-quality/">More testing means better quality</a></li><li><a href="https://qase.io/blog/qa-myth-busting-qa-slows-work-down/">QA slows work down</a></li><li><a href="https://qase.io/blog/qa-myth-busting-quality-can-be-measured/">Quality can be measured</a></li></ul><p>As always, a big thank you to the speakers for sharing their work, to W3 Hub for hosting us, and to everyone who came and stayed for the conversations afterwards.</p><p>If you have been thinking about giving a talk &#x2014; whether it is your first one or your fiftieth &#x2014; come talk to me. I am always happy to help you get there. See you at the next meetup!!!</p>]]></content:encoded></item><item><title><![CDATA[Qase Report: Interactive Test Reports in a Single Command]]></title><description><![CDATA[Turn raw test results into interactive HTML reports with dashboards, analytics, failure clusters, and run history using the open-source Qase Report CLI.]]></description><link>https://qase.io/blog/qase-report-interactive-test-reports-in-a-single-command/</link><guid isPermaLink="false">69a799c66d96c20001665866</guid><category><![CDATA[Automated Testing]]></category><dc:creator><![CDATA[Dmitrii Gridnev]]></dc:creator><pubDate>Wed, 04 Mar 2026 02:43:34 GMT</pubDate><media:content url="https://d36r73waboa44k.cloudfront.net/2026/03/dmitrii_gridnev_qase_reports.png" medium="image"/><content:encoded><![CDATA[<img src="https://d36r73waboa44k.cloudfront.net/2026/03/dmitrii_gridnev_qase_reports.png" alt="Qase Report: Interactive Test Reports in a Single Command"><p>Test results are more than just &quot;green/red&quot; in CI logs. Behind every run lies a story: which tests break most often, where performance degrades, which errors keep recurring. 
The problem is that standard terminal output doesn&apos;t answer these questions.</p><p><strong>Qase Report</strong> is an open-source CLI tool that transforms test results into an interactive HTML report with a dashboard, analytics, screenshot gallery, and run history. The result is a single file you can open in a browser without a server and share with a colleague or attach to a ticket.</p><p>In this article, we&apos;ll walk through how to install and use Qase Report: from first launch to trend analysis across dozens of runs.</p><h2 id="installation">Installation</h2><p>Qase Report is distributed as an npm package and requires Node.js 18+.</p><pre><code class="language-bash">npm install -g qase-report
</code></pre><p>Verify the installation:</p><pre><code class="language-bash">qase-report --help
</code></pre><p>You&apos;ll see two main commands: <code>open</code> and <code>generate</code>. More on each below.</p><h2 id="preparing-data">Preparing Data</h2><p>Qase Report works with data in the <a href="https://github.com/qase-tms/qase-report-format?ref=qase.io">Qase Report Format</a>. If you use <a href="https://github.com/qase-tms?ref=qase.io">Qase reporters</a> for pytest, Playwright, Jest, Cypress, or other frameworks, the data is generated automatically.</p><p>The results directory structure looks like this:</p><pre><code>results/
&#x251C;&#x2500;&#x2500; run.json              # Run metadata and statistics
&#x251C;&#x2500;&#x2500; results/
&#x2502;   &#x251C;&#x2500;&#x2500; {uuid-1}.json     # Individual test result
&#x2502;   &#x251C;&#x2500;&#x2500; {uuid-2}.json
&#x2502;   &#x2514;&#x2500;&#x2500; ...
&#x251C;&#x2500;&#x2500; attachments/          # Screenshots, logs, files
&#x2502;   &#x2514;&#x2500;&#x2500; ...
&#x2514;&#x2500;&#x2500; qase-report-history.json  # Optional: run history
</code></pre><p>The <code>run.json</code> file contains general run information: title, environment, execution time, and status statistics. Each file in <code>results/</code> describes a single test case with its steps, attachments, and metadata.</p><h2 id="quick-start-opening-a-report">Quick Start: Opening a Report</h2><p>The simplest way to view a report is the <code>open</code> command. It starts a local server and opens the report in your browser:</p><pre><code class="language-bash">qase-report open ./results
</code></pre><p>The browser will open automatically at <code>http://localhost:3000</code>. You&apos;ll immediately see the test list for the current run.</p><p>If port 3000 is already in use, specify a different one:</p><pre><code class="language-bash">qase-report open ./results -p 8080
</code></pre><h2 id="generating-static-html">Generating Static HTML</h2><p>For sharing or archiving, it&apos;s more convenient to generate a standalone HTML file:</p><pre><code class="language-bash">qase-report generate ./results -o report.html
</code></pre><p>The resulting file contains everything: the application code, styles, and test data. You can open it with a double-click in any browser &#x2014; no server required. This is useful for:</p><ul><li>Attaching to tickets in Jira or GitHub Issues</li><li>Sending to colleagues via email or Slack</li><li>Archiving as CI/CD artifacts</li></ul><h2 id="interface-overview">Interface Overview</h2><p>The report consists of several sections, accessible via tabs at the top of the screen.</p><h3 id="test-cases-%E2%80%94-test-list">Test Cases &#x2014; Test List</h3><p>This is the default view that opens first. Tests are organized in a suite hierarchy that you can expand and collapse.</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/03/01-test-cases-1.png" class="kg-image" alt="Qase Report: Interactive Test Reports in a Single Command" loading="lazy" width="2880" height="1800"></figure><p>Each test displays its status, duration, and stability grade. Available statuses:</p>
<!--kg-card-begin: html-->
<table>
<thead>
<tr>
<th>Status</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>passed</td>
<td>Test completed successfully</td>
</tr>
<tr>
<td>failed</td>
<td>Test failed on assertion</td>
</tr>
<tr>
<td>broken</td>
<td>Test crashed due to a code or environment error</td>
</tr>
<tr>
<td>skipped</td>
<td>Test was skipped</td>
</tr>
<tr>
<td>blocked</td>
<td>Test was blocked by an external dependency</td>
</tr>
<tr>
<td>invalid</td>
<td>Invalid test configuration</td>
</tr>
<tr>
<td>muted</td>
<td>Test failures are ignored</td>
</tr>
</tbody>
</table>
<!--kg-card-end: html-->
<p>Above the list are status filters and a search field for finding tests by name.</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/03/02-filters-1.png" class="kg-image" alt="Qase Report: Interactive Test Reports in a Single Command" loading="lazy" width="2880" height="240"></figure><p>Click on a test to open its details in a side panel: execution steps, error stacktrace, attachments, and parameters.</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/03/03-test-details-1.png" class="kg-image" alt="Qase Report: Interactive Test Reports in a Single Command" loading="lazy" width="2880" height="1800"></figure><h3 id="analytics-%E2%80%94-dashboard">Analytics &#x2014; Dashboard</h3><p>The Analytics tab is an interactive bento-grid dashboard with key run metrics.</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/03/04-analytics-1.png" class="kg-image" alt="Qase Report: Interactive Test Reports in a Single Command" loading="lazy" width="2880" height="1800"></figure><p>Here you&apos;ll find:</p><ul><li><strong>Alerts</strong> &#x2014; notifications about flaky tests and regressions</li><li><strong>Attention Required</strong> &#x2014; tests that need attention: unstable, slow, new failures</li><li><strong>Quick Insights</strong> &#x2014; top failing tests and slowest tests</li><li><strong>Test Health</strong> &#x2014; stability grade for each test on an A+ to F scale</li><li><strong>Suite Health</strong> &#x2014; pass rate by suite</li></ul><p>When run history is connected (more on this in the next section), the dashboard is enriched with trends:</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/03/05-analytics-alerts-1.png" class="kg-image" alt="Qase Report: Interactive Test Reports in a Single Command" loading="lazy" width="2880" height="1800"></figure><ul><li><strong>Pass Rate Trends</strong> &#x2014; pass rate trend chart across runs</li><li><strong>Duration Trends</strong> &#x2014; execution time changes over time</li><li><strong>Recent Runs</strong> &#x2014; cards for recent runs with summary statistics</li></ul><h3 id="failure-clusters-%E2%80%94-error-clustering">Failure Clusters &#x2014; Error Clustering</h3><p>When multiple tests fail with the same error, the Failure Clusters section automatically groups them together.</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/03/06-failure-clusters-1.png" class="kg-image" alt="Qase Report: Interactive Test Reports in a Single Command" loading="lazy" width="2880" height="1800"></figure><p>This helps quickly understand the scope of a problem: if 15 tests failed with the same <code>ConnectionRefusedError</code>, the cause is likely infrastructure, not the tests themselves.</p><h3 id="attachments-%E2%80%94-attachment-gallery">Attachments &#x2014; Attachment Gallery</h3><p>All screenshots, logs, and files attached to tests are gathered in one place.</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/03/07-attachments-1.png" class="kg-image" alt="Qase Report: Interactive Test Reports in a Single Command" loading="lazy" width="2880" height="1800"></figure><p>Available features:</p><ul><li>Filtering by category (screenshots, logs, other files) and test status</li><li>Sorting by name, date, size</li><li>Two display modes: grid with adjustable tile sizes and list</li><li>Full-screen screenshot 
viewing with zoom and navigation</li><li>Search by file name</li></ul><h3 id="timeline-%E2%80%94-execution-visualization">Timeline &#x2014; Execution Visualization</h3><p>The Timeline section shows how tests were executed over time. Each thread (worker) is displayed as a separate swimlane, and tests appear as colored bars with duration.</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/03/09-timeline.png" class="kg-image" alt="Qase Report: Interactive Test Reports in a Single Command" loading="lazy" width="2880" height="1800"></figure><p>This is useful for:</p><ul><li>Parallelism analysis: is the load evenly distributed across workers?</li><li>Finding bottlenecks: is there one long test holding up the entire run?</li><li>Visually understanding test execution order</li></ul><p>Zoom can be adjusted from 1x to 5x for detailed examination of specific segments.</p><h3 id="comparison-%E2%80%94-run-comparison">Comparison &#x2014; Run Comparison</h3><p>When history is connected, the Comparison tab lets you compare two runs side by side.</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/03/10-comparison.png" class="kg-image" alt="Qase Report: Interactive Test Reports in a Single Command" loading="lazy" width="2880" height="1800"></figure><p>You&apos;ll see:</p><ul><li>Which tests changed status (e.g., passed &#x2192; failed)</li><li>Which tests were added or removed</li><li>How each test&apos;s duration changed</li><li>Summary statistics: pass rate difference, change counts by status</li></ul><h3 id="traces-%E2%80%94-playwright-traces">Traces &#x2014; Playwright Traces</h3><p>If your tests use Playwright and generate trace files, an additional Traces tab appears in the report. It allows interactive playback of browser test recordings: actions, screenshots at each step, and network requests.</p><h2 id="run-history-and-trend-analytics">Run History and Trend Analytics</h2><p>One of Qase Report&apos;s key capabilities is working with history. When a history file is provided, the tool accumulates run data and delivers powerful analytics.</p><h3 id="connecting-history">Connecting History</h3><p>When using the <code>open</code> command, history is saved automatically:</p><pre><code class="language-bash">qase-report open ./results -H ./qase-report-history.json
</code></pre><p>Each time you open a new report, run data is appended to the history file. For the <code>generate</code> command, history must be passed separately:</p><pre><code class="language-bash">qase-report generate ./results -H ./qase-report-history.json -o report.html
</code></pre><h3 id="what-history-provides">What History Provides</h3><p>After accumulating data across several runs, the following become available:</p><p><strong>Pass rate trends</strong> &#x2014; how the test pass rate changed from run to run. Useful for tracking overall project health.</p><p><strong>Flaky test detection</strong> &#x2014; the tool analyzes status alternations (pass &#x2192; fail &#x2192; pass) and identifies unstable tests. Requires a minimum of 5 runs.</p><p><strong>Stability Score</strong> &#x2014; each test receives a grade from A+ to F based on three factors:</p><ul><li>Pass rate (50% weight)</li><li>Flakiness (30% weight)</li><li>Duration consistency (20% weight)</li></ul><p>Requires a minimum of 10 runs to calculate.</p><p><strong>Performance regression detection</strong> &#x2014; if a test&apos;s duration exceeds the mean + 2 standard deviations, the system generates an alert.</p><h3 id="cicd-integration">CI/CD Integration</h3><p>In a CI/CD pipeline, the history file is typically stored as an artifact passed between runs:</p><pre><code class="language-yaml"># GitHub Actions example
- name: Download history
  uses: actions/download-artifact@v4
  with:
    name: qase-report-history
    path: ./results
  continue-on-error: true  # First run &#x2014; no history yet

- name: Run tests
  run: pytest --qase ./results
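  # a Qase reporter writes Qase Report Format files into ./results; exact flags vary by framework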

- name: Generate report
  run: qase-report generate ./results -H ./results/qase-report-history.json -o report.html

- name: Upload report
  uses: actions/upload-artifact@v4
  with:
    name: test-report
    path: report.html

- name: Save history
  uses: actions/upload-artifact@v4
  with:
    name: qase-report-history
    path: ./results/qase-report-history.json
</code></pre><h2 id="sending-results-to-qase-tms">Sending Results to Qase TMS</h2><p>If you use <a href="https://qase.io/?ref=qase.io">Qase TMS</a> for test management, results can be sent directly from the report &#x2014; no additional scripts or configuration needed.</p><p>When using the <code>qase-report open</code> command, a <strong>Send to Qase</strong> button appears in the report header. Click it to open a dialog with three fields:</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/03/13-send-to-qase.png" class="kg-image" alt="Qase Report: Interactive Test Reports in a Single Command" loading="lazy" width="2880" height="1800"></figure><ul><li><strong>Project Code</strong> &#x2014; your project code in Qase (e.g., <code>DEMO</code>)</li><li><strong>API Token</strong> &#x2014; API token from your Qase account settings</li><li><strong>Test Run Title</strong> &#x2014; run title (automatically populated from report data)</li></ul><p>After clicking <strong>Send Results</strong>, the tool:</p><ol><li>Creates a new test run in Qase TMS</li><li>Uploads attachments (screenshots, logs, files)</li><li>Sends all test results with steps, errors, and parameters</li><li>Completes the run</li></ol><p>Once finished, you&apos;ll receive a link to view the run in Qase TMS.</p><blockquote>This feature is only available in server mode (<code>qase-report open</code>). The send button is not shown in static HTML files.</blockquote><h2 id="additional-features">Additional Features</h2><h3 id="dark-and-light-themes">Dark and Light Themes</h3><p>The report uses a dark theme by default. Switch to light theme via the icon in the top-right corner.</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/03/12-light-theme-1.png" class="kg-image" alt="Qase Report: Interactive Test Reports in a Single Command" loading="lazy" width="2880" height="1800"></figure><h3 id="loading-a-new-report">Loading a New Report</h3><p>The &quot;Load Report&quot; button in the header lets you load a different results directory directly in the browser &#x2014; without restarting the server.</p><h3 id="downloading-the-report">Downloading the Report</h3><p>The &quot;Download&quot; button in the header lets you download the current report as a standalone HTML file.</p><h2 id="summary">Summary</h2><p>Qase Report solves three problems:</p><ol><li><strong>Visualization</strong> &#x2014; transforms raw JSON data into a clear, interactive report with filters, search, and navigation.</li><li><strong>Analytics</strong> &#x2014; identifies flaky tests, performance regressions, and error patterns that are impossible to see from terminal output.</li><li><strong>Easy sharing</strong> &#x2014; a single HTML file that opens in any browser without installation, configuration, or servers.</li></ol><p>Getting started is one command:</p><pre><code class="language-bash">npm install -g qase-report &amp;&amp; qase-report open ./results
</code></pre><p>Project repository: <a href="https://github.com/qase-tms/qase-report?ref=qase.io">github.com/qase-tms/qase-report</a></p>]]></content:encoded></item><item><title><![CDATA[Qase Product Updates: February 2026]]></title><description><![CDATA[Test Management Software Updates for February:  Enterprise Features, Shared Steps, Support of Child Steps, Private queries, Donut Chart]]></description><link>https://qase.io/blog/qase-product-updates-february-2026/</link><guid isPermaLink="false">69a6aa186d96c20001665840</guid><category><![CDATA[News & Updates]]></category><dc:creator><![CDATA[Glen Holmes]]></dc:creator><pubDate>Tue, 03 Mar 2026 16:15:51 GMT</pubDate><media:content url="https://d36r73waboa44k.cloudfront.net/2026/03/Product-update-February-2026.png" medium="image"/><content:encoded><![CDATA[<img src="https://d36r73waboa44k.cloudfront.net/2026/03/Product-update-February-2026.png" alt="Qase Product Updates: February 2026"><p>There is a breaking point in every engineering organization. It usually happens when you cross a certain threshold of test cases or team members. Suddenly, the processes that worked for a team of ten become the bottlenecks for a team of a hundred.</p><p>Scale changes everything. It turns flat lists into endless scrolling, simple queries into noisy data, and single-project reporting into a logistical nightmare.</p><p>In February, we focused on that threshold. We shipped updates designed specifically for organizations operating at scale and features that turn your test repository from a simple storage unit into a governed, architected library.</p><p>Here is how we are supporting your growth.</p><hr><h3 id="pillar-1-architecting-for-scale"><strong>Pillar 1: Architecting for Scale</strong></h3><p><strong>From &quot;Lists&quot; to &quot;Asset Management.&quot;</strong></p><p>In a startup, a shared step is a convenience. In an enterprise, it is a dependency. When you have thousands of shared steps across dozens of projects, treating them like a flat list is a liability. It creates duplication, makes maintenance impossible, and slows down onboarding.</p><p>We have fundamentally upgraded how you manage these assets with <a href="https://docs.qase.io/en/articles/5563709-shared-steps?ref=qase.io#h_ca606e4fbc" rel="noreferrer"><strong>Folder Structures for Shared Steps</strong></a>.</p><figure class="kg-card kg-image-card"><img src="https://downloads.intercomcdn.com/i/o/wsaz8vex/2107201367/989fcf3a911e9e59887e11ad3198/97388.png?expires=1772532900&amp;signature=d902bac4f506451181b74b52a7400aab29f3c1eaae32620e732c3d16fe6853db&amp;req=diEnEct%2BnIJZXvMW1HO4zf4QlcHhJ4vsnny3E5dqB9b%2FPMx4OrFke4M0Gjwd%0Av2jngj3egpkOj6UVZ84%3D%0A" class="kg-image" alt="Qase Product Updates: February 2026" loading="lazy"></figure><p>This isn&apos;t just about tidying up. This is about architectural hygiene. You can now structure your shared logic by domain&#x2014;Billing, Auth, Compliance&#x2014;creating a navigable hierarchy that acts as a single source of truth. 
We&#x2019;ve added bulk actions to move, clone, and rename entire directories, giving you the controls you need to refactor your test library as easily as you refactor your code.</p><p>We also introduced <a href="https://docs.qase.io/en/articles/5563709-shared-steps?ref=qase.io#h_25d3079a10" rel="noreferrer"><strong>Child Steps in Shared Steps</strong></a>.</p><figure class="kg-card kg-image-card"><img src="https://downloads.intercomcdn.com/i/o/wsaz8vex/2100299329/fb6226b8c434d84eda1038ea5aeb/4797.png?expires=1772532900&amp;signature=15ea77c6103998b10f068bb0f89c126ac1307eda9e0d8e2c528e99a7da719d23&amp;req=diEnFst3lIJdUPMW1HO4zfexKZCAHN6OnKg5CJN%2FhWOzrkvzNvMe9vkaaSS3%0AokbjIWgm7LZPO2HN8J4%3D%0A" class="kg-image" alt="Qase Product Updates: February 2026" loading="lazy"></figure><p>Complex systems require modular testing. You can now nest child steps directly inside shared steps, allowing you to build sophisticated, multi-layer &quot;meta-steps.&quot; A &quot;Checkout Flow&quot; shared step can now contain the specific, governed logic for payment, address validation, and inventory checks nested within it. It allows for high-level reuse without sacrificing low-level granularity.</p><hr><h3 id="pillar-2-workspace-governance"><strong>Pillar 2: Workspace Governance</strong></h3><p><strong>Signal over Noise with Private Queries.</strong></p><p>In a large organization, a shared workspace can quickly become a tragedy of the commons. If every engineer saves every exploratory query to the public list, the team&#x2019;s critical dashboards get buried under temporary data.</p><p>We are introducing <a href="https://docs.qase.io/en/articles/6417205-saved-queries?ref=qase.io#h_f7cfe24266" rel="noreferrer"><strong>Private Queries</strong></a> to solve this workspace pollution.</p><figure class="kg-card kg-image-card"><img src="https://downloads.intercomcdn.com/i/o/wsaz8vex/2023316573/96f7536b5e7930578948df2ab5a4/49748.png?expires=1772532900&amp;signature=6f2e0d7ca05acfb3281432b5a2986e492d74dc744e36d55bc742cb6175a2186c&amp;req=diAlFcp%2Fm4RYWvMW1HO4zXVYUhQ56%2Bvqi8xabIMqiHaaWbPbAgDnUEDOPw4w%0AG6kmgJ9VATMNrr50No0%3D%0A" class="kg-image" alt="Qase Product Updates: February 2026" loading="lazy"></figure><p>Now, engineers can save their specific, investigative queries without broadcasting them to the entire organization. It allows for personal productivity workflows that don&apos;t degrade the shared environment. If a query proves valuable for the wider team, it can be promoted to public status but we&#x2019;ve designed the flow to default to hygiene first.</p><p><em>Note: To ensure strict permission governance, private queries are currently restricted from shared dashboards.</em></p><hr><h3 id="pillar-3-data-driven-decision-making"><strong>Pillar 3: Data-Driven Decision Making</strong></h3><p><strong>Executive Visualization with Donut Charts.</strong></p><p>At the executive level, you don&apos;t need rows of data; you need immediate insight into health and distribution. We&#x2019;ve added <a href="https://docs.qase.io/en/articles/13342243-qql-widgets?ref=qase.io" rel="noreferrer"><strong>Donut Chart Mode</strong></a> to QQL to support high-level reporting.</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/03/Donut-chart---Light.png" class="kg-image" alt="Qase Product Updates: February 2026" loading="lazy" width="3825" height="1530"></figure><p>You can now toggle complex datasets from tabular views to visualizations instantly. 
Whether you are presenting a quarterly quality review or checking the distribution of automation coverage across business units, this view allows stakeholders to understand the &quot;shape&quot; of the data in seconds.</p><hr><h3 id="pillar-4-the-monorepo-reality"><strong>Pillar 4: The Monorepo Reality</strong></h3><p><strong>Native Support for Complex Architectures.</strong></p><p>The modern enterprise doesn&apos;t always fit neatly into &quot;one project, one repo.&quot; You are likely running monorepos, microservices, or complex distributed systems where a single CI pipeline touches multiple business domains.</p><p>Your tooling needs to reflect that reality.</p><ul><li><strong>Multi-Project Routing (JS &amp; Python):</strong> We have engineered our reporters to handle the complexity of enterprise codebases. Our JavaScript and Python reporters now support <strong>Multi-Project routing</strong>. If you have a monorepo that triggers tests across three different product lines, our reporters can now intelligently route those results to the correct Qase projects in a single run. No distinct pipelines required.</li><li><strong>Pest Framework Support:</strong> We are committed to supporting the frameworks your teams use. We&#x2019;ve released and open-sourced a native reporter for <strong>Pest</strong>, bringing first-class PHP testing into the Qase ecosystem.<a href="https://github.com/qase-tms/qase-pest?ref=qase.io"> <u>View the repository</u></a>.</li><li><strong>Unified Reporting Standard:</strong> We&#x2019;ve implemented a consolidated <strong>Qase Report</strong> schema to standardize how data flows from your CI/CD pipelines into our platform, ensuring reliability regardless of the environment scale.<a href="https://github.com/qase-tms/qase-report?ref=qase.io"> <u>View the repository</u></a>.</li></ul><hr><p>February was about ensuring that Qase grows <em>with</em> you. Whether you are managing a monorepo with hundreds of microservices or a shared step library with thousands of assets, our goal is to provide the governance and structure you need to keep moving fast.</p><p>Happy testing!</p>]]></content:encoded></item><item><title><![CDATA[How to Write Test Cases]]></title><description><![CDATA[Learn what a test case is in software testing, its structure, purpose, and how well-written test cases ensure reliable, repeatable validation.]]></description><link>https://qase.io/blog/how-to-write-test-cases/</link><guid isPermaLink="false">6994e8556d96c200016657f5</guid><category><![CDATA[Test Management]]></category><dc:creator><![CDATA[Torben Robertson]]></dc:creator><pubDate>Tue, 17 Feb 2026 23:25:56 GMT</pubDate><media:content url="https://d36r73waboa44k.cloudfront.net/2026/02/how-to-write-test-cases.png" medium="image"/><content:encoded><![CDATA[<h3 id="how-to-write-a-test-case-quick-checklist">How to Write a Test Case (Quick Checklist)</h3><img src="https://d36r73waboa44k.cloudfront.net/2026/02/how-to-write-test-cases.png" alt="How to Write Test Cases"><p>A well-written test case follows a clear, repeatable structure. 
Use this checklist to create consistent and traceable test cases.</p><ol><li><strong>Test Case ID</strong> &#x2013; Assign a unique identifier for traceability.</li><li><strong>Title / Description</strong> &#x2013; Clearly define the objective and functionality under test.</li><li><strong>Preconditions</strong> &#x2013; Specify required system state, configurations, or dependencies before execution.</li><li><strong>Test Steps</strong> &#x2013; List precise, step-by-step actions the tester must perform.</li><li><strong>Test Data</strong> &#x2013; Provide exact input values required for execution.</li><li><strong>Expected Result</strong> &#x2013; Define the measurable outcome that should occur after the steps are executed.</li><li><strong>Actual Result</strong> &#x2013; Record what actually happened during testing.</li><li><strong>Status</strong> &#x2013; Mark the test as Pass, Fail, or Blocked.</li><li><strong>Comments / Notes</strong> &#x2013; Document observations, defects, or clarifications.</li></ol><h2 id="what-is-a-test-case"><strong>What Is a Test Case?</strong></h2><p>A test case is a structured document used in software testing to verify that a specific functionality behaves as expected. It defines precise test steps, required test data, preconditions, and expected results so a tester can validate system behavior in a controlled and repeatable way. Each test case focuses on a single objective and is written to produce a measurable outcome during test execution.</p><p>A standard test case includes a unique identifier (Test Case ID), a clear test case description, defined preconditions, step-by-step test steps, input data, expected results, and a recorded actual result. The tester executes the steps, compares the actual result with the expected outcome, and assigns a status. This structure supports traceability, validation, and consistent documentation across test plans and test suites.</p><p>Test cases are used across multiple types of testing, including:</p><ul><li><a href="https://qase.io/blog/functional-testing"><u>functional testing</u></a></li><li><a href="https://qase.io/blog/unit-testing"><u>unit testing</u></a></li><li><a href="https://qase.io/blog/smoke-testing"><u>smoke testing</u></a></li><li><a href="https://qase.io/blog/regression-testing"><u>regression testing</u></a></li><li><a href="https://qase.io/blog/acceptance-testing"><u>acceptance testing</u></a></li><li><a href="https://qase.io/blog/end-to-end-testing"><u>end-to-end testing</u></a></li></ul><p>They can be written for manual testing or converted into automated test cases for execution within automation frameworks. In both cases, well-written test cases ensure repeatable validation of application behavior.</p><p>Detailed guidance on organizing and managing test cases within Qase&apos;s TMS is available in the Qase documentation for <a href="https://help.qase.io/en/articles/5563704-test-cases?ref=qase.io"><u>Test Cases</u></a> and the <a href="https://help.qase.io/en/articles/5563688-getting-started?ref=qase.io"><u>Getting Started guide</u></a>.</p><h2 id="why-test-cases-are-important-in-software-testing"><strong>Why Test Cases Are Important in Software Testing</strong></h2><p>Test cases play a central role in effective software testing because they define exactly <strong>how functionality should be validated</strong>. Without clearly written test cases, testing becomes inconsistent, difficult to reproduce, and harder to scale.</p><p>First, test cases establish structured <strong>test procedures</strong>. 
They define inputs, execution conditions, and expected results in a standardized format. This ensures that each tester follows the same steps and evaluates the same criteria, improving repeatability and reducing ambiguity.</p><p>Second, they improve <strong>test coverage</strong>. By mapping test cases to requirements, user stories, or acceptance criteria, teams can ensure that all critical functionality is validated. This is especially important in areas like <a href="https://qase.io/blog/functional-testing"><u>functional testing</u></a> and when comparing different approaches like<a href="https://qase.io/blog/black-box-vs-white-box-testing"> <u>black-box vs. white-box testing</u></a>.</p><p>Test cases are also foundational for <strong>regression testing</strong>. As applications evolve, regression test suites help confirm that new changes do not break existing functionality. Well-structured test cases make it easier to build and maintain scalable <a href="https://qase.io/blog/regression-suites"><u>regression suites</u></a>, particularly when automation is involved. In fact, understanding the benefits and trade-offs of automation is essential when building long-term testing strategies, as discussed in this guide to <a href="https://qase.io/blog/qa-automation-benefits-challenges-and-best-practices-explained"><u>QA automation benefits and best practices</u></a>.</p><p>Beyond validation, test cases generate valuable data. During test execution, testers compare expected results to actual results, log defects, and contribute insights that help improve product quality and optimize workflows. Over time, this documentation supports traceability, compliance, and informed release decisions.</p><p>Ultimately, effective test cases build confidence. They provide stakeholders with evidence that features work correctly under defined conditions and that the product meets agreed-upon acceptance standards. In modern software development&#x2014;whether Agile or traditional&#x2014;test cases remain one of the most reliable tools for maintaining quality, consistency, and transparency throughout the testing lifecycle.</p><h2 id="types-of-testing-covered-by-test-cases-functional-regression-unit-more"><strong>Types of Testing Covered by Test Cases (Functional, Regression, Unit &amp; More)</strong></h2><p>Test cases are used across all major types of testing in software testing. In <a href="https://qase.io/blog/functional-testing"><u>functional testing</u></a>, a test case validates specific functionality against defined requirements and expected results. In <a href="https://qase.io/blog/unit-testing"><u>unit testing</u></a>, it verifies the behavior of individual modules or components in isolation. 
Within <a href="https://qase.io/blog/regression-testing"><u>regression testing</u></a>, structured test cases form reusable regression suites that detect defects introduced by code changes and protect previously validated features.</p><p>Test cases support a wide range of additional testing types, including:</p><ul><li>Performance testing, including load testing, stress testing, capacity testing, and recovery testing</li><li>Compatibility testing across browsers, operating systems, and devices</li><li>Portability testing to validate behavior in different environments</li><li>Interoperability testing between integrated systems and APIs</li><li>Security testing to validate protection against vulnerabilities</li><li>Reliability testing, including chaos testing</li><li>Usability testing and accessibility testing</li><li>Localization and conversion testing</li><li>Installability and disaster recovery testing</li><li>Maintainability and procedure testing</li></ul><p>Each of these areas relies on clearly defined preconditions, test steps, test data, expected results, and recorded actual results during test execution.</p><p>A comprehensive overview of testing approaches and execution strategies is available in the guide to<a href="https://qase.io/blog/software-testing-types-approaches-levels-execution-strategies-and-techniques-the-complete-guide"> <u>software testing types, approaches, levels, and execution strategies</u></a>. Regardless of testing type, well-written test cases enforce repeatable validation, structured documentation, and traceability within a test management process</p><h2 id="writing-test-cases-for-manual-vs-automated-testing"><strong>Writing Test Cases for Manual vs. Automated Testing</strong></h2><p>Writing test cases for manual testing differs from writing automated test cases in structure and execution detail. Manual testing relies on human testers to execute step-by-step instructions, observe behavior, and compare actual results with expected results. Manual test cases emphasize clarity in test steps, detailed validation criteria, defined preconditions, and explicit test data so testers can accurately validate functionality.</p><p>The differences between manual and automated test cases appear most clearly in execution and maintenance:</p><ul><li><strong>Execution method:</strong> Manual test cases are executed by a tester following documented test steps. Automated test cases are executed by tools or scripts that run predefined instructions without human intervention.</li><li><strong>Consistency:</strong> Manual testing can introduce variability depending on the tester. Automated tests execute the same steps consistently across environments.</li><li><strong>Speed and scalability:</strong> Automated tests run significantly faster and support repeated execution across builds. Manual testing requires more time as test suites expand.</li><li><strong>Use cases:</strong> Automated testing is commonly applied in<a href="https://qase.io/blog/regression-testing"> <u>regression testing</u></a>, where maintaining scalable<a href="https://qase.io/blog/regression-suites"> <u>regression suites</u></a> is critical. Manual testing is used for exploratory testing, usability validation, and rapidly changing features.</li></ul><p>Automated testing supports continuous integration pipelines and repeated validation cycles. 
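</p><p>As a rough illustration of how that documented structure carries over into automation, here is a minimal, self-contained pytest-style sketch; the scenario and the page object are invented stand-ins, not a prescribed framework:</p><pre><code class="language-python"># Illustrative only: the page object below is a stub so the example is
# self-contained; in a real suite it would drive a browser or an API client.
class LoginPage:
    def __init__(self):
        self.current_url = "/login"
        self.header_text = ""

    def submit(self, email, password):
        # Stubbed behaviour standing in for the application under test.
        if password == "correct-password":
            self.current_url = "/dashboard"
            self.header_text = "Welcome back"

def test_valid_user_can_log_in():
    # Preconditions / test data: a registered user with known credentials.
    page = LoginPage()
    email, password = "user@example.com", "correct-password"

    # Test steps: submit the credentials.
    page.submit(email, password)

    # Expected result: the user lands on the dashboard and is greeted.
    assert page.current_url == "/dashboard"
    assert "Welcome" in page.header_text
</code></pre><p>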
Strategic considerations around scalability and long-term efficiency are examined in <a href="https://qase.io/blog/qa-automation-benefits-challenges-and-best-practices-explained"><u>QA automation benefits and best practices</u></a>. Both approaches require structured test case writing to maintain traceability, validation accuracy, and maintainability within the software testing lifecycle.</p><h2 id="test-case-templates-structure-test-steps-preconditions-and-expected-results"><strong>Test Case Templates: Structure, Test Steps, Preconditions, and Expected Results</strong></h2><p>A structured test case template standardizes how testers document functionality, validation logic, and expected outcomes in software testing. Each test case begins with a clear title that defines the functionality under test, followed by a unique identifier (Test Case ID) that ensures traceability within test management systems and test plans. The description clarifies scope and intent, reducing ambiguity during test execution.</p><p>Preconditions define system state, dependencies, environment setup, and required configurations before execution begins. Test steps are written in a precise, sequential format so the tester can reproduce the validation process without interpretation. Each step references specific test data, including input values, configuration parameters, or API requests. Expected results describe the measurable outcome that should occur after executing the defined steps. During execution, the tester records the actual result, assigns a status, and documents relevant comments. Postconditions specify the required system state after completion, especially when tests affect shared environments or persistent data.</p><p>This structure supports consistent validation across <a href="https://qase.io/blog/functional-testing"><u>functional testing</u></a>, <a href="https://qase.io/blog/smoke-testing"><u>smoke testing</u></a>, and <a href="https://qase.io/blog/regression-testing"><u>regression testing</u></a>. Test cases can be organized into reusable test suites and aligned with broader lifecycle practices described in the <a href="https://qase.io/blog/three-pillars-of-managing-testing-artifacts"><u>three pillars of managing testing artifacts</u></a> and <a href="https://qase.io/blog/a-guide-to-continuous-testing"><u>continuous testing</u></a>.</p><h2 id="how-to-design-effective-and-well-written-test-cases"><strong>How to Design Effective and Well-Written Test Cases</strong></h2><p>Effective test case design applies structured test design techniques to maximize coverage and reduce redundancy. Equivalence class testing groups input data into logical categories so a tester can validate representative values instead of testing every possible input. Boundary value testing targets minimum, maximum, and edge conditions where defects frequently occur. Decision table testing maps combinations of conditions to expected results, exposing logic errors in complex workflows. State transition testing verifies how a system behaves as it moves between defined states. Pairwise testing reduces combinations by validating parameter interactions in controlled pairs. Error guessing leverages prior defect patterns to expose hidden failure points.</p><p>Well-written test cases define explicit test steps, precise test data, and deterministic expected results. They isolate a single functionality, avoid overlapping scope, and remain reproducible across environments. 
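</p><p>As a sketch of how these techniques look in code, here is a generic parameterized test built around an invented validation rule, with one row per documented case covering boundary values and representative equivalence classes:</p><pre><code class="language-python"># Illustrative sketch: boundary-value and equivalence-class cases for an invented
# rule ("a username must be 3 to 20 characters"), one row per documented test case.
import pytest

def is_valid_username(name):
    return 3 &lt;= len(name) &lt;= 20

@pytest.mark.parametrize(
    "username, expected",
    [
        ("ab", False),           # just below the lower boundary
        ("abc", True),           # lower boundary
        ("a" * 20, True),        # upper boundary
        ("a" * 21, False),       # just above the upper boundary
        ("valid_user", True),    # representative valid class
        ("", False),             # representative invalid class
    ],
)
def test_username_length_validation(username, expected):
    assert is_valid_username(username) == expected
</code></pre><p>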
Test cases should be prioritized based on risk, frequency of use, and business impact, then grouped into structured test suites. In regression cycles, these suites become the backbone of scalable<a href="https://qase.io/blog/regression-suites"> <u>regression suites</u></a>. In lower-level validation, they align with <a href="https://qase.io/blog/unit-testing"><u>unit testing</u></a>. Broader testing strategies, including negative scenarios, are detailed in guides like<a href="https://qase.io/blog/how-to-use-negative-testing-to-create-more-resilient-software"> <u>how to use negative testing to create more resilient software</u></a> and the overview of <a href="https://qase.io/blog/software-testing-types-approaches-levels-execution-strategies-and-techniques-the-complete-guide"><u>software testing types and execution strategies</u></a>.</p><p>Design quality directly impacts maintainability. Clear structure, traceable identifiers, reusable steps, and accurate validation criteria reduce rework and debugging time. Effective test cases function as executable documentation within the software testing workflow.</p><h2 id="aligning-test-cases-with-test-plans-acceptance-criteria-and-workflow"><strong>Aligning Test Cases with Test Plans, Acceptance Criteria, and Workflow</strong></h2><p>Test cases must align directly with defined test plans, acceptance criteria, and the broader workflow of the software testing lifecycle. Each test case should map to a specific requirement, user story, or system component to ensure traceability and validation coverage. Identifying components and interfaces early ensures that interactions between modules, APIs, and integrated systems are validated through structured test steps and measurable expected results.</p><p>Stakeholder input shapes scope and validation logic. Product owners, developers, and QA leads define acceptance test criteria that determine whether functionality meets business and technical standards. Test cases must reflect these criteria explicitly, especially in contexts like<a href="https://qase.io/blog/functional-testing"> <u>functional testing</u></a>,<a href="https://qase.io/blog/acceptance-testing"> <u>acceptance testing</u></a>, and<a href="https://qase.io/blog/end-to-end-testing"> <u>end-to-end testing</u></a>. In integration-heavy environments, alignment with<a href="https://qase.io/blog/system-integration-testing-sit"> <u>system integration testing (SIT)</u></a> ensures that cross-system dependencies are validated.</p><p>Test plans define execution scope, test suites, dependencies, and timelines. Test cases should be prioritized according to risk, release impact, and regression exposure. In recurring release cycles, alignment with<a href="https://qase.io/blog/regression-testing"> <u>regression testing</u></a> and<a href="https://qase.io/blog/smoke-testing"> <u>smoke testing</u></a> ensures that core functionality remains stable after changes. Workflow synchronization requires that updates to requirements trigger corresponding updates to test cases, preserving validation accuracy across iterations.</p><p>Acceptance criteria serve as the final validation checkpoint. During execution, actual results must be evaluated against defined acceptance conditions for functionality, performance, and compliance. 
Structured alignment between test cases, test plans, and workflow enforces traceability, measurable validation, and controlled release readiness in accordance with standards like ISO/IEC/IEEE 29119-3.</p><h2 id="test-execution-and-test-environment-setup"><strong>Test Execution and Test Environment Setup</strong></h2><p>Test execution is the controlled process of running predefined test steps within a stable test environment and comparing actual results against expected results. Execution begins only after confirming environment readiness, including validated hardware, software configurations, dependencies, and test data integrity. Unstable environments invalidate results and compromise traceability.</p><p>Execution can be performed manually or through automated test cases. In both cases, testers follow documented procedures, capture actual outcomes, and record status within a test management system. Discrepancies between expected and actual results are logged as defects, triggering debugging, correction, and retesting cycles. When defects are resolved, the affected test cases are re-executed to confirm validation.</p><p>Inputs to test execution include approved test plans, defined test procedures, validated test items, test basis documentation, and environment readiness reports. Execution supports multiple testing layers, including<a href="https://qase.io/blog/functional-testing"> <u>functional testing</u></a>,<a href="https://qase.io/blog/performance-testing"> <u>performance testing</u></a>, and <a href="https://qase.io/blog/component-testing-a-complete-guide/" rel="noreferrer">component-level validation</a>. </p><p>The test environment must simulate production conditions with controlled variability. Configuration management, dependency tracking, and environment updates must be documented before and after execution cycles. Repeatable execution within stable environments enforces reliability, traceability, and compliance with standards like ISO/IEC/IEEE 29119-1, ISO/IEC/IEEE 29119-2, and ISO/IEC/IEEE 29119-3.</p><h2 id="test-case-management-tools-and-test-suites"><strong>Test Case Management Tools and Test Suites</strong></h2><p><a href="https://qase.io/?ref=qase.io" rel="noreferrer">Test case management tools</a> centralize how teams create test cases, organize test suites, and control test execution across releases. These platforms provide structured repositories where each test case includes a unique identifier, defined test steps, test data, expected results, actual results, and execution history. Centralized storage improves traceability, enforces documentation standards, and reduces duplication across projects.</p><p>Test suites group related test cases based on functionality, feature area, regression scope, or release milestone. Well-structured test suites allow testers to execute targeted validation cycles like smoke testing, regression testing, or performance validation without rebuilding scope from scratch. In recurring release cycles, regression suites become reusable assets that protect core functionality and reduce execution risk. Continuous validation strategies are outlined in<a href="https://qase.io/blog/a-guide-to-continuous-testing"> <u>A Guide to Continuous Testing</u></a>.</p><p>Modern test case management systems integrate with issue trackers, CI/CD pipelines, automation frameworks, and reporting tools. 
Integration capabilities allow automated test cases to push execution results directly into the platform, consolidating manual testing and automated test data in one environment. Automation scalability and integration strategy are further examined in<a href="https://qase.io/blog/qa-automation-benefits-challenges-and-best-practices-explained"> <u>QA automation benefits and best practices</u></a> and <a href="https://qase.io/blog/how-to-start-with-autotests"><u>How to Start with Autotests</u></a>.</p><p>Core platform features include structured test plans, configurable workflows, defect management, reporting dashboards, and <a href="https://docs.qase.io/aiden-qase-ai/aiden-role-based-access-control?ref=qase.io" rel="noreferrer">role-based access controls</a>. Reporting tools aggregate execution status, pass/fail rates, and defect trends to support release decisions. Comparative evaluations of leading platforms are detailed in the <a href="https://qase.io/blog/best-test-management-tools"><u>Best Test Management Tools</u></a>.</p><p>An integrated solution like Qase combines test case management, test suites, test plans, test runs, and defect tracking within a single system. Consolidated management enforces consistency, improves maintainability, and provides measurable visibility into software testing performance across the development lifecycle.</p><hr><h2 id="frequently-asked-questions-faq">Frequently Asked Questions (FAQ)</h2><h3 id="what-makes-a-good-test-case">What makes a good test case?</h3><p>A good test case is clear, concise, and reproducible. It focuses on a single objective, defines precise test steps and test data, and includes measurable expected results. It should be traceable to a requirement or acceptance criterion and easy to execute without interpretation.</p><hr><h3 id="what-is-the-difference-between-a-test-case-and-a-test-scenario">What is the difference between a test case and a test scenario?</h3><p>A test scenario defines <em>what</em> needs to be tested at a high level. A test case defines <em>how</em> it will be tested, including detailed steps, data, and expected outcomes. Scenarios outline coverage; test cases provide executable validation.</p><hr><h3 id="how-detailed-should-a-test-case-be">How detailed should a test case be?</h3><p>A test case should be detailed enough that any tester can execute it consistently without guessing. The level of detail depends on complexity, risk, and team experience. Critical or high-risk features require more explicit steps and validation criteria.</p><hr><h3 id="are-test-cases-still-relevant-in-agile-development">Are test cases still relevant in Agile development?</h3><p>Yes. Even in Agile workflows, structured test cases ensure traceability, regression coverage, and alignment with acceptance criteria. While exploratory testing is common in Agile, documented test cases remain essential for repeatable validation and scalable regression testing.</p><hr><h3 id="can-test-cases-be-reused-for-automation">Can test cases be reused for automation?</h3><p>Yes. Well-structured manual test cases can serve as the foundation for automated tests. Clear test steps, defined inputs, and deterministic expected results make automation easier and reduce script maintenance over time.</p><hr><h3 id="how-many-test-cases-should-i-write-for-a-feature">How many test cases should I write for a feature?</h3><p>The number depends on feature complexity, risk level, and acceptance criteria. 
Focus on covering core functionality, edge cases, negative scenarios, and regression impact rather than targeting a specific number. Prioritize based on business and technical risk.</p>]]></content:encoded></item><item><title><![CDATA[Qase Product Updates: January 2026]]></title><description><![CDATA[Test Management Software Updates for January: AIDEN Agentic Mode, Qase Query Language Bar Charts, Self-Correcting Tests, Shared Steps overhaul. ]]></description><link>https://qase.io/blog/qase-product-updates-january-2026/</link><guid isPermaLink="false">698218646d96c20001665658</guid><category><![CDATA[News & Updates]]></category><dc:creator><![CDATA[Glen Holmes]]></dc:creator><pubDate>Wed, 04 Feb 2026 22:25:00 GMT</pubDate><media:content url="https://d36r73waboa44k.cloudfront.net/2026/02/product-update-jan-26.png" medium="image"/><content:encoded><![CDATA[<img src="https://d36r73waboa44k.cloudfront.net/2026/02/product-update-jan-26.png" alt="Qase Product Updates: January 2026"><p>January is usually the month of resolutions. In the software world, that often translates to lofty roadmaps and promises of what&#x2019;s coming <em>eventually</em>.</p><p>But in engineering, we don&apos;t live in &quot;eventually.&quot; We live in the current sprint. We live in the reality of the release cycle where the only thing that matters is shipped code. So for our first update of 2026, we didn&apos;t want to just promise the future. We wanted to ship it.</p><p>This month, we focused on the way you interact with your software quality. We&#x2019;re moving from a world where you have to micromanage every click and selector to one where you simply state your intent.</p><p>Here is how we are setting the pace for the rest of the year.</p><hr><h2 id="pillar-1-intent-based-engineering"><strong>Pillar 1: Intent-Based Engineering</strong></h2><p><strong>Agentic Mode is Here.</strong></p><p>We teased this back in 2025, and today it&#x2019;s live.</p><p>For years, end-to-end testing has been a translation problem. You know what the feature is supposed to do (&quot;Allow a user to buy shoes&quot;), but you spend hours translating that intent into selectors, wait times, and boilerplate code. It&#x2019;s a context switch that pulls you out of the creative work of building.</p><p>With <strong>AIDEN Agentic Mode</strong>, we are removing the translation layer.</p><p>You simply tell AIDEN your goal, <em>&quot;Verify I can purchase shoes on our marketplace,&quot;</em> and it figures out the path. It builds the test for you.</p><p>AIDEN breaks your natural language instructions into actionable steps, provides visual feedback (screenshots) for every action, and generates the code in your preferred language. You can give simple commands, or you can get specific with compound instructions.</p><p>For example, instead of manually scripting input fields, you can just tell AIDEN: <em>&quot;Click on &apos;Get a Demo&apos; and fill the email address as &apos;</em><a href="mailto:apidemo@qase.io"><em><u>apidemo@qase.io</u></em></a><em>&apos;.&quot;</em> AIDEN understands the context, finds the email field, and executes the specific input exactly as requested.</p><p>But we didn&apos;t stop at the UI.</p><p><strong>API Testing and Random Data</strong></p><p>Modern applications aren&apos;t just front-ends. AIDEN can now execute <strong>GET and POST API requests</strong> directly within your flow using CURL commands. 
This allows you to mix UI interactions with backend validations in a single, fluid test.</p><p>And to ensure your pre-production environment doesn&apos;t get clogged with duplicate data, we&#x2019;ve added a <a href="https://help.qase.io/en/articles/11012497-aiden-qa-architect?ref=qase.io#h_d24525c8bc" rel="noreferrer"><strong>Random Data Generator</strong></a>. Stop hardcoding static users. AIDEN can now generate unique usernames, emails, and passwords on the fly. Whether you are converting a manual test or building from scratch, just mark a field as &quot;random,&quot; and we handle the entropy.</p><p>The real automation!</p><hr><h2 id="pillar-2-reliability-in-the-untestable"><strong>Pillar 2: Reliability in the &quot;Untestable&quot;</strong></h2><p><strong>Self-Healing and Canvas Support.</strong></p><p>There are two things that keep engineering teams up at night: flaky pipelines and black-box UIs. We tackled both this month.</p><p><strong>Self-Correcting Tests</strong></p><p>The most frustrating part of test generation is when AIDEN attempts to create a test but makes incorrect assumptions about the steps needed. AIDEN is now capable of self-correction during test generation. When a test fails to generate properly, AIDEN analyzes what went wrong, adjusts its approach, and re-attempts the generation with a corrected strategy. This means fewer failed test generations and more reliable test creation on the first try.</p><p>While full self-healing&#x2014;where tests automatically update when your codebase changes&#x2014;is on our roadmap, this self-correcting capability during generation is a significant step toward that goal. It keeps your test creation process smooth and reduces the manual intervention needed when working with dynamic applications.</p><p><strong>Conquering the Canvas</strong></p><p>Historically, tools like Miro, Figma, or any heavy &lt;canvas&gt; based applications were automation dead zones. Standard frameworks struggle to &quot;see&quot; inside those elements. AIDEN now supports <strong>Canvas-based UIs</strong>. It interprets visual elements within the canvas just like standard DOM elements, opening up a massive new frontier for test coverage for teams building complex, visual tools.</p><hr><h2 id="pillar-3-visualizing-your-velocity"><strong>Pillar 3: Visualizing Your Velocity</strong></h2><p><strong>Data is only useful if your brain can process it.</strong></p><p>We noticed that many of you were running complex QQL (Qase Query Language) queries, then staring at rows of tabular data trying to spot trends. That creates cognitive friction when you&apos;re trying to make quick decisions about release health.</p><p>We&#x2019;ve introduced <a href="https://help.qase.io/en/articles/13342243-qql-widgets?ref=qase.io" rel="noreferrer"><strong>QQL Bar Charts</strong></a>.</p><p>You can now instantly toggle your query results from a table view to a Bar Chart visualization. Want to see test case distribution by project? Or flaky tests grouped by status?</p><ul><li>SELECT (project, COUNT(*)) GROUP BY project</li><li>SELECT (status, COUNT(*)) isFlaky = true GROUP BY status</li></ul><p>Save these views to your dashboard. Spot the bottleneck in seconds.</p><hr><h2 id="pillar-4-repository-hygiene-at-scale"><strong>Pillar 4: Repository Hygiene at Scale</strong></h2><p><strong>Clean Code, Clean Tests.</strong></p><p>As your product grows, &quot;technical debt&quot; isn&apos;t just in your codebase. It lives in your test repository too. 
Thousands of duplicate steps scattered across projects slow everyone down.</p><p>We&#x2019;ve overhauled how you <a href="https://help.qase.io/en/articles/5563709-shared-steps?ref=qase.io#h_ee7d0d7d1e" rel="noreferrer">manage <strong>Shared Steps</strong></a> to make spring cleaning easier:</p><ul><li><strong>Bulk Conversion:</strong> Select multiple local steps and promote them to Global Shared Steps in one click. Note that this puts them at the workspace level, making them accessible to your whole organization.</li><li><strong>Bulk Delete with Safety:</strong> Need to deprecate old logic? You can now bulk delete shared steps with options to either cascade the delete (be careful!) or convert them back to local steps within the test cases so you don&apos;t break history.</li></ul><p><strong>Enhanced Test Framework Support</strong></p><p>Your test automation ecosystem is only as strong as the integrations that connect it. This month, we expanded our reporter coverage to meet you wherever you are in your testing stack:</p><p>We&apos;ve added NUnit and Junit4 reporters for Android tests, giving mobile teams the same seamless reporting experience as web teams. The Behave reporter now supports attachments for steps, so your BDD scenarios can include screenshots and logs exactly where they matter. CucumberJS users can now leverage parameters and suite annotations for better test organization. And for JavaScript testing frameworks&#x2014;Mocha, Vitest, and Jest&#x2014;we&apos;ve implemented support for expected results and input data at the step level, making test reports more informative and actionable.</p><p><strong>Jira Data Center 10.x Support</strong></p><p>Seamless integration isn&apos;t a luxury. It is a requirement. We are officially supporting <strong>Jira Data Center 10.x</strong>. Whether you are on the cutting edge of Jira versions or maintaining a stable enterprise environment, your data flows seamlessly between Qase and Jira.</p><hr><p><strong>Our New Feedback Tool</strong></p><p>We moved our feedback and roadmap to a new tool at <a href="https://roadmap.qase.io/?ref=qase.io" rel="noreferrer">roadmap.qase.io</a>. You can submit feature requests, vote on what you want to see next, and track what we&apos;re working on like you used to. We will continue to make sure that your requests are taken good care of.</p><p>If you used our old feedback platform, you can migrate your data. Sign in with your email to claim your posts.</p><hr><p>We removed the barriers in January. 
Whether that is the barrier between natural language and code, or the barrier of &quot;untestable&quot; canvas elements.</p><p>We&#x2019;re excited to see what you build when you don&apos;t have to sweat the small stuff.</p><p>Happy testing!</p>]]></content:encoded></item><item><title><![CDATA[Q4 2025: Qase Product Updates]]></title><description><![CDATA[What's new in Q4 2025: feature releases, improvements, and fixes to help teams test faster.]]></description><link>https://qase.io/blog/q4-2025-qase-product-updates/</link><guid isPermaLink="false">6964caee6d96c2000166561f</guid><category><![CDATA[News & Updates]]></category><dc:creator><![CDATA[Glen Holmes]]></dc:creator><pubDate>Mon, 12 Jan 2026 10:36:42 GMT</pubDate><media:content url="https://d36r73waboa44k.cloudfront.net/2026/01/Product-update-Q2-2025.png" medium="image"/><content:encoded><![CDATA[<img src="https://d36r73waboa44k.cloudfront.net/2026/01/Product-update-Q2-2025.png" alt="Q4 2025: Qase Product Updates"><p>There&apos;s a specific kind of exhaustion that comes from knowing exactly what needs to be done, but the tools won&apos;t let you do it efficiently. You&apos;ve got the vision&#x2014;automated test coverage across three projects, a clean migration path, tests that actually reflect production&#x2014;but between you and that vision sits a thousand small frictions.</p><p>Q4 was about eliminating them by paying attention to the places where your workflow stutters. The moments where you think &quot;there has to be a better way&quot; and then resign yourself to the manual approach because, well, that&apos;s just how it works.</p><p>Except now it doesn&apos;t have to.</p><h2 id="the-automation-backlog-problem-and-how-we-actually-are-solving-it"><strong>The Automation Backlog Problem (And How We Actually Are Solving It)</strong></h2><p><strong>Eliminating &quot;Automation Paralysis&quot;&#xA0;</strong></p><p>Let&apos;s talk about the elephant in the repository: your mountain of manual test cases that should be automated but aren&apos;t. Not because you don&apos;t value automation(you do), but because the conversion process is soul-crushing. One test at a time. Selectors. Wait times. Edge cases.</p><p>So the backlog grows. And every sprint planning meeting, someone mentions it. And everyone nods. And nothing happens.</p><p><a href="https://help.qase.io/en/articles/11012497-aiden-qa-architect?ref=qase.io#h_8a69157eb3" rel="noreferrer"><strong>AI-Powered Bulk Conversion</strong></a><strong> </strong>changes the entire dynamic. Select a batch of manual tests and click &apos;Automate.&apos; AIDEN runs a pre-flight check, flags the cases that need human attention (vague preconditions, overly complex flows), and converts while you grab coffee.</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/01/data-src-image-9a28b3ea-f7ff-4a39-8496-c4284c9eff97.png" class="kg-image" alt="Q4 2025: Qase Product Updates" loading="lazy" width="1600" height="805"></figure><p><strong>A UI Built for How You Work&#xA0;</strong></p><p>We rebuilt the entire AIDEN interface with a full-screen modal and unified flow whether you&apos;re converting or generating from scratch. Environment setup is baked into the process.</p><p>The JSON editor? Gone. 
AIDEN&apos;s stability reached the point where the safety net became unnecessary clutter.</p><p>You can now add steps on the fly, edit as you go, and export code from a dedicated tab in the right panel.</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/01/data-src-image-4b29ea81-bcd8-4871-bc99-4609f24af52d.png" class="kg-image" alt="Q4 2025: Qase Product Updates" loading="lazy" width="1600" height="803"></figure><p><strong>API Testing Because Your App Isn&apos;t Just a UI&#xA0;</strong></p><p>Here&apos;s something we heard a lot: &quot;What about my API layer?&quot; Fair question. <a href="https://help.qase.io/en/articles/11012497-aiden-qa-architect?ref=qase.io#h_adcc9f302d"><strong><u>AIDEN now generates autotests that validate APIs</u></strong></a>.&#xA0;</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/01/data-src-image-1d5b3a2f-8e36-4894-8b62-855f6f2bee21.png" class="kg-image" alt="Q4 2025: Qase Product Updates" loading="lazy" width="1600" height="836"></figure><p><strong>Pre-warmed Environments for Speed</strong>&#xA0;</p><p>We&#x2019;ve re-engineered the backend of AIDEN so browser environments are ready the second you hit &quot;Automate.&quot; No loading spinners.</p><p><strong>Real-World Automation&#xA0;</strong></p><p><strong>The 70% Truth</strong>: Most vendors will tell you their AI achieves 95% automation rates in a perfectly curated lab. We&#x2019;d rather tell you the truth. Based on real-world usage in messy, edge-case-riddled production environments, AIDEN is now automating over 70% of manual boilerplate.</p><p>That might not look as &quot;sexy&quot; on a marketing slide, but think about the math: if your team spends 40 hours a week on manual regression, you just got 24 hours back. Every week. To make sure you actually use that time, we&#x2019;ve doubled AIDEN credits across all plans and implemented hard expenditure caps, so you can scale without &quot;Cloud Bill Heart Attacks.&quot;</p><h3 id="qase-is-now-ai-native-with-mcp"><strong>Qase is Now &quot;AI-Native&quot; with MCP</strong></h3><p>We know that for many of you, AI is becoming the &quot;connective tissue&quot; of your entire engineering stack.</p><p>This quarter, we launched <a href="https://github.com/qase-tms/qase-mcp-server?ref=qase.io"><strong><u>the Qase MCP Server</u></strong></a>. The Model Context Protocol (MCP) is a new open standard that allows AI models to safely interact with your data. 
By plugging Qase into your local AI assistant (like Claude Desktop), you can now ask high-level questions about your quality state without leaving your workspace.</p><p>AIDEN remains your expert for building and running tests, but the MCP server makes Qase a Knowledge Base for your entire engineering workflow.</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/01/data-src-image-9ad2fda2-bad2-4695-904c-6093e856e910.png" class="kg-image" alt="Q4 2025: Qase Product Updates" loading="lazy" width="1600" height="796"></figure><h2 id="architecture-that-scales-with-you"><strong>Architecture That Scales With You</strong></h2><p><strong>Shared Steps Go Global: </strong>Quick question: How many times have you copied the same &quot;user login&quot; shared step across different projects?&#xA0;</p><p>If the answer is &quot;more than once,&quot; you&apos;ve been working too hard.&#xA0;</p><p><a href="https://help.qase.io/en/articles/5563709-shared-steps?ref=qase.io#h_e9184f903d"><strong><u>Global Shared Steps</u></strong></a> live at the workspace level now. Build your authentication flows, payment scenarios, and teardown sequences once. Use them everywhere. Update in one place, deploy to infinity.</p><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/01/data-src-image-fc930a0a-f5c4-4ce2-9b3f-36f2fde42119.png" class="kg-image" alt="Q4 2025: Qase Product Updates" loading="lazy" width="1600" height="725"></figure><ul><li><strong>Linked Shared Steps:</strong> Refactoring used to come with anxiety. Which tests use this? What will break if I change it? Now, when viewing a Shared Step, <a href="https://help.qase.io/en/articles/5563709-shared-steps?ref=qase.io#h_8415af9269"><strong><u>you instantly see all associated test cases</u></strong></a>. One click shows you the complete impact zone.</li></ul><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/01/data-src-image-bff0c3d0-68e6-4dc8-aae6-a58689dc38aa.png" class="kg-image" alt="Q4 2025: Qase Product Updates" loading="lazy" width="1600" height="466"></figure><h2 id="the-most-requested-features"><strong>The Most Requested Features</strong></h2><ul><li><a href="https://help.qase.io/en/articles/5563702-test-runs?ref=qase.io#h_18ef56a071"><strong><u>The Failed-Test Clone</u></strong></a><strong>: </strong>Our user feedback board had one feature request that kept climbing: &quot;Let me rerun just the failed and skipped tests.&quot;<strong> </strong>Close a test run with failed/skipped tests only, clone it with just those cases, and retest without wading through hundreds of passing tests.</li><li><strong>Clone Wars Sequel: </strong>Ever noticed how you avoid starting new test cycles because recreating the structure feels harder than it actually is?&#xA0;<a href="https://help.qase.io/en/articles/5563703-test-plans?ref=qase.io#h_03d7d74503"><strong><u>Test Plan cloning</u></strong></a> removes this worry entirely. Last sprint&apos;s plan becomes this sprint&apos;s foundation in three clicks. Keep the assignees or don&apos;t. Rename it inline. Done.</li></ul><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/01/data-src-image-97a13832-f713-42ef-afc9-34a77cbba5e8.png" class="kg-image" alt="Q4 2025: Qase Product Updates" loading="lazy" width="1600" height="860"></figure><p></p><p>The activation energy to start drops to near-zero, so you start. 
Momentum builds from there.</p><ul><li><strong>Webhooks: </strong>You can now push <strong>test results via </strong><a href="https://help.qase.io/en/articles/5563718-webhooks?ref=qase.io"><strong><u>Webhooks</u></strong></a> to third-party tools. This unblocked several enterprise customers who needed test data consolidated into platforms like Aha!.</li></ul><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/01/data-src-image-ba60e339-82e4-4ce0-a6bd-d01899112ec7.png" class="kg-image" alt="Q4 2025: Qase Product Updates" loading="lazy" width="1600" height="963"></figure><p>It&apos;s the kind of integration that sounds mundane until you realize it&apos;s the glue holding your entire quality ops together.</p><ul><li><strong>Data Retention: The Auditor Insurance Policy: </strong>Our default data retention works for most teams. But regulated industries, mature enterprises, and anyone who&apos;s ever heard the phrase &quot;we need historical data for the compliance review&quot;&#x2014;they need more.<strong> </strong><a href="https://help.qase.io/en/articles/12632372-tms-pricing-data-retention-add-on?ref=qase.io"><strong><u>The Data Retention Add-on</u></strong></a> extends test result history to 5 or 10 years. Transparent pricing, prorated per user.&#xA0;</li></ul><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2026/01/data-src-image-e23f69fd-ad83-40e8-b166-734d9a8be665.png" class="kg-image" alt="Q4 2025: Qase Product Updates" loading="lazy" width="1600" height="816"></figure><h2 id="the-invisible-infrastructure-improvements"><strong>The Invisible Infrastructure Improvements</strong></h2><p>Some updates don&apos;t make headlines but change everything about how the platform feels:</p><ul><li><strong>Async downloads</strong> means exporting 10,000 test cases doesn&apos;t turn into a staring contest with a progress bar.</li><li><strong>Full suite paths in QQL</strong> solved the &quot;which Regression Tests suite are we talking about?&quot; problem that plagued every large repository.</li><li><strong>Include children suites</strong> means querying a parent now returns everything nested beneath it. No more manual enumeration of every child suite.</li><li>We also <strong>patched 20+ security vulnerabilities</strong> in our JS reporters, added <strong>Vitest support</strong>, updated <strong>Mocha to version 11</strong>, improved <strong>Cypress/Cucumber integration</strong>, and shipped quality-of-life fixes for <strong>Robot Framework, Pytest, and Playwright</strong>.</li></ul><p>None of these will change your life individually. Collectively? They&apos;re the difference between a tool that fights you and a tool that flows.</p><h2 id="what-this-actually-means"><strong>What This Actually Means</strong></h2><p>Software releases usually follow a pattern: big promises, modest delivery, PR spin to bridge the gap.</p><p>We&apos;re not interested in that game.</p><p>Everything in this post is live. In production. Available now. The bulk conversion that eliminates your backlog anxiety? Shipped. The AIDEN UI that gives you room to work? Deployed. The global shared steps that scale across your workspace? Running right now.</p><p>QA is evolving from &quot;the people who find bugs&quot; to &quot;the people who design quality systems.&quot; Q4 was about giving you the steering wheel for those systems.</p><p>We&apos;re not asking you to imagine what testing could be like. 
We&apos;re showing you what it is.</p><p>Happy testing.</p><hr><p>Don&apos;t miss out on future updates and valuable content! Connect with us on<strong>&#xA0;</strong><a href="https://www.linkedin.com/company/qaseio/?ref=qase.io"><strong><u>LinkedIn</u></strong></a><strong>&#xA0;</strong>to stay informed about all things Qase.</p>]]></content:encoded></item><item><title><![CDATA[427,743,515 reasons to celebrate]]></title><description><![CDATA[Qase year-in-review highlighting key moments, product growth, and ecosystem milestones from the past year. 427,743,515 reasons to celebrate.]]></description><link>https://qase.io/blog/427-743-515-reasons-to-celebrate/</link><guid isPermaLink="false">694aa9016ff3ae0001b370b8</guid><category><![CDATA[Community]]></category><dc:creator><![CDATA[Berna Agar]]></dc:creator><pubDate>Wed, 31 Dec 2025 11:17:21 GMT</pubDate><media:content url="https://d36r73waboa44k.cloudfront.net/2025/12/Qase_New_Year_2026_post_v2.png" medium="image"/><content:encoded><![CDATA[<img src="https://d36r73waboa44k.cloudfront.net/2025/12/Qase_New_Year_2026_post_v2.png" alt="427,743,515 reasons to celebrate"><p>That&apos;s not a typo. That&apos;s the number of test results you sent to Qase in 2025.</p><p>If you shipped code this year, you know the grind. But as we look back at the Qase ecosystem, the sheer scale of what you all built (and protected) is staggering.</p><p>This wasn&apos;t just a big year; it was a half-billion-check-marks kind of year:</p><p><strong>&#x1F680; 427.7M Test Results Sent:</strong> Nearly half a billion potential fires extinguished before they ever hit production.</p><p>&#x1F4C8;<strong> ~15M Results Per Week:</strong> The current heartbeat of our ecosystem. Every seven days, 15 million moments of truth flow through Qase.</p><p><strong>&#x1F916; 415.6M Auto Tests</strong>: Automation is the engine. Our cloud handled 415M+ checks, keeping your CI/CD pipelines screaming fast while you solved the hard problems.</p><p><strong>&#x1F9E0; 12.1M Manual Results</strong>: For the edge cases, the UX nuances, and the &quot;does this actually feel right?&quot; moments. More than 12 million human-led tests prove that great software still needs great people.</p><p><strong>&#x1F3C3; 6.6M Test Runs</strong>: Over 6 million distinct &quot;go-to-market&quot; moments.</p><p><strong>The Power of the Ecosystem: </strong>Software doesn&#x2019;t live in a vacuum. This year, you connected Qase to 5,354 different apps.</p><p><strong>But here&apos;s what the numbers don&apos;t capture: </strong>The late-night debugging sessions. The &quot;it works on my machine&quot; moments. The relief when tests finally turn green. The quiet satisfaction of catching a bug before your users did.</p><p><strong>To our ecosystem:</strong> You&#x2019;re the reason we push updates at midnight. The reason we obsess over milliseconds. The reason we built AIDEN.</p><p>Behind every one of these results is a team that cares about their users. You didn&apos;t just ship features; you shipped quality. You didn&apos;t just meet deadlines; you met standards.</p><p><strong>Happy New Year to the builders, the breakers, and the bug-hunters. &#x1F942;</strong></p><p>Thank you for trusting Qase to be the home for your quality data. Here&apos;s to shipping better software, together. 
&#x1F680;</p>]]></content:encoded></item><item><title><![CDATA[Capturing Database Queries in Tests with Qase]]></title><description><![CDATA[Capturing Database Queries in Tests with Qase: capture and report database queries in tests using Qase for better debugging and visibility.]]></description><link>https://qase.io/blog/capturing-database-queries-in-tests-with-qase/</link><guid isPermaLink="false">6942c71d6ff3ae0001b370a7</guid><category><![CDATA[Automated Testing]]></category><dc:creator><![CDATA[Dmitrii Gridnev]]></dc:creator><pubDate>Wed, 17 Dec 2025 15:08:13 GMT</pubDate><media:content url="https://d36r73waboa44k.cloudfront.net/2025/12/Blog-image---Capturing-Database-Queries-in-Tests-with-Qase.png" medium="image"/><content:encoded><![CDATA[<h3 id="introduction">Introduction</h3><img src="https://d36r73waboa44k.cloudfront.net/2025/12/Blog-image---Capturing-Database-Queries-in-Tests-with-Qase.png" alt="Capturing Database Queries in Tests with Qase"><p>As covered in the articles on <a href="https://qase.io/blog/intercepting-http-requests-in-tests-with-qase/">Network Profiler</a> and <a href="https://qase.io/blog/intercepting-sleep-calls-in-tests/">Sleep Profiler</a>, Qase provides powerful tools for monitoring and analyzing test scenarios. This article explains Database Profiler&#x2014;a tool that automatically tracks and logs database operations.</p><h2 id="the-problem-with-databases-in-tests">The Problem with Databases in Tests</h2><p>In integration and E2E tests, the database is critical. Without profiling it&#x2019;s hard to see:</p><ul><li><strong>Which queries ran</strong> and how long they took</li><li><strong>Why tests are slow</strong>: which specific query is the bottleneck</li><li><strong>Where to debug</strong>: the exact query text and error message</li><li><strong>How much data changed</strong>: number of affected rows</li></ul><h2 id="what-is-database-profiler">What is Database Profiler</h2><p>Database Profiler intercepts SQL queries and sends each call to Qase as a separate step. You see the query text, execution time, affected rows, and connection info.</p><h2 id="how-database-profiler-works">How Database Profiler Works</h2><p>It relies on monkey patching and proxy classes: the profiler wraps query execution methods, measures their time, and sends the data to Qase TestOps as test steps.</p><h2 id="usage-example-postgresql-pytest">Usage Example (PostgreSQL + pytest)</h2><pre><code class="language-python">import psycopg2
from qase.pytest import qase
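# Run with the Database Profiler enabled so the SQL calls below are captured,
# e.g. pytest --qase-profilers=db (see Configuration below).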


@qase.title(&quot;User Database Operations Test&quot;)
def test_user_operations():
    conn = psycopg2.connect(
        host=&quot;localhost&quot;,
        port=5432,
        database=&quot;testdb&quot;,
        user=&quot;testuser&quot;,
        password=&quot;testpass&quot;,
    )
    cursor = conn.cursor()
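    # Each execute() below is intercepted by the Database Profiler and sent
    # to Qase as its own step, with query text, duration, and affected rows.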

    cursor.execute(&quot;SELECT * FROM users WHERE id = %s&quot;, (1,))
    user = cursor.fetchone()

    cursor.execute(
        &quot;UPDATE users SET email = %s WHERE id = %s&quot;,
        (&quot;newemail@example.com&quot;, 1),
    )
    conn.commit()

    cursor.execute(&quot;SELECT COUNT(*) FROM users&quot;)
    count = cursor.fetchone()[0]

    assert count &gt; 0
    conn.close()
</code></pre><figure class="kg-card kg-image-card"><img src="https://d36r73waboa44k.cloudfront.net/2025/12/article-1.png" class="kg-image" alt="Capturing Database Queries in Tests with Qase" loading="lazy" width="641" height="496"></figure><h2 id="configuration">Configuration</h2><h3 id="via-command-line">Via command line</h3><pre><code class="language-bash"># Enable only Database Profiler
pytest --qase-profilers=db

# Combine with other profilers
pytest --qase-profilers=db,network,sleep
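
# Profilers can also be enabled permanently via qase.config.json (next section)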
</code></pre><h3 id="via-qaseconfigjson">Via qase.config.json</h3><pre><code class="language-json">{
    &quot;mode&quot;: &quot;testops&quot;,
    &quot;profilers&quot;: [&quot;db&quot;],
    &quot;testops&quot;: {
        &quot;api&quot;: {
            &quot;host&quot;: &quot;qase.io&quot;
        }
    }
}
</code></pre><h2 id="conclusion">Conclusion</h2><p>Database Profiler gives full visibility into SQL queries in tests: query text, timing, affected rows, and connection context. It accelerates bottleneck analysis, simplifies debugging, and helps keep tests stable and fast.</p>]]></content:encoded></item><item><title><![CDATA[How SUSE Matured QA with Qase]]></title><description><![CDATA[SUSE matured their QA and test management system with Qase.]]></description><link>https://qase.io/blog/how-suse-matured-qa-with-qase/</link><guid isPermaLink="false">69401e3a6ff3ae0001b37079</guid><category><![CDATA[Case Study]]></category><dc:creator><![CDATA[Torben Robertson]]></dc:creator><pubDate>Mon, 15 Dec 2025 17:34:15 GMT</pubDate><media:content url="https://d36r73waboa44k.cloudfront.net/2025/12/SUSE_case_study_img.png" medium="image"/><content:encoded><![CDATA[<img src="https://d36r73waboa44k.cloudfront.net/2025/12/SUSE_case_study_img.png" alt="How SUSE Matured QA with Qase"><p>Inside Brandon DePesa&#x2019;s journey from startup spreadsheets to unified, data-driven quality at SUSE&#xA0;</p><p>SUSE is one of the most recognizable names in computing, from open source to enterprise. In its cloud-native business, Rancher Prime is the centerpiece: a platform for Kubernetes cluster provisioning and management.</p><p>For many customers, their first experience with SUSE begins not with a purchase order, but with open-source experimentation.&#xA0;</p><p>&#x201C;We&#x2019;ve found that a lot of our customers start as open-source users,&#x201D; explains Brandon, Director of Quality Engineering for SUSE&#x2019;s cloud-native group. &#x201C;They might be playing with Rancher at home or using it for a POC. We need something that works and works well, so they want to keep using it and recommending it.&#x201D;</p><p>For Mike Latimer, Senior Director of Engineering overseeing QA, security, project management, and infrastructure, it boils down to trust:</p><p>&#x201C;We as a business rely on trust between us and our customers. At the core of that trust is a reliable, functional product.&#x201D;</p><p>That trust has to hold across open-source users and paying enterprise customers with complex hybrid environments. Quality is the foundation; QA is not &#x201C;just another function,&#x201D; it&#x2019;s existential.</p><h2 id="from-three-qa-engineers-to-eighty-when-startup-systems-needed-to-scale">From three QA engineers to eighty: when startup systems needed to scale</h2><p>SUSE&#x2019;s cloud-native business went from scrappy, scattered QA artifacts to a single, data-driven quality hub that leadership and stakeholders across the company can see and trust.</p><p>When Rancher was a startup ten years ago, the QA team was tiny.</p><blockquote>&#x201C;When I started, QA was three people including myself,&#x201D; Brandon recalls.</blockquote><p>At that scale, using lightweight tools was fine. Test cases were tracked in spreadsheets, plans and notes were written in Confluence and internal wikis, and some teams were using GitHub issues or source control as makeshift test repositories.</p><p>&#x201C;So even five years ago,&#x201D; says Brandon, &#x201C;with ten to fifteen people split across a few teams, it wasn&#x2019;t ideal, but it wasn&#x2019;t a huge problem.&#x201D;</p><p>Then Rancher matured into an enterprise product, was acquired by SUSE, and everything changed. 
Today, Brandon&#x2019;s QA organization is about eighty people, plus &#x201C;tons of developers, project managers, and everyone else who is interested in the QA world.&#x201D; That scale exposed cracks in the old way of working.</p><p>Different teams had different tools, different naming conventions, and different places where test plans and cases lived. If you weren&#x2019;t on a given team, you might have no idea where their critical test artifacts were stored.</p><p>&#x201C;It became very difficult to even understand where test artifacts were located,&#x201D; Brandon says. &#x201C;Test plans, test cases, automated tests, documentation&#x2014;you could only reliably find them if you were on that team.&#x201D;</p><p>For a business unit selling a suite of cloud-native products that have to work together, that was risky. You can&#x2019;t easily validate cross-product scenarios if you can&#x2019;t even see what&#x2019;s being tested where.</p><h2 id="reporting-that-couldn%E2%80%99t-keep-up-with-leadership-questions">Reporting that couldn&#x2019;t keep up with leadership questions</h2><p>The fragmentation problem got even worse when leadership wanted to see the big picture.</p><p>Questions like: &#x201C;What&#x2019;s our overall test coverage?&#x201D;, &#x201C;How many test cases do we have across Rancher Prime?&#x201D;, &#x201C;What percentage of those are automated?&#x201D; were nearly impossible to answer at an aggregate level.</p><p>&#x201C;We had no requirements traceability, no requirement coverage,&#x201D; Brandon says. &#x201C;At best, we could manually count test cases for each area. Things like automation velocity or manual vs. automated ratios across products, were just impossible to provide.&#x201D;</p><p>To produce an answer, teams had to collect test assets from multiple spreadsheets, wikis, and repos, normalize them into a single format, and manually crunch numbers in yet another spreadsheet.</p><p>It was known to be so painful that leadership wouldn&#x2019;t even ask for deeper QA metrics. Everyone knew the data either didn&#x2019;t exist or wasn&#x2019;t worth the effort to assemble.</p><h2 id="the-search-involving-the-people-who%E2%80%99d-use-the-tool">The search: involving the people who&#x2019;d use the tool</h2><p>Brandon didn&#x2019;t want to pick a platform from a slide deck. He wanted real engineers using real tools.</p><blockquote>&#x201C;You need a level of trust with your team,&#x201D; he says. &#x201C;If you just say, &#x2018;Here&#x2019;s the new tool, use it,&#x2019; even the best tool in the world will get resistance.&#x201D;</blockquote><p>So he assembled a group of engineers from multiple QA teams and ran a structured evaluation. They shortlisted four to five tools; a mix of open source and commercial products, including some of the biggest names in the space.</p><p>Each team got trial licenses. Engineers imported real test cases, ran test cycles, created defects, and tried to hook up automation. They documented what worked, what didn&#x2019;t, and what felt painful day-to-day. That feedback loop drove the final recommendation.</p><p>&#x201C;I&#x2019;ve been in software and quality for almost twenty-five years,&#x201D; Brandon says. 
&#x201C;I still learned new things from the team through this process; especially what was painful for them, and what they actually wanted from a tool.&#x201D;</p><h2 id="why-qase-the-combination-that-finally-checked-all-the-boxes">Why Qase: the combination that finally checked all the boxes</h2><p>In the end, Qase stood out not for one feature, but for a combination that matched SUSE&#x2019;s reality.</p><p><strong>UI that felt natural to engineers</strong></p><blockquote>&#x201C;The UI was incredibly easy to use,&#x201D; Brandon recalls. &#x201C;Even without formal training or demos, very few people had issues. It&#x2019;s just intuitive and it works.&#x201D;</blockquote><p>For an eighty-person QA org, this mattered. Every hour spent teaching people how to click around a tool is an hour not spent improving tests.</p><p><strong>Flexible imports that made migration simple</strong></p><p>One team had 2,500 existing test cases. They assumed migrating that volume would take at least several days. With Qase, they imported everything in a matter of hours.</p><p>&#x201C;A lot of tools are very prescriptive,&#x201D; Brandon says. &#x201C;You have to fit their exact format or enter everything manually. Qase&#x2019;s flexibility around how you import made it super simple to get going.&#x201D;</p><p>That flexibility meant teams could move all their manual test assets into Qase first without derailing sprints.</p><p><strong>Automation-friendly from day one</strong></p><p>SUSE&#x2019;s QA teams run a wide range of automated tests, from UI automation to backend and system tests, with different frameworks and reporting formats across teams.</p><p>Some teams plugged into Qase&#x2019;s built-in integrations (like JUnit). Others had custom test result outputs, which they ran through a lightweight script.</p><p>Qase also helped close the loop between manual and automated worlds. If a result came in for a test that didn&#x2019;t exist yet, Qase auto-created the test case. Teams could later link these auto-created cases back to existing manual tests if needed.</p><p>Once a few teams had working scripts and patterns, they shared them internally, and rollout to the rest of the org became very fast.</p><p><strong>Read-only licenses that enabled real visibility</strong></p><p>Cost was a factor for a large enterprise group. The QA team uses around 75 full Qase licenses. But there are as many as 200 other stakeholders&#x2014;product managers, architects, security engineers, support, and leadership&#x2014;who need to view test data without authoring or executing tests.</p><p>&#x201C;If we had to pay [full price] for another 200 licenses just so people could view test data, that would have been a no-go from the beginning,&#x201D; Brandon says.</p><p>Qase&#x2019;s read-only licenses alleviated that burden. Now, anyone in the organization who needs to see QA data can get visibility for little additional budget.</p><p>&#x201C;That visibility for people not working directly with tests was a huge point in Qase&#x2019;s favor,&#x201D; Brandon notes.</p><h2 id="the-impact-from-invisible-qa-to-a-visible-strategic-advantage">The impact: from invisible QA to a visible, strategic advantage</h2><p>Before Qase, leadership&#x2019;s view of QA was limited and sporadic. 
Now, they have aggregate dashboards showing QA activity and coverage across all cloud-native products, per-product dashboards for more detailed views, and real-time updates as teams add tests and execute runs.</p><p>Early on, there was a strong focus on the number of test cases added over time, and how many of those were automated. Those metrics were simple but powerful, showing continuous improvement and a growing investment in automation.</p><p>Over time, Brandon realized an even more important metric: test execution volume.</p><p>&#x201C;We got to a point where the numbers for test cases and automation added were fairly consistent,&#x201D; he says. &#x201C;What that didn&#x2019;t show was the overall level of testing actually happening.&#x201D;</p><p>Now, SUSE&#x2019;s dashboards clearly show how many tests are run, against how many versions, and how often those runs happen.</p><p>A simple example: suppose a team has ten test cases, five of them automated. On paper, that&#x2019;s unremarkable. But when Qase reveals those tests are being run across three to five different versions, multiple times a week, you start seeing hundreds of executions for those same tests. That tells a much richer story about the team&#x2019;s real-world effort.</p><blockquote>&#x201C;QA is difficult to measure,&#x201D; Mike says. &#x201C;If the product works, it just works. But if you can show you&#x2019;re improving test coverage and doing a better job today than yesterday, that shows your focus is on quality and you&#x2019;re not getting left behind.&#x201D;</blockquote><h2 id="qa-onboarding-time-cut-from-weeks-to-hours">QA onboarding time cut from weeks to hours</h2><p>Before Qase, a QA engineer moving to a new team or helping another product often needed a week or two just to understand where test cases lived and figure out what had been covered and what hadn&#x2019;t.</p><p>Now, Brandon estimates that the same understanding can usually be achieved in a few hours.</p><p>With Qase, an engineer can browse or search test plans and suites in one place, filter by component, feature, or configuration, see which versions and environments are in scope, and understand the team&#x2019;s existing coverage and known gaps.</p><p>This reduction in onboarding friction lets the QA org be far more fluid in how it deploys people across teams and initiatives.</p><h2 id="cross-product-coverage-testing-the-suite-not-just-the-pieces">Cross-product coverage: testing the suite, not just the pieces</h2><p>SUSE doesn&#x2019;t just sell a single product; it sells a suite that must work together: Rancher management, virtualization, observability, security, and more.</p><p>Qase has become central to understanding end-to-end coverage across these components.</p><p>&#x201C;We&#x2019;re able to look at test coverage for, say, Rancher management and our virtualization efforts,&#x201D; Brandon explains. &#x201C;We can see that one team has tested X, Y, and Z, but we&#x2019;re missing A, B, and C on the virtualization side, or that we&#x2019;ve never tested feature A with feature B enabled at the same time.&#x201D;</p><p>That ability to quickly spot cross-product gaps is crucial for enterprise customers who run SUSE in heterogeneous, high-stakes environments.</p><h2 id="qa-made-visible-and-interesting-to-the-wider-organization">QA made visible and interesting to the wider organization</h2><p>Traditionally, testing is a black box. If things break, customers notice. If they don&#x2019;t, QA has quietly done its job. 
Qase changed that dynamic inside SUSE.</p><blockquote>&#x201C;Testing is usually very opaque to anyone not in QA or development,&#x201D; Brandon says. &#x201C;The dashboards and metrics open a window into what teams are actually doing.&#x201D;</blockquote><p>Now architects browse Qase to understand test history and outcomes. Product managers dig into runs to reassure customers about specific OS or platform coverage. Project managers use Qase data to plan and track release readiness.</p><p>Brandon shares a story about an architect who saw a dashboard full of failing runs and got worried that releases were going out in that state. A quick walkthrough showed that he was looking at cumulative testing across all builds, including early, intentionally unstable ones. Filtering to only final release-candidate runs showed what actually went out to customers: fully passing test suites.</p><p>&#x201C;You could see his confidence increase as we walked through it,&#x201D; Brandon says. &#x201C;He started asking deeper questions about how we test and why. Those are great conversations to have.&#x201D;</p><p>And this broader engagement is only possible because read-only access is affordable. There&#x2019;s far less financial friction to &#x201C;just log in and poke around.&#x201D;</p><h2 id="productivity-more-time-spent-testing-less-fighting-tools">Productivity: more time spent testing, less fighting tools</h2><p>SUSE doesn&#x2019;t try to measure productivity through micromanagement. No one is counting how many minutes a tester spends writing steps.</p><p>But the qualitative signals are strong. Engineers say it&#x2019;s easier to write test cases and plans in Qase. Creating and assigning runs is simpler and faster than managing spreadsheets. Reporting both manual and automated results is far less tedious.</p><p>And quantitative indicators back that up:</p><blockquote>&#x201C;Our test cases and automation have been continuously increasing,&#x201D; Brandon says. &#x201C;That can&#x2019;t happen if we&#x2019;re getting bogged down by process and manual tools.&#x201D;</blockquote><p>By cutting away the friction of scattered documents and manual compilation, Qase lets Brandon&#x2019;s org spend more time on what actually matters: finding issues early and ensuring releases are solid.</p><h2 id="conclusion-qa-that-meets-the-premium-standard-of-suse%E2%80%99s-cloud-native-offering">Conclusion: QA that meets the premium standard of SUSE&#x2019;s cloud-native offering</h2><p>SUSE&#x2019;s cloud-native business operates in one of the most complex, demanding corners of enterprise infrastructure. Kubernetes, hybrid and edge deployments, an ever-growing matrix of providers and platforms&#x2014;none of that leaves room for imprecise QA.</p><p>Brandon&#x2019;s journey&#x2014;from being one of three QA engineers at a startup, to leading an 80-person quality org using Qase as its backbone&#x2014;is a blueprint for QA leaders facing the same transition.</p><p>From scattered artifacts to a single source of truth; from ad hoc reporting to real-time executive dashboards; from opaque QA work to a visible, respected discipline.</p><p>Qase didn&#x2019;t create SUSE&#x2019;s quality culture, but it gave Brandon and his teams the platform they needed to express that culture at enterprise scale, and to show executives, customers, and colleagues exactly what quality looks like in a cloud-native world.</p>]]></content:encoded></item></channel></rss>