With builds now shipping many times a day, AI-led testing is the modern approach: it lets quality engineers create scripts and autonomously run tests that work together to find bugs and provide the data needed to get to root cause. It’s time to retire slow, outdated processes and experience this new form of QE improvement!
AI-driven testing means different things to different QA engineers. Some see it as using AI to identify objects or enable scriptless testing; others consider it the autonomous generation of scripts; still others think in terms of leveraging system data to create scripts that mimic real user activity.
With AI test technology now maturing, these are all functions that AI can drive in quality engineering.
While the majority of testers and engineers have yet to experience this form of QE improvement, let’s explore how this kind of QA is done. You don’t want to risk falling behind.
The Waterfall method takes months, even years. So why are engineers sticking with it?
Old habits die hard. The method we use to test software today stems from work at Mercury Interactive in the mid-1990s, where a method of coding to automate user actions was defined. That code required maintenance at each new build, which was fine as long as a new build shipped only a few times a year. Over time, though, the effort to maintain the test code would come to exceed the effort to build the software itself.
Turn the clock forward 25 years to today and the majority of test automation still uses this waterfall method. We spend months or even years writing tests mandated by a business analyst, then throw huge resources at maintaining those tests.
While we may have moved on from QTP to UFT to Selenium (which is itself nearly 20 years old), the process flow is the same. However, instead of a new build a few times a year, we have new builds a few times a day, or perhaps a few times an hour.
So why are we still using a process designed around months-long cycles in a workflow that ships daily? Because the open source offerings didn’t change the process, and that’s what the industry gave us: new languages, but the same method that was used in 1995.
We have found that teams who implement what they can in scripts and manual testing achieve, on average, less than 15% coverage of code, pages, actions, and likely user flows. In essence, even if you hit 100% of your planned test cases, you are likely testing less than 15% of what users will actually do. That in itself is a serious issue. Remember, the primary task is to find all the bugs before users do, and you cannot even prioritize bugs until you find them.
Re-imagining the testing world
Starting in 2012, Appvance set out to rethink the concept of QA automation.
Currently, this requires two technologies in one platform of four million lines of code. That is roughly 100X the size of anything in the open source world.
The idea was this: what the business analyst wants tested is not necessarily the best way to find bugs, or even to mimic real user activity. It is their guess at how people may be using the software, website, or mobile app. It’s not a bad guess, but it is a guess. If that is true, we should be able to dramatically improve our ability to find bugs in, around, and beyond those use cases. In addition, we need to reduce script writing, make it much faster, and make scripts far more resilient to accessor changes.
And we would need to generate many tests fully autonomously. Why? Because applications today are 10X the size they were just ten years ago, but your team doesn’t have 10X the number of test automation engineers, and you have perhaps a tenth of the time to do the work. Each engineer on your team would have to be 100X more productive than ten years ago. But since the test automation tools have not changed, and humans didn’t magically grow 20 arms and more brain power, automation engineers will never be able to catch up with the needs of the organization and identify bugs before your users do.
Let’s face it, there is no way to catch up. You know this already. We take shortcuts, drop many tests, and ignore the results of others, doing the best we can with the limited resources at hand. And the next build adds more features, pages, and states, which means more work each day, and you continue to fall further behind. Forever.
Unless you change something.
And that something is AI.
AI testing in two steps
Leveraging AI, we have seen over a 90% reduction in the human effort needed to find the same bugs. So how does this work?
It’s really a two-stage process.
First, by applying targeted bits of AI in Test Designer, Appvance’s codeless test creation system, we make it possible to write scripts faster, identify more resilient accessors, and substantially reduce script maintenance.
The system does four things: it picks the most stable accessors; it immediately reruns the script several times to find even better accessors and automatically build an accessor repository; it falls back to those alternate accessors when a running test hits an accessor change; and it self-heals scripts that must be updated with new accessors. Together, these four built-in technologies give you the most stable scripts every time, with the most robust accessor methodology and self-healing. Nothing else comes close.
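The fallback-and-self-heal idea can be sketched in a few lines. This is a minimal illustration only, not Appvance's implementation: it assumes a hypothetical repository that stores several candidate accessors per logical element, ranked by observed stability, and "heals" by promoting whichever accessor still resolves after a UI change (a `set` membership check stands in for a real DOM lookup).

```python
# Illustrative sketch only -- not Appvance's code. A repository of ranked
# candidate accessors per element, with fallback and self-healing.

class AccessorRepository:
    """Stores ranked candidate accessors for each logical element."""

    def __init__(self):
        self._candidates = {}  # element name -> ordered list of accessors

    def record(self, element, accessors):
        # Accessors observed across repeated runs, most stable first.
        self._candidates[element] = list(accessors)

    def resolve(self, element, page):
        """Try each candidate in order; promote the first one that works."""
        for i, accessor in enumerate(self._candidates.get(element, [])):
            if accessor in page:          # stand-in for a real DOM lookup
                if i > 0:                 # a fallback succeeded: self-heal
                    self._candidates[element].insert(
                        0, self._candidates[element].pop(i))
                return accessor
        raise LookupError(f"no working accessor for {element!r}")


repo = AccessorRepository()
repo.record("login_button", ["#login-btn", "//button[text()='Log in']"])

# Build 1: the preferred id accessor still resolves.
build1 = {"#login-btn", "//button[text()='Log in']"}
print(repo.resolve("login_button", build1))   # -> #login-btn

# Build 2: the id changed, so the script falls back to the XPath and the
# repository reorders itself -- no human maintenance required.
build2 = {"//button[text()='Log in']"}
print(repo.resolve("login_button", build2))   # -> //button[text()='Log in']
```

The point of the design is that an accessor change becomes a non-event: the test keeps running on a fallback, and the repository quietly rewrites its own preference order instead of waiting for an engineer to patch the script.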
The second stage is the autonomous generation of tests. To beat the queue and crush it, you need a heavy lift in finding bugs and, as we have learned, to go far beyond the use cases a business analyst listed. If job one is to find bugs and then prioritize them, leveraging AI to generate tests autonomously is a godsend.
In general, an AI engine already trained on millions of actions attempts to create real user flows: it takes every possible action, reveals every page, fills out every form, reaches every state, and validates the most critical outcomes, all without anyone writing or recording a single script. It is fully machine driven. This is called blueprinting an application, and you do it at every new build. Often this will generate 1,000 or more scripts in a matter of minutes, run them itself, and hand you the results, including bugs, a load of data to help find root cause, and the scripts to reproduce each bug. A further turn of the crank can refine these scripts into exact replicas of what production users are doing and apply them to the new build.
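The mechanics of a blueprint crawl can be illustrated with a toy model. This sketch is an assumption-laden stand-in, not the actual engine: the application is reduced to a hypothetical graph of pages and clickable actions, and a plain breadth-first search replaces the trained AI that would drive a live browser and rank actions; it still shows the core output, one replayable "script" (a path of actions) per discovered state.

```python
# Illustrative sketch only -- a toy "blueprint" crawl. A real AI engine
# drives a live app; here a BFS over a hypothetical page graph stands in,
# emitting one replayable action path (a "script") per reachable page.

from collections import deque

# Hypothetical app map: page -> {action label: destination page}
app = {
    "home":      {"click Login": "login", "click Shop": "catalog"},
    "login":     {"submit form": "dashboard"},
    "catalog":   {"click Item 1": "product", "click Cart": "cart"},
    "product":   {"add to cart": "cart"},
    "cart":      {"checkout": "checkout"},
    "dashboard": {},
    "checkout":  {},
}

def blueprint(app, start):
    """Visit every reachable state; return one action path per page."""
    scripts = {start: []}            # page -> shortest action path to it
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for action, dest in app[page].items():
            if dest not in scripts:  # unseen state: record how we got here
                scripts[dest] = scripts[page] + [action]
                queue.append(dest)
    return scripts

scripts = blueprint(app, "home")
print(len(scripts))          # -> 7 (every page reached autonomously)
print(scripts["checkout"])   # -> ['click Shop', 'click Cart', 'checkout']
```

Each generated path is exactly the kind of artifact the text describes: a script no human wrote, which can be replayed against the next build to check that the same state is still reachable and still behaves.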
Any modern approach to continuous testing needs to leverage AI both to help QA engineers create scripts and to create tests autonomously, so that the two work together to find bugs and provide the data to get to root cause. That AI-driven future is available today from Appvance.