This article was first published on the personal website [BY Linzi]
I previously wrote an article, “Sacred QA”, aimed at graduates who want to work in QA; it covers the five basic responsibilities of QA:
- Understand and clarify business requirements
- Develop a strategy and design tests
- Implement and execute tests
- Defect management and analysis
- Quality feedback and risk identification
Recently, a friend asked me to share what aspects systematic testing needs to focus on, and I thought of these five basic responsibilities. The original article explains the five responsibilities through the example requirement of “producing cups”; this article expands on each responsibility, analyzing what testers need to do and how to do it from the perspective of testing practices and methods.
01 Understand and clarify business requirements

Business requirements are the source of software development, so the importance of understanding them correctly is self-evident; understanding and clarifying requirements is also a crucial part of testing work.
1. Dimensions for understanding and clarifying business requirements

How do we understand and clarify requirements? I think testers can understand and clarify business requirements from the following three dimensions:

The details of these dimensions are described in the article “How Agile Testing Optimizes Business Value”.
2. Testability of requirements

In addition to understanding business requirements correctly, the quality of the requirement descriptions also deserves attention, of which testability is the most important aspect:

- If a requirement is not measurable, it is not acceptable, because there is no way to know whether it has been implemented successfully;
- Writing requirements in a testable way ensures they can be properly implemented and validated.

The testability of requirements is mainly reflected in the following three dimensions:
Completeness mainly means that process paths need to be considered comprehensively and the logical links must be complete: there must be both forward paths and abnormal scenarios.

For example, logging in with a correct username and password should succeed, and what happens when logging in with an incorrect username or password must also be clearly defined.
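To make this concrete, here is a minimal sketch in Python; the `login` function and the in-memory user store are hypothetical, used only to show a specification that covers the forward path and both abnormal scenarios:

```python
# Hypothetical in-memory user store, used only to illustrate the requirement.
VALID_USERS = {"alice": "s3cret"}

def login(username: str, password: str) -> str:
    """Return "ok" on success, or an explicit error code for each abnormal path."""
    if username not in VALID_USERS:
        return "unknown_user"
    if VALID_USERS[username] != password:
        return "wrong_password"
    return "ok"

# Forward path: correct username and password succeed.
assert login("alice", "s3cret") == "ok"
# Abnormal scenarios: each failure mode is explicitly specified, not left vague.
assert login("alice", "wrong") == "wrong_password"
assert login("bob", "s3cret") == "unknown_user"
```

Because every path has a defined outcome, each one can be turned directly into a test.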
Requirements should not be described in subjective terms; they need objective data and examples. For example, the following subjective description is a very bad requirement:

The system should be easy to use by experienced engineers and should make as few errors as possible for the user.

It is recommended to write requirements documents using the “specification by example” method, expressing business rules through concrete examples. This makes requirements easy for the different roles on the team to understand and less prone to ambiguity.
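As an illustration of expressing a business rule through examples, here is a sketch in Python; the free-shipping rule and the `shipping_fee` function are made up for this example:

```python
# Hypothetical rule: orders of 100 yuan or more ship free; otherwise shipping is 10 yuan.
# The rule is specified as a table of concrete examples that any team role can read.
EXAMPLES = [
    # (order_total, expected_shipping_fee)
    (99.99, 10),   # just below the threshold
    (100.00, 0),   # exactly at the threshold: free shipping
    (150.00, 0),   # above the threshold
]

def shipping_fee(order_total: float) -> int:
    return 0 if order_total >= 100 else 10

# The examples double as executable checks of the rule.
for total, expected in EXAMPLES:
    assert shipping_fee(total) == expected
```

The examples pin down the ambiguous boundary (is exactly 100 free or not?) that a prose rule would leave open.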
Independence mainly applies to a single business function point (a user story in agile development): it should be as independent as possible, with clear boundaries from other functions, to reduce the hard-to-test function points caused by dependencies.

For example, the input and output must be verifiable within the same function point; the input of function point A must not have to be verified through the output of function point B.

User stories in agile development follow the INVEST principles, which list testability and independence separately. I think independence also affects testability, so independence is treated here as a factor of testability.
02 Develop a strategy and design tests

Developing a strategy and designing tests is the most critical of the five responsibilities and covers a wide range of topics. It appears to consist of two parts, strategy and test design, but it actually encompasses all the aspects of testing that need to be considered. Here are a few of the things I consider most valuable.
1. The one-page test strategy

Strategy sets the direction: doing software testing well is inseparable from the guidance of a test strategy. Test strategies can often be challenging for less experienced testers; however, the “one-page test strategy” I once proposed is a good way to help testers think through and develop a test strategy suitable for their own project. The one-page test strategy is shown below:

The one-page test strategy clearly defines the three parts a test strategy needs to consider:
- Guiding principle: the whole team is responsible for quality
- What to test: the objects and scope of testing
- How to test: shift-left testing, shift-right testing, and lean testing

For more details, please refer to my article “The One-Page Test Strategy”.
2. Test process and quality gates

We often find that some teams define their testing process very clearly, yet set no strict requirements on what each stage must achieve; much quality work is then not done properly, resulting in great pressure on testers in the later testing stages, or low quality of the final delivery.

The one-page test strategy above already contains a testing-process section; quality gates are mentioned separately here mainly to emphasize their importance. The testing process may differ from project to project or team to team, but regardless of which stages it includes, the output of each stage must be clearly defined, that is, the quality gate of each stage must be clearly defined, as shown in the following figure:

Note: this diagram is only an example; it needs to be adapted to the situation of your own team.
3. Typical test types

The flowchart example above lists several test types, and there are many more in practice. Due to space limitations, and because this is not the focus of this article, only four typical types of tests that are very closely related to testers are described here. The classification dimensions of these four types are not the same; I will not go into detail, and readers who are unclear but interested can look them up online.
Smoke testing comes from testing circuit boards: after the power is turned on, watch whether the board smokes. If it does, the board cannot work properly and there is no need to verify anything else.

The smoke test for software verifies that its most basic behavior is normal, for example: does the program run? Does the user interface open? Does the click event work? Only if the smoke test passes is there any point in testing the software's functionality further; otherwise it needs to be fixed and a new version issued.

We find that some teams only run smoke tests on newly developed features, which is not quite right; or rather, such tests should not be called smoke tests. A smoke test should verify basic behavior at the level of the system as a whole, regardless of what is new or old.
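The gating behavior described above (stop at the first failed basic check, and only continue to deeper testing if everything passes) can be sketched as follows; the checks here are hypothetical placeholders:

```python
def run_smoke_suite(checks):
    """Run ordered basic checks; stop at the first failure.

    Returns (passed, name_of_failed_check)."""
    for name, check in checks:
        if not check():
            return False, name
    return True, None

# Hypothetical system-level checks, covering old and new behavior alike.
checks = [
    ("program is running", lambda: True),
    ("user interface opens", lambda: True),
    ("click event fires", lambda: True),
]

passed, failed = run_smoke_suite(checks)
assert passed and failed is None  # only now is deeper functional testing worthwhile
```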
The purpose of regression testing is to verify that existing features are not broken when new features are developed or bugs are fixed. The object of regression testing is therefore mainly existing functionality; testing new functionality is not called regression.

There are usually four kinds of regression testing strategies:

- Full regression: test all existing functionality comprehensively and without discrimination. This strategy costs more, but its coverage is the most complete; it is generally used for products with particularly high quality requirements, such as financial products.
- Selective regression: testers communicate with developers to understand the scope that the current code changes may affect, and choose to regress only the affected functional modules. This form may miss functions that are associated but not known about, so it carries some risk, but it is a more economical approach.
- Metric-based regression: the team sets a requirement on regression test coverage, such as covering 50% of existing functional test cases, and regression testing must not fall below that coverage. This practice of looking only at the numbers is the least recommended: although the coverage figure meets the target, what actually needs testing may go untested.
- Precise regression: this relies on precision testing, which is currently very popular, a technical means of correlating the scope of code changes with test cases and precisely executing the affected cases. It gives the strongest quality guarantee, but the cost of implementing precision testing is very high.

Regression testing can be performed manually or automatically, but the volume of regression testing is usually large, so automated execution is more efficient.
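The core selection step of precise regression can be sketched as follows; the module-to-case mapping is hypothetical and would in practice be derived from, say, coverage data collected during a previous full run:

```python
# Hypothetical mapping from code modules to the test cases that cover them.
CASES_BY_MODULE = {
    "payment": {"test_pay_ok", "test_pay_refund"},
    "login":   {"test_login_ok", "test_login_bad_password"},
    "report":  {"test_monthly_report"},
}

def select_regression_cases(changed_modules):
    """Precise regression: run only the cases covering the changed modules."""
    selected = set()
    for module in changed_modules:
        selected |= CASES_BY_MODULE.get(module, set())
    return selected

# A change touching only the login module triggers only login-related cases.
assert select_regression_cases(["login"]) == {"test_login_ok", "test_login_bad_password"}
```

The expensive part in reality is building and maintaining an accurate mapping, which is why the article notes that precision testing costs a lot to implement.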
End-to-end testing is a classification based on the granularity of test coverage, in contrast to unit tests, interface tests, and so on.

End-to-end testing validates the entire software and its integration with external interfaces from start to finish. Its aim is to test the whole software's dependencies, data integrity, and communication with other systems, interfaces, databases, etc., simulating complete business processes. End-to-end testing therefore best reflects users' real business behavior and is extremely valuable.

However, because end-to-end testing involves the system's various components and external dependencies, it is relatively unstable and expensive to execute. Interface tests and unit tests, which cover smaller scopes, are generally implemented by isolating dependencies; they will not be described in detail here.
Exploratory testing was proposed by Dr. Cem Kaner in 1983, in contrast to scripted testing.

Cem Kaner defines exploratory testing as follows:

“Exploratory testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of their work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.”
The core of exploratory testing is to iterate rapidly through a loop of test learning, test design, test execution, and test result analysis, continuously collecting feedback, adjusting tests, and optimizing value.

Exploratory testing particularly depends on the tester's own initiative, and it needs a relatively relaxed environment that encourages testing innovation in order to work well. If the demands on test metrics are too high, testers' initiative is hard to bring into play and the effect of exploratory testing will be very limited.

Exploratory testing is a relatively free style of testing. It is not recommended to be trapped by various test models, nor to strictly prescribe how exploratory testing must be executed, as this will hurt its effectiveness.

For more information about exploratory testing, please refer to Thoughtworks colleague Liu Ran's article “Exploratory Test Landing Practice” and Shi Xiangyang's article “Exploratory Testing in Agile Projects”.
4. Layered automated testing strategy

When introducing end-to-end testing above, we mentioned tests of different coverage granularity, such as unit tests and interface tests. The layered automation strategy stratifies these test types of different granularity, and it is recommended that automated testing weigh the coverage ratio of the different layers according to factors such as cost and stability.

According to the findings from Google shown in the figure below, we can clearly see the difference in repair cost when problems are found by tests at different layers: a problem found by a unit test costs far less to fix than one found by an end-to-end test. It is therefore generally recommended that test layering lean toward the test pyramid pattern, as shown on the right side of the figure. Thoughtworks colleague Ham Vocke's article “The Practical Test Pyramid” describes this in great detail.

It should be noted that the test pyramid is not a silver bullet and a test strategy is not set in stone: it needs to be adjusted and evolved according to the actual situation, and meeting the current product or project quality goals is what matters.
For more information on automated test layering, you can also refer to the following articles:

- Lean testing
- Thinking and practice of microservices testing
- The test pyramid is not a silver bullet
- The seven-year itch of automated testing on a legacy system
5. Test cases

Designing test cases is an essential basic skill for every tester. The quality of test cases directly affects the effectiveness of testing, so their importance is self-evident, yet designing good test cases is not a simple matter. The test cases described here do not distinguish between manual and automated cases.

Good test cases

First, it is important to understand what kind of test cases are good ones.

A good set of test cases should cover the software system under test and be able to detect its problems. Therefore, good test cases need to have the following characteristics:
- Overall completeness without over-design: a valid set of test cases fully covers the test requirements, with no cases that exceed the test requirements.
- Accuracy of equivalence class partitioning: each equivalence class guarantees that as long as one of its inputs passes the test, the other inputs will also pass.
- Completeness of the equivalence class set: all possible boundary values and boundary conditions have been correctly identified.

Of course, given the complexity of software systems, test cases cannot always achieve exactly 100% coverage; they can only be made as complete as possible.
Test case design methods

To make test cases complete, you need to understand the corresponding design methods. Test cases should be designed by combining business requirements with system characteristics. Several commonly recommended case design approaches are:

- Data flow method: a method of splitting test scenarios based on the data flow in a business process. Consider the data flow through the business process and cut the process at the points where data is stored or changed, forming multiple scenario cases. This is covered in my article “What Comes to Mind When You Think of BDD”.
- Equivalence class partitioning: divide all possible input data of the program into several parts, then select a small number of representative values from each part as test cases. Equivalence classes are divided into valid and invalid ones; test cases designed with this method should be complete and free of redundancy.
- Boundary value method: boundary value analysis complements equivalence class partitioning. It usually takes values exactly equal to, just greater than, or just less than a boundary as test data, covering the boundary values of inputs and outputs as well as cases at the boundaries of equivalence classes.
- Exploratory test model method: I recommend Shi Liang and Gao Xiang's book “The Road to Exploratory Testing Practice”, which divides exploratory testing into three levels by test object: system interaction testing, interactive feature testing, and single-feature testing, introducing a different exploration model for each level. While I don't think exploratory testing needs to follow these models strictly, they can help testers think during exploration and are a valuable reference when designing test cases.
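A small sketch of equivalence class partitioning and boundary value analysis working together, using a made-up rule (valid ages are 18 to 60 inclusive):

```python
# Illustrative rule under test: valid ages are integers in [18, 60].
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# Equivalence classes: one representative per class suffices if the
# partition is accurate (if one member passes, the others should too).
# Boundary values: exactly on, just below, and just above each boundary.
cases = {
    30: True,                       # valid class representative
    10: False, 75: False,           # invalid class representatives
    17: False, 18: True, 19: True,  # lower boundary
    59: True, 60: True, 61: False,  # upper boundary
}
for age, expected in cases.items():
    assert is_valid_age(age) == expected
```

Nine cases cover three equivalence classes and both boundaries; exhaustively testing every age would add no information.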
For case design, you can also refer to the following articles:

- Business requirements and system functions
- Does agile QA need to write test cases?
- Writing and managing test cases
03 Implement and execute tests

Implementing and executing tests means carrying out the testing activities under the guidance of the test strategy and executing the designed tests. This part is relatively simple, so it is introduced briefly from the two dimensions of manual testing and automated testing.
1. Manual testing

Manual testing, as the name suggests, is testing executed by hand. Depending on whether test cases (scripts) are designed in advance, it can be divided into scripted testing and exploratory testing.

Executing scripted tests is relatively straightforward once the test cases are mature. However, some test preparation can be complicated, such as preparing test data through long call chains or bringing the system into the state that triggers the test; configuration adjustments for different environments, including environment preparation and management, may also need to be considered. These are all things testers may need to handle to do manual testing well.

Regarding exploratory testing, “The Road to Exploratory Testing Practice” details the session-based test management (SBTM) approach: the testing mission is broken down into a series of sessions, in each of which the tester completes the design, execution, and documentation of a specific test charter.

As before, this method offers useful guidance for exploratory testing, but it is not recommended to strictly require execution according to this model; otherwise it will destroy the essence of exploratory testing and fail to achieve the intended effect.
2. Automated testing

The previous section introduced the layered strategy for automated testing; here, the implementation and execution of automated testing are introduced.

Implementing automated testing relies on automation tools, so tool selection is critical. The following factors usually need to be considered when choosing a tool:
- Fit for the need: different projects have different needs. Choose according to your needs; seek the right fit rather than the “best” tool.
- Ease of use: both general ease of use and ease of use matched to the skills of the people writing the tests need to be considered. If a tool is unfriendly to newcomers and hard to get started with, it is difficult to get everyone to use it actively.
- Supported languages: it is a good idea to write automation scripts in the same language the project is developed in, so that developers can easily add tests.
- Compatibility: includes compatibility across browsers, platforms, and operating systems.
- Reporting mechanism: reporting the results of automated testing is critical; tools that provide a complete reporting mechanism are preferred.
- Maintainability of test scripts: test code is as important as product code, and test maintenance cannot be ignored, so a tool whose tests are easy to maintain is required.
- Stability of the tool: instability reduces the effectiveness of testing, so the stability of the tool itself must be ensured first; otherwise the gain is not worth the loss.
- Execution speed: the execution speed of test code directly affects testing efficiency; for example, test code written with Selenium and with Cypress differs greatly in execution speed.
Test implementation

Articles about implementing automated tests are everywhere. What is emphasized here is that test data should not be hard-coded in test scripts; instead, the data should be kept separate and the tests made data-driven, to improve the reusability of the test code.
Test execution

Do you think that once automated tests are implemented, execution is simply a matter of running them? Not quite. Test execution also requires certain strategies, such as setting different execution frequencies for different tests as needed, and integrating automated tests into the pipeline for continuous testing and continuous feedback, so as to maximize the value of automated tests.
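One simple way to realize different execution frequencies is tagging. The test names and tags below are hypothetical; a pipeline would pick a tag per stage, for example a fast smoke subset on every commit and fuller regression runs less often:

```python
# Hypothetical test registry with tags indicating when each test should run.
TESTS = {
    "test_login_ok":    {"smoke", "regression"},
    "test_pay_refund":  {"regression"},
    "test_full_report": {"nightly"},
}

def select(tag: str) -> list:
    """Return the tests a pipeline stage should run for the given tag."""
    return sorted(name for name, tags in TESTS.items() if tag in tags)

# On every commit, run only the fast smoke subset...
assert select("smoke") == ["test_login_ok"]
# ...and run the fuller regression set less frequently.
assert select("regression") == ["test_login_ok", "test_pay_refund"]
```

Most test runners offer this natively (for example, marker- or tag-based selection), so in practice the "registry" is just annotations on the tests themselves.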
For automated testing, I recommend reading the following articles:

- “The New Generation of BDD-Enabled Test Frameworks: Gauge + Taiko”
- The Thoughtworks Insights automated testing collection
- The collection of automated testing essays on Liu Ran's website
04 Defect management and analysis

Defects are very valuable to software quality and software testing; good defect management and analysis bring great value, yet they are often easy to ignore.

An important part of defect management is figuring out what the lifecycle of a defect looks like. People often feel that a defect simply goes from discovery to being fixed and verified, but there is actually more to it. I believe the lifecycle of a defect should include the following stages:
- Finding the defect: this is relatively simple, that is, finding system behavior inconsistent with the desired behavior, or non-functional problems such as performance and security issues. Defects may be discovered during testing, reported by users, or found through logs, such as routine log analysis or monitoring alarms.
- Locating the defect and collecting information: after a defect is found, the corresponding information needs to be collected and a preliminary diagnosis made. The relevant information should be collected as completely as possible, including full reproduction steps, scope of impact, affected users, platform, data, screenshots, log information, etc. This step may sometimes require help from developers or operations staff.
- Recording the defect: the collected information is recorded in the defect management system, associated with the corresponding functional module, and assigned a severity.
- Triage/prioritization: not all recorded defects must be fixed. First sort the defects to determine whether they are valid, then prioritize the valid ones and decide which to fix and when. This may need to be done together with the business staff and developers.
- Fixing the defect: this step is done by the developers.
- Testing the defect fix: verify the fix the developers made, ensure the defect itself has been resolved, and perform appropriate regression testing of the related functionality.
- Adding appropriate automated tests: for defects that have been discovered, it is best to add automated tests, so that if a similar problem occurs again it can be found promptly. The automated tests can be unit tests, interface tests, or UI tests, determined by the actual situation and the layered automation strategy. This step may be swapped with the previous one.
- Defect statistics and analysis: statistically analyze year-on-year and period-on-period trends in the number and severity of defects, analyze the root causes of defects with methods such as fishbone diagrams and the 5 Whys, locate the stage at which defects were introduced, and evaluate the effect of previous defect prevention measures.
- Developing improvement measures to prevent defects: based on the results of the statistics and analysis above, develop actionable improvement measures to prevent defects.
- Regularly reviewing and checking improvements: combined with the defect statistics and analysis, regularly review the defect management activities and check the implementation of the improvement measures, so as to continuously optimize the defect management process and better prevent defects.
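As a sketch of the "add an automated test for a found defect" stage, suppose a hypothetical defect where a discount was applied before tax instead of after. Once fixed, a test named after the defect pins the correct behavior so any recurrence is caught immediately:

```python
def total_price(price: float, tax_rate: float, discount: float) -> float:
    # Fixed behavior: tax is applied first, then the discount is subtracted.
    return round(price * (1 + tax_rate) - discount, 2)

def test_bug_1234_discount_applied_after_tax():
    # Guards against the (hypothetical) defect recurring: 100 + 10% tax - 5 = 105.
    assert total_price(100.0, 0.10, 5.0) == 105.0

test_bug_1234_discount_applied_after_tax()
```

Naming the test after the defect also preserves the link between the defect record and the code that guards it.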
Regarding defect management and analysis, I have written related articles before, and you are welcome to read them:

- Effective management of software defects
- How defect analysis helps build quality in
- “It's all dirty data”
05 Quality feedback and risk identification

Testers need a clear understanding of the product's quality status, the ability to identify quality risks in time, and to feed these back to the whole team.

As mentioned in the defect statistics and analysis above, besides defect information there may be much other quality-related data. Collecting and compiling statistics on these data and displaying them visually to the team helps the team's different roles take better responsibility for quality. Quality risks can be identified during the statistical analysis of quality data, and these risks also need to be fed back to the team.

Quality status information may include test coverage, defect-related data, the length of the code freeze period, test waiting time, etc. What information to collect must be tailored to the project's actual quality requirements.

Quality feedback is recommended on a regular basis, with testers leading the definition of what data to collect, developers helping testers collect the relevant data, and the subsequent analysis possibly also requiring developer involvement.
06 Final words

This article is the foundational part of thinking systematically about building testing. Starting from the basic responsibilities of testing, it introduces relevant methods, tools, and practices, and is suitable for junior testers. Of course, intermediate and senior testers can also use it to check whether these basic responsibilities are usually fulfilled and whether the relevant content is covered in their own testing system.

Before new articles come out, you can check out my self-published booklet “More Than Testing” (click the link to download the electronic version for free), which introduces what testers or QA need to pay attention to beyond the basic duties of testing.