
1. Feature Overview

Harmony now supports automated test design based on user-defined requirements, streamlining the test design process and enhancing accuracy. It generates reliable, close-to-optimal test cases for domain testing.

2. How It Works

  • Step 1: Add Requirements. Begin by entering your test requirements in the Test Design panel. These can be written in plain language (see the example after these steps).
  • Step 2: Select Settings.
  • Step 3: Select both ‘AI test design’ and ‘Domain table’.
  • Step 4: Select “Generate”. Harmony validates your input. If there’s a problem, it generates a descriptive message that explains the issue and suggests corrections.
  • Step 5: Review Generated Tests. If the requirements pass validation, Harmony automatically produces relevant test cases for review.
  • Step 6: Regenerate if Needed. If you adjust any requirements after test generation, use Regenerate to produce updated test cases.
  • Step 7: Review Explanations. Harmony adds short explanatory notes when needed, showing:
    • Dependencies between requirements
    • Domain-testing indicators
    • Purpose of each requirement
  • Step 8: Handle Errors. If test cases appear incorrect, regeneration can often resolve the issue.
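
For illustration, here is a hypothetical requirement set for a small webshop (invented for this example), written the way you might enter it in Step 1:

```
The user can add products to the cart.
The total price is the sum of the prices of the products in the cart.
Checkout is not possible until the total price reaches 10 euros.
After a successful checkout, the cart becomes empty.
```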

3. Tips for Testers & Analysts

  • Keep requirements clear and concise for better AI interpretation. Unlike your company’s team, the AI has no domain knowledge about your planned system, so you must include all the information needed to fully understand it. This may require additional requirements and explanations. Note that the AI may not recognise that information is missing and will simply generate an incomplete test set. Extend the requirements even if a tester considers them perfect.
  • Add instructions to Harmony when needed. You can add any instructions right after the requirements. For example, if you don’t want domain testing because other techniques already cover the required tests, just write a sentence such as: Please ignore domain testing. If some tests are missing, an instruction to add them usually makes Harmony generate the necessary test cases (see the example after this list). Rather than extending the model yourself on the first attempt, instruct Harmony to do an even better job; remember that regeneration is free for you.
  • Check Harmony’s feedback carefully. Harmony not only creates the model but also gives you explanations and additional information. When you don’t know why certain test cases were created, these explanations are important; you can also ask for additional explanation (see the previous bullet point).
  • Repeat generation when necessary. If Harmony doesn’t generate the model, or the result isn’t good enough, repeat the generation.
  • Regenerate the model after any major requirement change.
  • Use explanations to trace back test logic and ensure coverage.
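
For example, a free-form instruction appended after the requirements might read like this (the wording is entirely up to you; the missing test named here is invented for illustration):

```
Please ignore domain testing.
A test case for removing a product from the cart is missing; please add it.
```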

4. How AI Applies Test Design Techniques

Harmony applies linear test design techniques and test selection criteria, so the number of test cases remains manageable. Although we use only a few test design techniques, together they detect most defects.
We use the following methods:
  • Domain testing
  • Action-state testing
  • Complementary tests
  • Extreme-value tests
We argue that no other test design technique is required.

Domain Testing

Domain testing is a generalization of boundary value analysis. Harmony automatically identifies when domain testing is necessary and generates domain tables to support the process. Applying domain testing detects all control-flow errors.
To ensure the domain table is accurate, the specification itself must be precise. This includes clearly defining the type and precision of each parameter. By default, Harmony assumes parameters are of type decimal with a precision of 0.01. If a parameter is an integer, the specification should explicitly state this—for example: “Total price is an integer.”
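For example, for a hypothetical rule requiring a value of at least 100, the nearest failing value is 99.99 under the default decimal precision of 0.01, but 99 if the specification states that the value is an integer.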
Our method is highly reliable and capable of detecting all defects in the control flow. However, there are cases where the available test data may prevent the creation of a feasible test case. Consider the following requirement:
Checkout is not possible until the total price reaches 10 euros.
In this scenario, the OFF data point should be 9.99. But if the existing data set doesn’t allow for a total price of exactly 9.99, Harmony may be unable to generate a feasible test case. Does this mean Harmony produced an incorrect test? Absolutely not.
Rather than altering the test case, the correct approach is to adjust the input data to make the test executable. Otherwise, the tests may pass initially, but future changes in input data—without any modification to the code—could expose a defect. This can lead to confusion for developers who won’t understand how the bug emerged.
We strongly recommend executing the original test cases generated by Harmony. If input data needs to be modified, ensure that either:
  • the code remains correct, or
  • the defect is already caught by the initial test.
This approach safeguards against hidden bugs and ensures long-term reliability.
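
As a concrete illustration of the checkout boundary discussed above, here is a minimal sketch in Python; `can_checkout` is a hypothetical stand-in for the system under test, and Harmony generates the equivalent cases against your real interface:

```python
def can_checkout(total_price: float) -> bool:
    """Reference behaviour: checkout requires a total of at least 10 euros."""
    return total_price >= 10.00

def test_checkout_on_point():
    # ON point: exactly on the boundary; checkout must be possible.
    assert can_checkout(10.00)

def test_checkout_off_point():
    # OFF point: one precision step (0.01) below the boundary;
    # checkout must be blocked. This is the 9.99 case discussed above.
    assert not can_checkout(9.99)
```

If the 9.99 data point cannot be produced from the existing data set, adjust the input data rather than the test, as recommended above.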

Action-State Testing

Action-state testing is an advanced test design technique that excels at uncovering computational faults. In some systems, it's sufficient to test against the stated requirements. But in others—especially those with complex workflows—additional testing is essential. That’s where state awareness comes into play.
By incorporating system states into the test steps and applying a simple yet effective selection criterion, Harmony’s AI generates additional test cases that go beyond requirement-based coverage.
You’ll see in the reasoning when action-state testing is necessary. These extra test cases are valuable and typically require no additional implementation effort. So don’t dismiss them as redundant—they’re often critical for catching subtle defects.
Want to explore how to perform action-state testing manually? We've got you covered.
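
For a taste of the manual approach, here is a hypothetical sketch of the idea: each step asserts the state the system should be in after an action, not just the final outcome. The `Order` class is invented for this illustration and is not Harmony’s API:

```python
class Order:
    """Illustrative workflow: created -> paid -> shipped."""

    def __init__(self):
        self.state = "created"

    def pay(self):
        if self.state != "created":
            raise RuntimeError("only a newly created order can be paid")
        self.state = "paid"

    def ship(self):
        if self.state != "paid":
            raise RuntimeError("only a paid order can be shipped")
        self.state = "shipped"

def test_action_state_sequence():
    # Assert the state after every action; checking intermediate
    # states is what catches computational faults mid-workflow.
    order = Order()
    order.pay()
    assert order.state == "paid"
    order.ship()
    assert order.state == "shipped"
```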

Complementary Testing

Negative testing is defined as “a software testing technique that involves inputting invalid, unexpected, or incorrect data to check how a system handles errors and prevents crashes or vulnerabilities.”
Complementary testing includes negative testing—but goes further. It covers alternative scenarios that aren’t explicitly defined in the requirements but could realistically occur.
Example:
Requirement: The user can attach files to messages.
Complementary test: The user sends a message without attaching any files.
These tests help ensure robustness by validating behavior in edge or omitted cases.
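
A minimal sketch of the example above, with `send_message` as an invented stand-in for the system under test:

```python
def send_message(text, attachments=None):
    """Deliver a message; attachments are optional."""
    return {"text": text, "attachments": attachments or []}

def test_send_message_without_attachments():
    # The requirement only says files *can* be attached; the
    # complementary case sends a message with no attachments at all.
    message = send_message("hello")
    assert message["text"] == "hello"
    assert message["attachments"] == []
```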

Extreme Value Testing

Extreme value testing targets the minimum and maximum input boundaries that may trigger edge-case behavior.
Example:
Requirement: Mask the email address by replacing the middle part of the username and domain (excluding the TLD) with asterisks—keeping only the first and last characters visible.
To validate this, we must test with the shortest possible valid email address.
This ensures the masking logic holds even at the limits of input length.
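
For illustration, here is a sketch of such a test in Python, assuming an invented `mask_email` reference implementation of the rule (not Harmony’s code):

```python
def mask_email(address: str) -> str:
    user, _, host = address.partition("@")
    domain, _, tld = host.rpartition(".")

    def mask(part: str) -> str:
        if len(part) <= 2:
            return part  # nothing between first and last character to hide
        return part[0] + "*" * (len(part) - 2) + part[-1]

    return f"{mask(user)}@{mask(domain)}.{tld}"

def test_shortest_valid_email():
    # Extreme value: one-character username and domain; the masking
    # logic must not crash or hide the only visible characters.
    assert mask_email("a@b.co") == "a@b.co"

def test_typical_email():
    assert mask_email("johndoe@example.com") == "j*****e@e*****e.com"
```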
