BDD-Based Integration Testing Framework for NebulaGraph: Part II


In BDD-Based Integration Testing Framework for NebulaGraph: Part I, I introduced the evolution of integration testing for NebulaGraph. In this article, I will show how to add a test case to the test set and how to run all the test cases successfully.

Preparing Testing Environment

At the beginning of building the testing framework for NebulaGraph 2.0, we developed some tool classes to help the testing framework quickly start and stop a single-node NebulaGraph cluster, including checking for port conflicts and modifying some of the configurations. Here is the original execution procedure:

  1. Using a Python script to start the NebulaGraph services.
  2. Calling pytest.main to execute all the test cases concurrently.
  3. Stopping the NebulaGraph services.

However, this approach requires passing parameters through to the pytest.main function to control pytest's behavior, and executing a single test case depends on the scripts generated by cmake, which makes the framework inconvenient for users. What we want is to be able to execute a test case right where it is located.
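
A minimal sketch of that original entry point might look like the following. Note that NebulaService here is an illustrative stand-in for the real tool classes, not the actual API:

# Hypothetical sketch of the original entry script.
import sys
import pytest

class NebulaService:    # illustrative stand-in for the real tool classes
    def start(self):
        ...             # launch metad/storaged/graphd, check for port conflicts
    def stop(self):
        ...             # stop the services and clean up

if __name__ == "__main__":
    service = NebulaService()
    service.start()     # start a single-node NebulaGraph cluster
    try:
        # extra pytest options must be forwarded by hand here,
        # which is what made this entry point inconvenient
        code = pytest.main(["tck/"] + sys.argv[1:])
    finally:
        service.stop()  # stop the NebulaGraph services
    sys.exit(code)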

Starting Services

During this improvement of the testing framework, besides the changes to the program entry, most of the original encapsulated logic is reused. Many test cases have been accumulated for NebulaGraph, so single-process execution can no longer meet the requirements of fast iteration. We tried several parallel test executor plugins and, considering the compatibility requirements, finally chose pytest-xdist to accelerate the testing procedure.

Pytest supports fixtures across five scopes: session, module, class, package, and function. However, we need a global fixture to start and initialize the NebulaGraph services, and even a session-scoped fixture, the highest level, is executed once by each runner. For example, if there are eight runners, eight NebulaGraph database services would be started, which is not what we want.

According to the documentation of pytest-xdist, a lock file is needed for inter-process communication between the runners. To keep the control logic simple, we separate the logic for starting, stopping, and preparing the services from the test execution itself: a single dedicated step starts NebulaGraph, and when some tests fail, Nebula Console can connect to the NebulaGraph database under test for validation and debugging.
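
For reference, the lock-file pattern suggested by the pytest-xdist documentation looks roughly like the sketch below: a session fixture in which only the first worker to acquire the lock actually starts the service. Here start_nebula_service is a hypothetical helper, not the framework's real API:

import json
import pytest
from filelock import FileLock

def start_nebula_service():
    # hypothetical helper: starts metad/storaged/graphd and returns
    # connection info such as the graphd port
    return {"graph_port": 9669}

@pytest.fixture(scope="session")
def nebula_service(tmp_path_factory, worker_id):
    if worker_id == "master":
        # not running under pytest-xdist: just start the service
        return start_nebula_service()
    # with xdist, serialize the startup across runners via a lock file
    root_tmp_dir = tmp_path_factory.getbasetemp().parent
    fn = root_tmp_dir / "nebula.json"
    with FileLock(str(fn) + ".lock"):
        if fn.is_file():
            conn_info = json.loads(fn.read_text())  # already started
        else:
            conn_info = start_nebula_service()
            fn.write_text(json.dumps(conn_info))
    return conn_info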

Importing Data

Before the new framework, data was imported into NebulaGraph by executing entire nGQL INSERT statements, which caused the following problems:

  1. When the imported dataset is large, the INSERT statement becomes very long, and a timeout error may occur when the client executes the query.
  2. It is difficult to develop new datasets for testing, because the data in the CSV files must first be converted into nGQL files.
  3. The dataset cannot be reused. For example, the data of a CSV file cannot be directly imported into graph spaces whose VIDs have different data types, because different INSERT statements are needed.

To solve these problems, referring to the implementation of Nebula Importer, we separated the importing logic from the dataset completely and implemented a new importing module in Python. However, so far, only CSV files are supported, and one CSV file can store the data of only one tag or edge type.
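
The core idea can be sketched in a few lines of Python. This is an illustrative simplification, not the real module, which also reads config.yaml, batches the statements to avoid client timeouts, and handles edges analogously:

import csv

def csv_to_insert_vertex(path, tag, vid_is_int=False):
    # One CSV file holds the data for exactly one tag; the same file
    # can be rendered for spaces with either string or int VIDs.
    with open(path, newline="") as f:
        header, *rows = list(csv.reader(f))
    props = ", ".join(header[1:])  # the first column is the VID
    values = []
    for row in rows:
        vid = row[0] if vid_is_int else f'"{row[0]}"'
        # simplification: all property values are quoted as strings here
        vals = ", ".join(f'"{v}"' for v in row[1:])
        values.append(f"{vid}:({vals})")
    return f"INSERT VERTEX {tag}({props}) VALUES {', '.join(values)};"

# e.g. csv_to_insert_vertex("data/basketballplayer/player.csv", "player")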

According to the new importing module, the structure of the dataset for NebulaGraph testing becomes clear.

nebula-graph/tests/data
├── basketballplayer
│   ├── bachelor.csv
│   ├── config.yaml
│   ├── like.csv
│   ├── player.csv
│   ├── serve.csv
│   ├── team.csv
│   └── teammate.csv
├── basketballplayer_int_vid
│   └── config.yaml
└── student
    ├── config.yaml
    ├── is_colleagues.csv
    ├── is_friend.csv
    ├── is_schoolmate.csv
    ├── is_teacher.csv
    ├── person.csv
    ├── student.csv
    └── teacher.csv

3 directories, 16 files

Each directory contains all the CSV files for one graph space. The description of each file and the details of the graph space are configured in the config.yaml file in each directory. In the preceding example, the two graph spaces, "basketballplayer" and "basketballplayer_int_vid", share the same dataset. To add a new dataset, only a directory like "basketballplayer" is needed. For more information about config.yaml, see the nebula-graph repository.
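
For illustration only, a config.yaml for such a dataset might look roughly like the sketch below. The field names here are hypothetical; the real schema is defined in the nebula-graph repository:

# Hypothetical sketch, not the real config.yaml schema.
space:
  name: basketballplayer
  vid_type: FIXED_STRING(32)
schema: |
  CREATE TAG player(name string, age int);
  CREATE EDGE like(likeness int);
files:
  - path: ./player.csv
    tag: player
  - path: ./like.csv
    edge: like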

Installing Dependencies

Besides the commonly used libraries pytest and nebula-python, the testing framework uses some plugins such as pytest-bdd and pytest-xdist. In addition, to better unify the format of the FEATURE files for new test cases, reformat-gherkin is introduced, and some format modifications were made to align with the format of the openCypher TCK FEATURE files.

Currently, nebula-python and reformat-gherkin are installed from source. To simplify the installation, a makefile is provided in the nebula-graph/tests directory. To prepare the testing environment, run the following command.

$ cd nebula-graph/tests && make init-all

The format check procedure has been added to the GitHub Action CI process. If your files are not compliant with the expected format, run make fmt to format them.

Writing Test Cases

According to Part I, the BDD-based testing framework for NebulaGraph follows a black-box testing process, which means you do not need to know how your statements are invoked internally or which function calls would produce the expected results. All you need to do is write a FEATURE file in natural language, based on the rules. Here is a test case example.

Feature: Variable length pattern match (m to n)
  Scenario: both direction expand with properties
    Given a graph with space named "basketballplayer"
    When executing query:
      """
      MATCH (:player{name:"Tim Duncan"})-[e:like*2..3{likeness: 90}]-(v)
      RETURN e, v
      """
    Then the result should be, in any order, with relax comparison:
      | e                                                                                  | v                  |
      | [[:like "Tim Duncan"<-"Manu Ginobili"], [:like "Manu Ginobili"<-"Tiago Splitter"]] | ("Tiago Splitter") |

In a FEATURE file, the Given section provides the initial conditions for the test. In this example, it chooses a graph space named "basketballplayer" with the dataset pre-imported. The When section contains the inputs for testing, that is, nGQL statements. The Then section gives the expected results and the comparison method; in this example, the records of the table should be compared in a relaxed and unordered manner.

FEATURE File Format

The FEATURE files are written in Gherkin language. A FEATURE file is composed of the following sections:

  • Feature: It contains the title and/or a detailed description of the file.
  • Background: It contains the steps that are common to all the scenarios in the same FEATURE file.
  • Scenario: It contains the steps necessary for a test case.
  • Examples: It separates the scenario from its data to further simplify editing the Scenario section.

Each scenario has its own steps. Each step is composed of these:

  • Given: Specifies the initial conditions for the test case. In the Background section, only Given steps should be used.
  • When: Specifies the inputs for the scenario.
  • Then: Describes the expected result when the steps inside the When section are done.
  • And: Optional. It follows Given, When, or Then to further describe these steps.
  • Examples: Similar to the Examples section described above, but limited to the scenario where it is located; it has no effect on the testing of the other scenarios in the same FEATURE file (see the sketch after this list).
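
Here is a schematic illustration (not a real NebulaGraph test case) of how Background and a scenario-level Examples table fit together; a placeholder such as <n> is filled in from the table, row by row:

Feature: Examples illustration
  Background:
    Given a graph with space named "basketballplayer"

  Scenario Outline: expand with a variable number of steps
    When executing query:
      """
      MATCH (:player{name:"Tim Duncan"})-[e:like*1..<n>]-(v)
      RETURN count(*) AS cnt
      """
    Then the execution should be successful

    Examples:
      | n |
      | 2 |
      | 3 |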

Steps

According to the preceding description, each Scenario is composed of several steps. In the NebulaGraph testing framework, the step format is compliant with the openCypher TCK rules, and some special steps have been developed to simplify the editing of test cases:

  1. Given a graph with space named "basketballplayer": Using a graph space with the "basketballplayer" dataset pre-imported.
  2. creating a new space with following options: Creating a new graph space by specifying these parameters: name, partition_num, replica_factor, vid_type, charset, and collate.
  3. load "basketballplayer" csv data to a new space: Importing the "basketballplayer" dataset into the new graph space.
  4. profiling query: Executing the PROFILE statement. An execution plan will be returned.
  5. wait 3 seconds: Waiting three seconds. Time is needed for data synchronization during schema operations, so such a step is necessary.
  6. define some list variables: Defining some variables of the LIST type to use in the expected results.
  7. the result should be, in any order, with relax comparison: Comparing the result in a relaxed and unordered manner, which means only the values listed in the expected result are compared and everything else is ignored.
  8. the result should contain: Checking that the result contains the expected records.
  9. the execution plan should be: Comparing the returned execution plan with the expected one.

Besides the preceding steps, more steps can be defined to speed up the development of test cases.
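
New steps are ordinary pytest-bdd step definitions. As a minimal sketch, a step like wait 3 seconds could be implemented as follows (the framework's real step definitions live in the tests directory of the nebula-graph repository):

import time

from pytest_bdd import parsers, when

@when(parsers.parse("wait {seconds:d} seconds"))
def wait_seconds(seconds):
    # give the services time to synchronize schema changes
    time.sleep(seconds)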

Parser

openCypher TCK defines the format of the expected results. The format of vertices and edges is borrowed from the pattern of MATCH, so if you are familiar with the openCypher query language, you can easily understand the results of the TCK test cases. For example, the format of some of the graph semantics is as follows:

  1. Describing a vertex: (:L {p:1, q:"string"})
  2. Describing an edge: [:T {p:0, q:"string"}]
  3. Describing a path: <(:L)-[:T]->(:L2)>

However, NebulaGraph differs a little from Neo4j in its graph model. For example, in NebulaGraph, each tag can have its own properties, so under the existing openCypher TCK rules, a vertex with multiple tags that each have their own properties cannot be described. The description of edges has a similar problem: in NebulaGraph, an edge key is a four-tuple of (src, dst, type, rank), but the existing openCypher TCK rules do not support src, dst, and rank. Therefore, to solve these problems, we expanded the expressions of the expected results:

  1. Describing a vertex with multiple tags that have their own properties: ("VID" :T1{p:0} :T2{q: "string"})
  2. Describing edges with src, dst, and rank: [:type "src"->"dst"@rank {p:0, q:"string"}]
  3. Describing a path by combining the expressions of vertices and edges above. Refer to the TCK rules.

The expanded expressions of vertices and edges are compatible with the existing TCK test cases and fit the design of NebulaGraph. Besides the expression problem, the next one we faced was how to efficiently and accurately convert these expressions into specific data structures that can be compared with the actual query results. After considering solutions such as regular expression matching, we decided to construct a parser to process the strings with specific syntax rules. This solution has the following advantages:

  1. According to the specific syntax rules, the generated AST can use data structures that are compliant with the expected result rules. Then, in the validation phase, only the specific fields in those structures need to be validated.
  2. Processing complex regular expression matching of strings can be avoided, which can reduce parsing errors.
  3. Parsing other strings, such as regular expressions, lists, and collections, can be supported.

With ply.yacc and ply.lex, we can implement the complex requirements described above with a small amount of code. For more information about the implementation, see nbv.py.
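
For illustration, here is a tiny standalone grammar in the same spirit. It parses just a vertex with an optional VID and tag names, such as ("player100" :player :bachelor), and builds a small dict as the AST; the real nbv.py handles properties, edges, paths, and more:

import ply.lex as lex
import ply.yacc as yacc

# --- lexer: quoted strings and identifiers; (, ), : are literals ---
tokens = ("STRING", "IDENT")
literals = "():"
t_ignore = " \t"

def t_STRING(t):
    r'"[^"]*"'
    t.value = t.value[1:-1]  # strip the quotes
    return t

def t_IDENT(t):
    r"[A-Za-z_][A-Za-z0-9_]*"
    return t

def t_error(t):
    raise ValueError(f"illegal character {t.value[0]!r}")

# --- grammar: vertex := '(' [STRING] {':' IDENT} ')' ---
def p_vertex(p):
    "vertex : '(' vid tags ')'"
    p[0] = {"vid": p[2], "tags": p[3]}

def p_vid(p):
    """vid : STRING
           | empty"""
    p[0] = p[1]

def p_tags_many(p):
    "tags : tags tag"
    p[0] = p[1] + [p[2]]

def p_tags_empty(p):
    "tags : empty"
    p[0] = []

def p_tag(p):
    "tag : ':' IDENT"
    p[0] = p[2]

def p_empty(p):
    "empty :"

def p_error(p):
    raise ValueError(f"syntax error at {p!r}")

lexer = lex.lex()
parser = yacc.yacc()

print(parser.parse('("player100" :player :bachelor)'))
# {'vid': 'player100', 'tags': ['player', 'bachelor']}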

Testing Procedure

Currently, the testing procedure is as follows:

1) Edit the FEATURE files.

Currently, all the FEATURE files for NebulaGraph can be found in the tests/tck/features directory of the github.com/vesoft-inc/nebula-graph repository.

2) Start the NebulaGraph services.

$ cd /path/to/nebula-graph/tests
$ make up # Starts NebulaGraph services.

3) Execute the testing locally.

$ make fmt # Formats the FEATURE files.
$ make tck # Executes the TCK tests.

4) Stop the NebulaGraph services.

$ make down # Stops NebulaGraph services.

Debugging

When test cases need to be debugged, you can use some of the methods supported by pytest to debug them further. For example, run these commands to re-execute the test cases that failed in the last run.

$ pytest --last-failed tck/ # Executes the test cases in the tck/ directory that failed in the last run.
$ pytest -k "match" tck/    # Executes the test cases whose names contain "match".

Alternatively, you can add a mark to a specific scenario in the FEATURE file and execute only the marked case. For example, run these commands:

# in feature file
  @testmark
  Scenario: both direction expand with properties
    Given a graph with space named "basketballplayer"
    ...

# in nebula-graph/tests directory
$ pytest -m "testmark" tck/ # Executes the test case with the "testmark" mark.

Summary

Standing on the shoulders of our predecessors allows us to find a more suitable test solution for NebulaGraph, so we would like to thank all the open-source projects mentioned in this article.

In the process of practicing pytest-bdd, we found some imperfections. For example, it has compatibility issues with plugins such as pytest-xdist (gherkin-reporter), and pytest does not natively provide a global-scope fixture. However, in general, the benefits it brings to NebulaGraph far outweigh these problems.

In Part I, I mentioned that the new testing framework enables no-programming test development, and it is not a fantasy. Once the patterns described here are fixed, we can develop a scaffold for adding test cases, allowing users to "fill in the blanks" with data on a page to automatically generate the corresponding FEATURE files, which can further facilitate users. If you are interested, you are welcome to contribute to this idea.