
Integration/Test/Test Case Expectations

System tests are valuable in many ways. They help us detect regressions. They tell us about the health of the projects involved in the system tests. They give us confidence that what we are building and releasing will work as designed.

As OpenDaylight system tests have grown in number, it has become increasingly time-consuming to debug and understand failures. To help with this, we ask that all system tests meet the expectations below. We expect this will also result in healthier system test code.


Guidelines And Expectations

100% Pass Rate

The goal for each project should be to have 100% passing and reliable system tests.

What Failures Are OK

If a test case does fail, it should be for a well-known reason, ideally tracked by and linked to an upstream bug in an OpenDaylight project. There are keywords to aid in this: in short, add a test case teardown with the keyword "Report_Failure_Due_To_Bug" and pass the bug id to that keyword, as in the sketch below.
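A minimal sketch, assuming the Report_Failure_Due_To_Bug keyword is available from the project's shared Robot libraries; the test case name, test step, and bug id 1234 are hypothetical:

 *** Test Cases ***
 Verify Flow Is Programmed
     [Documentation]    Known to fail; tracked by an upstream OpenDaylight bug.
     # 1234 is a placeholder; pass the id of the real upstream bug.
     [Teardown]    Report_Failure_Due_To_Bug    1234
     Check Flow In Switch

With this teardown in place, the failure is reported against the given bug id rather than surfacing as an unexplained test failure.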

Dealing With Other Failures

Test cases can fail for a variety of reasons: third-party software could be flaky, the infrastructure could be unreliable, or the test case itself may be flawed and need more development. Test cases that fail for any of these reasons should carry the Robot tag "exclude", as shown in the sketch below.
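A minimal sketch of tagging such a test case (the test case name and step are hypothetical):

 *** Test Cases ***
 Restart Mininet Topology
     [Documentation]    Fails intermittently on unreliable infra; excluded until stabilized.
     [Tags]    exclude
     Restart Topology And Verify Connectivity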

Test Case Tagging

Any tags can be used for your own purposes, but the tag "exclude" is reserved for test cases that do not pass 100% of the times they are run (unless they are clearly failing due to an upstream bug, i.e. one filed in Bugzilla). Test case analysis for each release will be done on all test cases that are not tagged with "exclude". Separate jobs can be run as needed that ignore this tag and execute those excluded test cases; this is left up to each project to manage.
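As a sketch of how such jobs might be invoked, assuming a standard Robot Framework command line and a hypothetical suites/ directory:

 # Release analysis view: run everything except test cases tagged "exclude"
 robot --exclude exclude suites/

 # Separate job: run only the test cases tagged "exclude"
 robot --include exclude suites/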