Custom test settings a.k.a. test metadata #4409

Open
tomaspekarovic opened this issue Jul 15, 2022 · 11 comments
Comments

@tomaspekarovic

It would be great if we had a keyword that could set test metadata in the test report. This is not an issue when running tests with robot, but when running tests with pabot, suite metadata is not enough to cover all the information that should be visible at the end of the run.

Not to mention that a rebot merge will mess up suite metadata when it is added dynamically.

I'm testing insurance systems, and when I run tests I get a lot of different URLs where insurances are created. Each test case has its own URL, and I'm looking for a way to expose that URL in the report.
For now I have a custom keyword that logs this URL to the report, but that is not sufficient, as you need to open the test steps, open the keyword and look for the URL there. So it would be really cool if there were test metadata I could use to extract this information and show it in one place in the report.

@pekkaklarck
Member

I assume you mean name-value pairs like with suite metadata? Tests already have tags and documentation, and I don't think adding such metadata to them would add much value. Could you clarify what benefits it would give over using tags and documentation?

What we can consider is enhancing tags so that you could use them as name-value pairs like name: value, name = value, or name-value. We could show such tags differently than normal tags in logs and reports, and possibly add new functionality that operates only on tag names or values. We would basically need to decide what separator to use, or whether to make the separator configurable. I believe having a common separator would be best, and I like name: value the most. Anyway, this should probably get a separate issue.

The main reason we added metadata, not tags, to suites was that we wanted to make it explicit that tags are always related to tests and metadata to suites. We have, however, long since added keyword tags, so nowadays it's not only tests that have tags. Another reason to add metadata for suites was that it's not clear that the Force Tags and Default Tags settings are related to tests. We have recently decided to replace them with the more explicit Test Tags, so that reason isn't so relevant anymore either. I've thus been thinking we could change suites to also have tags, not metadata, in the future. That would require better support for using tags as name-value pairs, as well as a very long deprecation period.
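If tags gained name-value support, tooling could already experiment with splitting them. A minimal Python sketch, assuming the `:` separator (just one of the options mentioned above; nothing has been decided):

```python
def split_tags(tags):
    """Split tags into name-value pairs ("name: value") and plain tags.

    Assumes ':' as the separator, which is only one of the candidates
    discussed above (':' vs '=' vs '-').
    """
    pairs, plain = {}, []
    for tag in tags:
        name, sep, value = tag.partition(":")
        if sep and name.strip() and value.strip():
            pairs[name.strip()] = value.strip()
        else:
            plain.append(tag)
    return pairs, plain


pairs, plain = split_tags(["jira: itbpm-89012", "regression", "os: win"])
```

A log/report generator could then render `pairs` as a table and `plain` as ordinary tags.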

@tomaspekarovic
Author

tomaspekarovic commented Jul 29, 2022

Hi, thanks for the reply.
Tags, for me, are something that groups test cases into some sort of 'folders' (regression, quick, en/pl/ro tests, ...), something I can use to separate or select a subset of tests to run. So from my point of view, storing information should not be part of this (but I can be wrong). It would also make the report much more difficult to read, because if I used tags for this, I would have 200 tags per full suite run.

Documentation is something descriptive about what the test is doing. For example, we use it to document which part of the application is covered by the test, and to give a short description of the test if the name is not descriptive enough.

E.g. PhotoScanner eforms contract professions onepagerForNa productRecommendation needAnalysisByRole can be Documentation.
We also use this to generate a coverage map of test cases against the source code.

As I read your comment, I realised that I could use
Set Test Documentation    aaaaa    append=${TRUE}
to append the information that I want to show in the first place. This information can be something like:
login: xy, password: ab, url_to_debug: https:.... and maybe any other information I need.

But again, from my point of view, tags are not something I want to bend to serve this purpose. From the documentation point of view, it makes more sense.

In the end I just want to highlight some of the information, and key/value pairs look like a good idea. For me it is much more readable to have (in log.html):

Full Name:     my test xy
Documentation:    sdfasfa
Tags:    bg region    quick    regression
URL:     https://...
USER:    admin
PASSWORD:  admin
Start / End / Elapsed:    ....
Status:    FAIL
Message:   sffsdf

than

Full Name:     my test xy
Documentation:    sdfasfa
                  URL:     https://...
                  USER:    admin
                  PASSWORD:  admin
Tags:   bg region    quick    regression
Start / End / Elapsed:    ....
Status:    FAIL
Message:   sffsdf

Extending either tags or documentation with information like URL, USER, PASSWORD etc. is not so readable. But beauty is in the eye of the beholder. To be honest, I didn't realise that I can append information to Documentation :)

But these key/value pairs are something that suite metadata already does, and that's how the idea of test metadata came into my head.
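Until test metadata exists, key/value information appended to the documentation has to be parsed back out by tooling. A minimal pure-Python sketch; the comma-separated `key: value` format is just the convention from the example above, not anything Robot Framework defines, and the example values are made up:

```python
import re


def extract_pairs(doc):
    """Extract 'key: value' pairs from documentation text.

    Assumes the ad-hoc 'key: value, key: value' convention shown
    above; keys are word characters, values run until a comma or
    line break.
    """
    pairs = {}
    for match in re.finditer(r"(\w+)\s*:\s*([^,\n]+)", doc):
        pairs[match.group(1)] = match.group(2).strip()
    return pairs


info = extract_pairs("login: xy, password: ab, url_to_debug: https://example")
```

A result visitor or listener could run this over each test's documentation and collect the pairs into one table.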

@Noordsestern
Member

Hello everyone,

I stumbled across the need for test metadata recently, too. The background is that I need Robot test cases to map to test requirements in a test management tool (HP ALM, for instance). Test management tools use all kinds of metadata in all kinds of styles that sometimes violate the style of Robot tags. For instance, some metadata is case sensitive, but Robotidy by default formats all tags to lowercase, which is good style, imho. I do not want to deactivate that rule for tags only because I mix metadata into the tags, because when I execute all test cases with a certain tag, I don't want to have to remember the casing.

I need test metadata that is not really processed by Robot itself, but that is available through the Robot API to tools like listeners and result visitors. Using tags, which are a vital part of the Robot Framework spec, is for me not suitable, as the metadata adds a lot of noise to the tags.

In a later iteration I may also consider inheritance of metadata, but that is for later. For now, it would help tremendously if test cases supported metadata.

@Snooz82
Member

Snooz82 commented Jun 26, 2024

@pekkaklarck

As discussed in the Roadmap workgroup, here is the conclusion for the new feature:

  1. Test metadata will not be implemented as actual test metadata.
  2. A new feature, Custom Test Settings, will be implemented that can be used as test metadata.

Feature description

It shall be possible to define custom test/task settings via the CLI so that the parser can parse these custom settings properly and IDE plugins like RobotCode can offer code completion.
These custom settings will be transferred to the running model as they are, similar to metadata: line continuations become line breaks, and spaces within a line are preserved (leading and trailing whitespace is stripped).
During execution, variables in their values are resolved, and in the result model existing variables are replaced with their values (tbd whether this happens on start_test or end_test).
In output.xml/json the custom test/task settings are stored, and they are visualised similarly to metadata or other test settings at the test/task level.

As a next step, we could also introduce custom suite settings, which could likewise be configured via the CLI/parser so that these custom settings could also be proposed by code completion. That would be a clear advantage compared to suite metadata.

Custom settings on suite and test level could also be typed to allow different value types.
For example, a custom setting could allow keyword calls, which would make it possible to build features like a "Test Failure Handler": the setting defines a keyword call, and a listener that reads the setting could then call that keyword in case of an error. Typing, e.g. with enums, would also allow the IDE plugins to offer correct completions.

*** Test Cases ***
Test
    [Documentation]    Das ist doku
    [Tags]    1    2    3    4    jira:itbpm-89012    os:win
    [Timeout]    100 sec
    [Setup]    Log    Setup
    [Teardown]    Log    Teardown
    [Error Handler]    Take Screenshot    failure_${TEST_NAME}.png
    [Issue]    \#4409
    Log     test

In a first MVP this could just be logged in log.html like this:
[screenshot: custom settings rendered alongside the other test settings in log.html]

By adding typed custom test/task/suite settings to the parser and to the running and result models, many new features become possible.
It would also enable much better support for custom parsers, so that they can store additional settings in the model and use them in libraries, modifiers and listeners, and, via a pre-Rebot modifier, also differently in the logs.
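The "Test Failure Handler" idea could be prototyped with a listener once such a setting is readable from the model. A minimal sketch, assuming a hypothetical `custom_settings` attribute on the running test object (no such attribute exists in the current API):

```python
class FailureHandlerListener:
    """Listener v3 sketch: when a test fails, run the keyword named in a
    hypothetical [Error Handler] custom setting. `custom_settings` is an
    assumed attribute, not part of the current Robot Framework API."""

    ROBOT_LISTENER_API_VERSION = 3

    def __init__(self, run_keyword):
        # The keyword runner is injected to keep the sketch testable;
        # a real listener would use BuiltIn().run_keyword().
        self.run_keyword = run_keyword

    def end_test(self, data, result):
        handler = getattr(data, "custom_settings", {}).get("Error Handler")
        if handler and not result.passed:
            name, *args = handler
            self.run_keyword(name, *args)


# Simulate a failing test that carries the custom setting.
calls = []
listener = FailureHandlerListener(lambda name, *args: calls.append((name, args)))

class FakeTest:
    custom_settings = {"Error Handler": ["Take Screenshot", "failure.png"]}

class FakeResult:
    passed = False

listener.end_test(FakeTest(), FakeResult())
```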

We (imbus), together with a customer, are interested in contributing this feature and wish to discuss it further.
Thanks @pekkaklarck for reading, and please contact me with a proposal for a meeting.

@d-biehl
Contributor

d-biehl commented Jun 27, 2024

Just a thought for a future extension, and so I don't forget it:
Would it be possible to also provide a public API so that, for example, a pre-run modifier could register custom settings itself? If such a pre-run modifier needs its own settings, you wouldn't have to specify them separately...

However, this would probably be more of a kind of plugin API 🤔
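To make the thought concrete, such a registration API could look roughly like the following. This is purely hypothetical; no such plugin API exists in Robot Framework today, and all names are made up:

```python
class CustomSettingRegistry:
    """Hypothetical plugin API sketch: a pre-run modifier registers the
    custom settings it needs, so they would not have to be specified
    separately on the command line."""

    def __init__(self):
        self._settings = {}

    def register(self, name, value_type=str):
        # Registering the same setting twice is treated as an error to
        # surface conflicts between plugins early.
        if name in self._settings:
            raise ValueError(f"Custom setting '{name}' already registered.")
        self._settings[name] = value_type

    def is_registered(self, name):
        return name in self._settings


registry = CustomSettingRegistry()
registry.register("Issue")
registry.register("Error Handler", value_type=list)
```

The parser would then consult the registry (in addition to any CLI configuration) to decide which bracketed settings are valid.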

@Snooz82 Snooz82 modified the milestones: v8.0, v7.1 Jun 27, 2024
@pekkaklarck
Member

Very quick comments:

  • The overall functionality described above by @Snooz82 sounds good.
  • Failure handlers would be a good feature in the core and should be implemented separately.
  • Pre-run modifiers, listeners, etc. could be allowed to modify metadata freely, including adding new entries. The CLI setting would be needed by the parser to recognize valid entries; otherwise typos like [Setpu] would create metadata instead of an error being reported.
  • This functionality would be more convenient to use if Robot supported configuration files. That's a separate topic, though, and requires a separate issue.
  • The plan has been to keep RF 7.1 pretty small and have it out in July. The scope and timeline have been agreed with the core team and the board, and adding a pretty big feature like this to the scope has risks and requires new discussion.
  • I personally believe we should continue with RF 7.1 as planned, but if RF 8 is too far, we can consider RF 7.2 where this would be the key feature.

@Snooz82
Member

Snooz82 commented Jun 27, 2024

I do agree!
👍

Let's see when it can be implemented, and if it is after 7.1, we can do a 7.2.

Sounds all good.

@pekkaklarck
Member

I moved this to the RF 7.2 scope to keep the RF 7.1 scope smallish, with the idea of getting it released in July or early August. If you think you can implement this before that, we can move it back to the RF 7.1 scope. Before that we need to agree on the remaining open design decisions, though. We can have a call or a Slack discussion if needed.

@d-biehl
Contributor

d-biehl commented Jul 2, 2024

I have a meeting tomorrow with the customer who wants to implement this with us. We want to discuss some details, look at where we need to make code adjustments, and do some planning.
I will ask Daniel (that's also his name) when he is available. What would be your preferred time, or when are you available?

@pekkaklarck
Member

I'm available tomorrow the whole day. Then again next week, but when exactly depends on the schedule of my kids' football cup. The week after ought to be possible as well. Please suggest times that are good for all the Daniels and I'll try to pick one that's good for me too.

@pekkaklarck pekkaklarck changed the title Test Metadata Custom test settings a.k.a. test metadata Sep 9, 2024
@pekkaklarck
Member

pekkaklarck commented Sep 9, 2024

We had an RF 7.2 planning session today where we agreed to keep this in the RF 7.2 scope and also discussed the design a bit.

Here are some design decisions based on the various discussions. As you can see, some design decisions are still open.

  1. The TestCase object gets a new metadata (or possibly settings) attribute containing the custom metadata/settings as a dictionary. Most likely it should be the same Metadata object TestSuite uses.
  2. Items in the metadata dictionary are written to the log file the same way as suite level metadata.
  3. How to show them in the report hasn't been discussed yet. The options are basically a separate column for each metadata entry, or one column where all metadata entries are shown. The latter is considerably easier to implement, and I believe we should at least start with it.
  4. Tests can use [Metadata] Name Value syntax to add items to the metadata dictionary. This syntax isn't that convenient, but it is consistent with suite level Metadata and also works without any extra configuration.
  5. A new command line option like --allowed-metadata or --test-settings is added for configuring the metadata/settings the parser should support. This allows specifying metadata using the more convenient [Name] Value syntax, at the cost of extra configuration. The actual setting name and its semantics need to be decided.
  6. For consistency and convenience, we add same configuration support also for suite level metadata. This allows replacing current Metadata Example Value with just Example Value. The option name and semantics needs to be decided along with test level option. This enhancement could get a separate issue, but I believe including it as part of this issue is fine at least for now.
  7. Pre-run modifiers, listeners, and other such tools can modify TestCase.metadata and TestSuite.metadata freely regardless on possible configuration from the command line. That configuration is only for the parser.
  8. We need to decide how to handle conflicts with built-in settings. We probably don't want users to be able to override e.g. [Setup].
  9. The syntax used with the metadata values needs to be discussed. We use Robot's custom formatting syntax with suite level metadata and cannot change it, at least not without deprecation. For consistency, I believe we should use the same syntax also with test metadata. We could consider supporting other formats as well, but that requires a separate issue.
  10. With tags we support --tag-stat-link and --tag-doc for adding custom links and documentation, respectively. Something similar, especially links, would likely be useful with test metadata as well. These kinds of enhancements should also work with suite metadata and require separate issues.
  11. Also other features that utilize metadata (e.g. selecting tests based on that) require separate issues.
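Several of the points above can be illustrated together. Suite-level Metadata in Robot Framework is a normalized dictionary (keys are compared ignoring case and spaces), so the proposed test-level dictionary would presumably behave the same. A simplified pure-Python imitation that also sketches the reserved-name check from point 8; the RESERVED set and the exact behaviour are illustrative, not decided:

```python
# Built-in test settings that custom settings must not override (point 8;
# this particular set is an assumption, not a decided list).
RESERVED = {"setup", "teardown", "template", "timeout", "tags", "documentation"}


def normalize(name):
    # Simplified version of Robot's key normalization:
    # ignore case, spaces and (in this sketch) underscores.
    return name.lower().replace(" ", "").replace("_", "")


class TestMetadata(dict):
    """Sketch of the proposed TestCase.metadata dictionary: normalized
    keys plus a guard against overriding built-in settings. The real
    implementation would most likely reuse the suite-level Metadata
    class; this is purely illustrative."""

    def __setitem__(self, name, value):
        if normalize(name) in RESERVED:
            raise ValueError(f"Cannot override built-in setting '{name}'.")
        super().__setitem__(normalize(name), value)

    def __getitem__(self, name):
        return super().__getitem__(normalize(name))


md = TestMetadata()
md["Jira ID"] = "ITBPM-89012"
```

With normalized keys, `md["jira id"]` and `md["JIRA ID"]` refer to the same entry, while an attempt to set e.g. `md["Setup"]` is rejected.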
