
"Stored in Table" customer reporter #535


Closed
Shoelace opened this issue Dec 13, 2017 · 9 comments

Comments

@Shoelace
Member

The main page documentation has this table:

Test Output
  DBMS_OUTPUT        Yes                        Yes (clean formatting)
  File               Yes (to db server only)    Yes (on client side)
  Stored in Table    Yes                        No (can be added as custom reporter)

Is there an existing implementation of a "Stored in Table" custom reporter?
Has anyone made one (or even started one)?

Or do I need to make my own?

@jgebal
Member

jgebal commented Dec 13, 2017

You would need your own reporter.
If you want to spend time creating one, we could add it to the framework.
A guide for creating reporters could probably be helpful too.
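For orientation, a custom reporter in utPLSQL v3 is an Oracle object type that extends one of the framework's reporter base types and overrides the event callbacks it needs. A minimal sketch follows; the parent type name `ut_output_reporter_base`, the constructor shape, and the `after_calling_run` signature are assumptions modelled on the framework's shipped reporters, so copy an existing reporter for the authoritative definitions:

```sql
-- Hypothetical skeleton of a custom reporter type.
-- The parent type name and method signatures are assumptions; start
-- from one of the framework's shipped reporters for the real shapes.
create or replace type table_reporter under ut_output_reporter_base (
  constructor function table_reporter(
    self in out nocopy table_reporter
  ) return self as result,

  -- Invoked once after the whole run finishes; a_run carries the
  -- full tree of executed suites and tests.
  overriding member procedure after_calling_run(
    self in out nocopy table_reporter,
    a_run in ut_run
  )
)
/
```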

@Shoelace
Member Author

Okay, so I've started creating a table_reporter. I started by copying one of the existing ones (I think the xunit reporter).
It works by outputting all the data at the end in the after_calling_run method.

This was working fine until I tried to save the expectation results and found there is only
ut_test.failed_expectations, with no equivalent for passed/skipped expectations.

Is this correct, or am I missing something? Can I get a list of passed/skipped expectations in the after_run event, or do I need to rewrite it and log results incrementally?
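The approach described above, collecting everything once at the end in after_calling_run, might look roughly like this. The results table, its columns, and the traversal of the run tree are all invented for illustration; at this point in the thread only ut_test.failed_expectations was available:

```sql
-- Hypothetical type body fragment. The test_results table and the way
-- the run tree is walked are invented; only failed expectations are
-- available on ut_test here. (Constructor implementation omitted.)
create or replace type body table_reporter as
  overriding member procedure after_calling_run(
    self in out nocopy table_reporter,
    a_run in ut_run
  ) is
  begin
    -- Walk a_run down to the individual ut_test objects (traversal
    -- omitted), then for each test persist its failed expectations:
    --
    --   for i in 1 .. l_test.failed_expectations.count loop
    --     insert into test_results (test_name, status, message)
    --     values (l_test.name, 'FAILED',
    --             l_test.failed_expectations(i).message);
    --   end loop;
    null;
  end;
end;
/
```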

@jgebal
Member

jgebal commented Dec 29, 2017

Storing results in a table does not provide much value on its own, as one needs to build something on top of it to make use of the data.
We deliberately decided to make the framework stateless, so developers can say "I don't care if it's the same database, as long as it behaves the same way."
Imagine running your development in the cloud, where you get one of 1000 databases as your development database.
You probably wouldn't go looking for your test results in one of those 1000 DBs.
You'd need something outside those 1000 DBs to collect the test results and give a full overview of all executions.
This is where CI/CD tools come into play.

@Shoelace
Member Author

Shoelace commented Jan 1, 2018

My particular use case is to integrate with other tooling that already uses tables created as part of a home-grown unit-test framework.

I hope it's more of an interim step before those tools are updated as well.

@jgebal
Member

jgebal commented Jan 2, 2018

OK, got it.
As for failed_expectations: that is my doing in #500. I removed the successful expectations some time ago to minimize the memory footprint for large suites, since storing each expectation result can be costly.

It seems like we could actually use the full set of expectation results as well as the list of failed expectations only.

This will mean a bit of refactoring and moving responsibilities between objects.

@jgebal
Member

jgebal commented Jan 2, 2018

@Shoelace:
I'm adding all_expectations as an attribute of the ut_test class.
The existing failed_expectations will remain in place.
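With such an attribute in place, a reporter could persist every expectation rather than only the failures. A hedged sketch; the results table and the attribute names on the expectation result objects (status, message) are assumptions, not confirmed API:

```sql
-- Hypothetical fragment: l_test is a ut_test instance reached while
-- walking the run tree; the test_results table and the status/message
-- attributes on expectation results are invented for illustration.
for i in 1 .. l_test.all_expectations.count loop
  insert into test_results (test_name, status, message)
  values (l_test.name,
          l_test.all_expectations(i).status,
          l_test.all_expectations(i).message);
end loop;
```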

@jgebal
Member

jgebal commented Jan 2, 2018

You can see changes in #545.
That should get you sorted.

@pesse
Member

pesse commented Feb 28, 2018

With #589 we will be able to have custom reporters. I think we can close this and #541 once the PR is merged, right?

@pesse
Member

pesse commented Mar 8, 2018

Possible with custom reporters.
Closed by #589

@pesse pesse closed this as completed Mar 8, 2018