diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
deleted file mode 100644
index 1829940..0000000
--- a/.github/workflows/ci.yml
+++ /dev/null
@@ -1,16 +0,0 @@
-name: publish_documentation
-
-on:
-  push:
-    branches:
-      - main
-jobs:
-  deploy:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v2
-      - uses: actions/setup-python@v2
-        with:
-          python-version: 3.x
-      - run: pip install mkdocs-material
-      - run: mkdocs gh-deploy --force
\ No newline at end of file
diff --git a/.nojekyll b/.nojekyll
new file mode 100644
index 0000000..e69de29
diff --git a/404.html b/404.html
new file mode 100644
index 0000000..86159c2
--- /dev/null
+++ b/404.html
@@ -0,0 +1,767 @@
Looking at how other companies have implemented UI/Snapshot testing may help you choose the proper tools for your project and save time.
We would be really grateful if you could contribute and share your experience directly on this page to help other people.
UI testing
Write: Kaspresso (see the test sketch after this list)
Who writes: Android Engineers
Runner: Marathon, locally and on the CI
Where: Headless emulators in Docker (Avito Image)
How often: Every 4 hours and before each release
Network: Mocked, by a custom OkReplay
Test report: Allure
Other: We use a custom OkReplay to achieve request indexing and to keep the same request timing as during recording. We're going to open-source this solution.
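Several of the setups on this page write their UI tests with Kaspresso. For readers unfamiliar with it, below is a minimal sketch of such a test; MainActivity, the screen objects, and the view ids are hypothetical placeholders, not taken from any project described here.

```kotlin
import androidx.test.ext.junit.rules.ActivityScenarioRule
import com.kaspersky.kaspresso.screens.KScreen
import com.kaspersky.kaspresso.testcases.api.testcase.TestCase
import io.github.kakaocup.kakao.text.KButton
import io.github.kakaocup.kakao.text.KTextView
import org.junit.Rule
import org.junit.Test

// Page object for a hypothetical main screen.
object MainScreen : KScreen<MainScreen>() {
    override val layoutId: Int? = null
    override val viewClass: Class<*>? = null

    val openDetailsButton = KButton { withId(R.id.open_details_button) } // hypothetical id
}

// Page object for a hypothetical details screen.
object DetailsScreen : KScreen<DetailsScreen>() {
    override val layoutId: Int? = null
    override val viewClass: Class<*>? = null

    val title = KTextView { withId(R.id.details_title) } // hypothetical id
}

class OpenDetailsTest : TestCase() {

    @get:Rule
    val activityRule = ActivityScenarioRule(MainActivity::class.java) // hypothetical activity

    @Test
    fun opensDetailsScreen() = run {
        step("Open the details screen") {
            MainScreen {
                openDetailsButton {
                    isVisible()
                    click()
                }
            }
        }
        step("Check the details title") {
            DetailsScreen {
                title.hasText("Details")
            }
        }
    }
}
```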
Snapshot testing
Tools: Screenshot tests for Android (see the sketch after this list)
How often to run: Each commit to the Design System module
Other: We write snapshot tests for each component in the design system in all possible states; we don't write them for the screens that are implemented using those components.
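As a rough illustration of per-component snapshot testing with Facebook's screenshot tests for Android, a test could look like the sketch below; the button_component layout and its name are hypothetical, and recording versus verifying is controlled by the library's Gradle tasks.

```kotlin
import android.view.LayoutInflater
import androidx.test.platform.app.InstrumentationRegistry
import com.facebook.testing.screenshot.Screenshot
import com.facebook.testing.screenshot.ViewHelpers
import org.junit.Test

class ButtonComponentScreenshotTest {

    @Test
    fun defaultState() {
        val context = InstrumentationRegistry.getInstrumentation().targetContext
        // R.layout.button_component is a hypothetical design-system layout.
        val view = LayoutInflater.from(context).inflate(R.layout.button_component, null)

        // Measure and lay out the view at a fixed width before capturing it.
        ViewHelpers.setupView(view)
            .setExactWidthDp(300)
            .layout()

        // Record or verify the screenshot, depending on the Gradle task being run.
        Screenshot.snap(view)
            .setName("button_component_default")
            .record()
    }
}
```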
UI testing
Write: Kaspresso
Who writes: QA and developers
Runner: AndroidJUnitRunner locally; on CI we use Marathon plus custom tooling on top
Where: On CI: emulators (we use a custom Docker container) and real devices (a custom integration with STF)
How often: Each pull request (functional tests), before the release (e2e tests) and nightly (e2e tests)
Test report: On CI we use a custom internal solution
Snapshot testing
Tools: Kaspresso
How often: Many times per new feature, to check new strings and translations
UI testing
Write: Kaspresso
Who writes: Developers
Runner: AndroidJUnitRunner with Android Orchestrator
Where: On CI: real devices
How often: At noon and at night
Test report: JUnit4
Snapshot testing
Tools: Shot (see the sketch after this list)
Who writes: Developers
Runner: TestButler + Composer for test sharding (Shot supports them out of the box)
Where: On CI: emulators
How often: On every PR
Test report: Shot report, which includes image diffs when tests fail
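For comparison, a minimal Shot screenshot test might look like the sketch below; MainActivity is a hypothetical activity under test, and whether the screenshot is recorded or verified is decided by Shot's Gradle tasks.

```kotlin
import androidx.test.ext.junit.rules.ActivityScenarioRule
import com.karumi.shot.ScreenshotTest
import org.junit.Rule
import org.junit.Test

class MainActivityScreenshotTest : ScreenshotTest {

    // MainActivity is a hypothetical activity under test.
    @get:Rule
    val activityRule = ActivityScenarioRule(MainActivity::class.java)

    @Test
    fun rendersMainScreen() {
        // Grab the launched activity instance from the scenario.
        lateinit var activity: MainActivity
        activityRule.scenario.onActivity { activity = it }

        // Shot compares the capture with the recorded baseline
        // (or records a new one when the record Gradle task is used).
        compareScreenshot(activity)
    }
}
```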
UI testing
Write: Kaspresso wrapped with a custom DSL for creating test data
Who writes: QA with the support of Android Engineers
Runner: Marathon on the CI
Where: Headless emulators in Docker (custom image) on k8s
How often: Every night on every Portfolio branch (a protected branch for each business feature) and on develop; every PR to develop
Test data: End-to-end testing with test stands
Test report: Allure
Test stability monitoring: A custom tool that visualizes the success rate of each test across CI runs; Grafana for common graphs
UI testing
Write: Kaspresso
Who: QA and developers
Runner: Delivery Club fork of Avito Runner, Argo Workflows
Where: Redroid AiC, Redroid in DockerHub, Fork of Avito Emulator, Fork of Avito Emulator in DockerHub
How often: Each commit for the Courier App and the Consumer App; before regression testing
Network: MockWebServer (see the sketch after this list)
Test report: Kaspresso Allure integration + Avito Runner integration
Other: We run Marathon in the cloud
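A couple of the teams above mock the network layer with OkHttp's MockWebServer. A minimal sketch of that approach is shown below; the /profile endpoint and the response body are made up for illustration, and in a real UI test the app's HTTP client would be pointed at server.url("/").

```kotlin
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.mockwebserver.MockResponse
import okhttp3.mockwebserver.MockWebServer
import org.junit.After
import org.junit.Assert.assertEquals
import org.junit.Before
import org.junit.Test

class ProfileApiMockTest {

    private val server = MockWebServer()
    private val client = OkHttpClient()

    @Before
    fun setUp() {
        server.start()
        // Queue a canned response that is returned instead of a real backend reply.
        server.enqueue(
            MockResponse()
                .setResponseCode(200)
                .setBody("""{"id": 1, "name": "Test User"}""")
        )
    }

    @After
    fun tearDown() {
        server.shutdown()
    }

    @Test
    fun returnsMockedProfile() {
        val request = Request.Builder()
            .url(server.url("/profile")) // hypothetical endpoint
            .build()

        val response = client.newCall(request).execute()

        assertEquals(200, response.code)
    }
}
```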
UI testing & Screenshot testing
Write: Espresso, screenshot tests from Facebook
Who: Android Developers and QA Engineers
Runner: Custom runner based on Android Orchestrator
Where: Emulators
How often: Each commit, nightly and before each release
Network: MockWebServer
Test report: Allure
Test monitoring: Collecting Allure info in a Postgres DB and displaying it in DataLens: finding flaky packages, common errors, alerts, etc.
UI testing
Write: Kaspresso
Who: Android Developers and QA Engineers
Runner: Marathon on the CI
Where: Emulators
How often: On every PR and once per day on the main branch
Network: Custom mock API server
Test report: Allure
Test monitoring: Allure reports and Grafana dashboards to monitor stability and resource usage
UI testing
Write: Kaspresso
Who writes: QA Automation and QA Engineers
Runner: Marathon on the CI
Where: Emulators
How often: Each merge request
Test report: Allure
Network: Custom mock system
UI testing
Write: Espresso, UIAutomator, Ultron, compose ui-test (a Compose test sketch follows this list)
Who writes: QA Automation Engineers
Runner: AllureAndroidJUnitRunner
Where: On CI: emulators
How often: Nightly and before each release
Test report: Allure TestOps
Test monitoring: Allure TestOps
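Since the last setup also relies on compose ui-test, here is a minimal sketch of a Compose UI test; the GreetingScreen composable is a hypothetical example, not code from that project.

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.test.assertIsDisplayed
import androidx.compose.ui.test.junit4.createComposeRule
import androidx.compose.ui.test.onNodeWithText
import androidx.compose.ui.test.performClick
import org.junit.Rule
import org.junit.Test

// Hypothetical composable under test.
@Composable
fun GreetingScreen() {
    var greeted by remember { mutableStateOf(false) }
    Column {
        Button(onClick = { greeted = true }) { Text("Greet") }
        if (greeted) Text("Hello!")
    }
}

class GreetingScreenTest {

    @get:Rule
    val composeTestRule = createComposeRule()

    @Test
    fun showsGreetingAfterClick() {
        // Set the composable content directly, without a host activity.
        composeTestRule.setContent { GreetingScreen() }

        composeTestRule.onNodeWithText("Greet").performClick()
        composeTestRule.onNodeWithText("Hello!").assertIsDisplayed()
    }
}
```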