chore: fix concurrent CommitQuota transactions for unrelated users/orgs #15261

Merged 19 commits on Nov 1, 2024

Changes from 1 commit:
update tests
Emyrk committed Oct 28, 2024
commit 746203d89123cd8e6823f0a9372b5cf1f69e93b1
3 changes: 3 additions & 0 deletions coderd/database/querier.go

Some generated files are not rendered by default.

10 changes: 7 additions & 3 deletions coderd/database/queries.sql.go

Some generated files are not rendered by default.

11 changes: 7 additions & 4 deletions coderd/database/queries/quotas.sql
@@ -23,11 +23,14 @@ SELECT
wb.daily_cost
FROM
workspace_builds wb
-- This INNER JOIN prevents a
-- This INNER JOIN prevents a seq scan of the workspace_builds table.
-- Limit the rows to the absolute minimum required, which is all workspaces
-- in a given organization for a given user.
INNER JOIN
workspaces on wb.workspace_id = workspaces.id

Contributor:
Ok, so in the query plan you have now, it's doing this nested loop where it finds the workspace IDs, then for each workspace ID it's doing this bitmap query:

                          ->  Bitmap Heap Scan on workspace_builds wb  (cost=4.16..9.50 rows=2 width=28)
                                Recheck Cond: (workspace_id = workspaces_1.id)
                                ->  Bitmap Index Scan on workspace_builds_workspace_id_build_number_key  (cost=0.00..4.16 rows=2 width=0)
                                      Index Cond: (workspace_id = workspaces_1.id)

That is, it first scans the index to find the pages to load, then scans the pages with the Recheck Cond.

Do you know whether this results in page locks for the transaction, or tuple locks (I'm assuming these are row-level locks)? Page locks have a greater likelihood of catching unrelated transactions.

And, some suggestions:

  1. Can we just move the daily_cost directly to the workspace table? We only ever do computations with the most recent cost. If we really needed to keep it on the workspace_build for compatibility or querying history, we could put it in both places, and have this query only look at workspaces. That would remove a join as well. If we built an index of (org_id, owner_id) then we'd also be very unlikely to ever need to Seq scan the workspaces table for quotas.

  2. If we don't want to go that far, we could add an index ON workspace_builds (workspace_id, build_number, daily_cost) that would allow the quota query to compute the results right from the index, so it'd never have to do a bitmap scan and read whole pages.
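
A rough sketch of what those two indexes might look like (the index names here are illustrative, not necessarily what a real migration would use):

    -- Covering index: the quota query can read workspace_id, build_number, and
    -- daily_cost straight from the index instead of doing bitmap heap scans.
    CREATE INDEX workspace_builds_workspace_id_build_number_daily_cost_idx
        ON workspace_builds (workspace_id, build_number, daily_cost);

    -- Lets quota lookups hit workspaces by (organization_id, owner_id)
    -- instead of falling back to a seq scan.
    CREATE INDEX workspaces_organization_id_owner_id_idx
        ON workspaces (organization_id, owner_id);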

Member Author:

The particular locks acquired during execution of a query will depend on the plan used by the query, and multiple finer-grained locks (e.g., tuple locks) may be combined into fewer coarser-grained locks (e.g., page locks) during the course of the transaction to prevent exhaustion of the memory used to track the locks.

I was under the impression it was row-level locks, but I admit that was a result of my experiments with very few rows. The docs seem to imply it could take out larger locks.

Can we just move the daily_cost directly to the workspace table? We only ever do computations with the most recent cost. If we really needed to keep it on the workspace_build for compatibility or querying history, we could put it in both places, and have this query only look at workspaces. That would remove a join as well. If we built an index of (org_id, owner_id) then we'd also be very unlikely to ever need to Seq scan the workspaces table for quotas.

Originally there was a suspicion we were getting this error from different transactions interfering with CommitQuota. But the error reported can only occur between 2 serializable transactions, meaning it is CommitQuota interfering with itself.

I thought about moving daily_cost to its own table entirely, but I'm not sure that would actually improve much.

I think this index + sorting by build number would be a large win: workspace_builds (workspace_id, build_number, daily_cost)

As I look at this query more, I don't see why we need to inner join workspaces again. All the information can be pulled from the latest_build subquery.

If we add the index (org_id, owner_id) on workspaces as well, that is another win here.

According to the quote I pulled, though, is the goal just to reduce the number of rows touched so the lock doesn't have to escalate to a coarser page-level lock? The text implies the behavior depends on memory availability.

Contributor:
I agree that we don't need to join to workspaces twice.

And yeah, my understanding is that with page locks, updating quota for an unrelated user could cause a serialization error if the builds happen to share a page. So it's undesirable to use page locks if we could get away with finer-grained locking.

I realize that postgres can use page locks for memory reasons, and maybe there isn't anything we can do about that, but I'm also wondering whether it automatically uses page locks when it does a bitmap query, rather than the tuple locks it would use if the query could be served from the index.

WHERE
workspaces.owner_id = @owner_id
workspaces.owner_id = @owner_id AND
workspaces.organization_id = @organization_id
ORDER BY
wb.workspace_id,
wb.created_at DESC

Contributor:
It might be faster to use wb.build_number, since the query planner has access to the workspace_builds_workspace_id_build_number_key index. That could be why it's having to do this sort:

              ->  Sort  (cost=22.66..22.66 rows=2 width=44)
                    Sort Key: wb.workspace_id, wb.created_at DESC

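A sketch of what that alternative might look like, assuming the query only wants the newest build per workspace (the real query in quotas.sql may be shaped differently):

    -- Hypothetical shape: pick the latest build per workspace by build_number,
    -- so the planner can walk workspace_builds_workspace_id_build_number_key
    -- instead of sorting on created_at.
    SELECT DISTINCT ON (wb.workspace_id)
        wb.workspace_id,
        wb.daily_cost
    FROM workspace_builds wb
    INNER JOIN workspaces ON wb.workspace_id = workspaces.id
    WHERE workspaces.owner_id = @owner_id
      AND workspaces.organization_id = @organization_id
    ORDER BY
        wb.workspace_id,
        wb.build_number DESC;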

Member Author:

I wondered about that too. I had a specific goal here and did not want to start mutating the query too much, since that would solicit feedback on a bunch of different things.

Can I throw this up in another PR?

Contributor:

yeah, that works

@@ -38,8 +41,8 @@ FROM
workspaces

Contributor:

Looks like it's still doing a Seq scan on workspaces

                          ->  Seq Scan on workspaces workspaces_1  (cost=0.00..13.12 rows=1 width=16)
                                Filter: (owner_id = 'b4ed5c8a-725e-482d-b5a7-368a1dd7cd77'::uuid)

Doesn't that mean any update to the workspaces table will still cause a serialization error?

However! I think the query planner takes into account cardinality of the columns relative to the rows in the table, so it might make a different plan in a "real" deployment.

Member Author:

Yes, we still do a seq scan on the workspaces.

To guarantee true serializability PostgreSQL uses predicate locking, which means that it keeps locks which allow it to determine when a write would have had an impact on the result of a previous read from a concurrent transaction, had it run first.

The serializable lock only affects tables (rows) that are written to. The CommitQuota transaction only writes to workspace_builds. I have a test case that writes to the workspaces table, and it does not cause a serialization error.

Contributor (@spikecurtis, Oct 31, 2024):

How do you square this with the observation that a Seq Scan on workspace_builds caused a serialization error? I think the PG docs say Seq Scan locks the entire relation.

Maybe it depends on the number of workspaces and the assumed cardinality.

Member Author:

Experimentally, I am running this:

  1. Begin TX
  2. GetQuotaConsumedForUser(user, org)
  3. UpdateWorkspaceBuildCostByID(workspace, 10)

The lock mode that is important here is SIReadLock.

Predicate locks in PostgreSQL, like in most other database systems, are based on data actually accessed by a transaction. These will show up in the pg_locks system view with a mode of SIReadLock.
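
For reference, something along these lines (a sketch over the standard pg_locks columns; the raw dumps below include every lock mode, not just SIReadLock) surfaces the predicate locks while the transactions are still open:

    -- Sketch: list SIRead (predicate) locks held by in-flight transactions.
    SELECT pid,
           locktype,
           relation::regclass AS relation,
           page,
           tuple,
           mode,
           granted
    FROM pg_locks
    WHERE mode = 'SIReadLock'
    ORDER BY pid, relation;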

What I see my change doing: GetQuotaConsumedForUser used to take these locks:

992-<nil> [granted] workspace_builds/relation/SIReadLock: 
992-<nil> [granted] workspace_builds_pkey/page/SIReadLock: page=1
992-<nil> [granted] workspaces/relation/SIReadLock: 
992-<nil> [granted] workspaces_owner_id_lower_idx/page/SIReadLock: page=1

And now it takes:

929-<nil> [granted] workspace_builds_pkey/page/SIReadLock: page=1
929-<nil> [granted] workspace_builds_workspace_id_build_number_key/page/SIReadLock: page=1
929-<nil> [granted] workspaces/relation/SIReadLock: 
929-<nil> [granted] workspaces_owner_id_lower_idx/page/SIReadLock: page=1

So we went from a full relation-level SIReadLock on the workspace_builds table to page-level locks on the pkey index (which was there before too) and the build-number index.

The workspaces table still has the same locks, and would be improved by your index suggestions. For this specific error mode, we insert into the workspace_builds table though, so I was honestly not even looking at other tables at this point.

With my query changes (raw pg_locks output):
930-<nil> [granted] <nil>/virtualxid/ExclusiveLock: waiting to acquire virtual tx id lock
929- 3848 [granted] <nil>/transactionid/ExclusiveLock: ???
929-<nil> [granted] <nil>/virtualxid/ExclusiveLock: waiting to acquire virtual tx id lock
930-<nil> [granted] pg_locks/relation/AccessShareLock: 
929-<nil> [granted] workspace_builds/relation/AccessShareLock: 
929-<nil> [granted] workspace_builds/relation/RowExclusiveLock: 
929-<nil> [granted] workspace_builds_job_id_key/relation/AccessShareLock: 
929-<nil> [granted] workspace_builds_job_id_key/relation/RowExclusiveLock: 
929-<nil> [granted] workspace_builds_pkey/relation/RowExclusiveLock: 
929-<nil> [granted] workspace_builds_pkey/relation/AccessShareLock: 
929-<nil> [granted] workspace_builds_pkey/page/SIReadLock: page=1
929-<nil> [granted] workspace_builds_workspace_id_build_number_key/relation/RowExclusiveLock: 
929-<nil> [granted] workspace_builds_workspace_id_build_number_key/relation/AccessShareLock: 
929-<nil> [granted] workspace_builds_workspace_id_build_number_key/page/SIReadLock: page=1
929-<nil> [granted] workspaces/relation/AccessShareLock: 
929-<nil> [granted] workspaces/relation/SIReadLock: 
929-<nil> [granted] workspaces_owner_id_lower_idx/relation/AccessShareLock: 
929-<nil> [granted] workspaces_owner_id_lower_idx/page/SIReadLock: page=1
929-<nil> [granted] workspaces_pkey/relation/AccessShareLock: 

Before my changes (raw pg_locks output):
993-<nil> [granted] <nil>/virtualxid/ExclusiveLock: waiting to acquire virtual tx id lock
992- 4039 [granted] <nil>/transactionid/ExclusiveLock: ???
992-<nil> [granted] <nil>/virtualxid/ExclusiveLock: waiting to acquire virtual tx id lock
993-<nil> [granted] pg_locks/relation/AccessShareLock: 
992-<nil> [granted] workspace_builds/relation/AccessShareLock: 
992-<nil> [granted] workspace_builds/relation/SIReadLock: 
992-<nil> [granted] workspace_builds/relation/RowExclusiveLock: 
992-<nil> [granted] workspace_builds_job_id_key/relation/AccessShareLock: 
992-<nil> [granted] workspace_builds_job_id_key/relation/RowExclusiveLock: 
992-<nil> [granted] workspace_builds_pkey/relation/RowExclusiveLock: 
992-<nil> [granted] workspace_builds_pkey/relation/AccessShareLock: 
992-<nil> [granted] workspace_builds_pkey/page/SIReadLock: page=1
992-<nil> [granted] workspace_builds_workspace_id_build_number_key/relation/RowExclusiveLock: 
992-<nil> [granted] workspace_builds_workspace_id_build_number_key/relation/AccessShareLock: 
992-<nil> [granted] workspaces/relation/AccessShareLock: 
992-<nil> [granted] workspaces/relation/SIReadLock: 
992-<nil> [granted] workspaces_owner_id_lower_idx/relation/AccessShareLock: 
992-<nil> [granted] workspaces_owner_id_lower_idx/page/SIReadLock: page=1
992-<nil> [granted] workspaces_pkey/relation/AccessShareLock: 

Disclaimer: I recognize the docs suggest these lock patterns can change with table size and memory availability. This is a very small dataset.

Contributor:

Interesting that serializable transactions only interfere with other serializable transactions --- OK for now, but I feel like this puts us on shaky ground.

Today we insert new builds with repeatable read isolation, but do not set the daily_cost. If we ever started setting the cost but didn't up the isolation to serializable, we'd start seeing quota anomalies.

Or, if we ever introduce a new update query on workspaces that runs at serializable isolation, it will start interfering with every quota transaction.
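
To make the contrast concrete, here is a sketch of the two isolation levels being discussed (in the codebase these are presumably set via the Go transaction options rather than inline SQL):

    -- Quota commit path: serializable, so it participates in SSI predicate
    -- locking and can fail with "read/write dependencies among transactions".
    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    -- ... GetQuotaConsumedForUser + UpdateWorkspaceBuildCostByID ...
    COMMIT;

    -- Build insert path today: repeatable read, so it never raises that error
    -- against the quota transaction, but it also gets no SSI protection.
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    -- ... insert the new workspace_build row ...
    COMMIT;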

Not necessarily for this PR, but I think we should still aim to remove the Seq Scan on workspaces.

Member Author:

Or, if we ever introduce a new update query on workspaces that runs at serializable isolation, it will start interfering with every quota transaction.

Yes.

Not necessarily for this PR, but I think we should still aim to remove the Seq Scan on workspaces.

I agree. I will make a follow-up PR (or issue if it takes too long) to address all these extra concerns.

In this PR I did disable retries, so a unit test is more likely to fail if we hit this (would be a valid flake).


I do not see a way to implement forward-thinking protections if someone adds a new serializable TX. So you are correct, a change in the future can break things further, especially because of the seq scan on the workspaces table.

Member Author:

Interesting that serializable transactions only interfere with other serializable transactions --- OK for now, but I feel like this puts us on shaky ground.

Just to clarify, I specifically meant the error pq: could not serialize access due to read/write dependencies among transactions, which was reported.

There is another error you can get that is not between 2 serializable transactions.

pq: could not serialize access due to concurrent update

JOIN latest_builds ON
latest_builds.workspace_id = workspaces.id
WHERE NOT
deleted AND
WHERE
NOT deleted AND
workspaces.owner_id = @owner_id AND
workspaces.organization_id = @organization_id
;
138 changes: 127 additions & 11 deletions enterprise/coderd/workspacequota_test.go
@@ -356,6 +356,8 @@
// - BeginTX -> Bump! -> GetQuota -> GetAllowance -> UpdateCost -> EndTx
// - BeginTX -> GetQuota -> GetAllowance -> UpdateCost -> Bump! -> EndTx
t.Run("UpdateBuildDeadline", func(t *testing.T) {
t.Skip("Expected to fail. As long as quota & deadline are on the same " +
	"table and affect the same row, this will likely always fail.")
// +------------------------------+------------------+
// | Begin Tx | |
// +------------------------------+------------------+
@@ -371,7 +373,7 @@
// +------------------------------+------------------+
// pq: could not serialize access due to concurrent update
ctx := testutil.Context(t, testutil.WaitLong)
ctx = dbauthz.AsSystemRestricted(ctx)

Check failure on line 376 in enterprise/coderd/workspacequota_test.go (GitHub Actions / lint): ruleguard: Using 'AsSystemRestricted' is dangerous and should be accompanied by a comment explaining why it's ok and a nolint. (gocritic)

myWorkspace := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: org.Org.ID,
@@ -424,7 +426,7 @@
// +------------------------------+------------------+
// Works!
ctx := testutil.Context(t, testutil.WaitLong)
ctx = dbauthz.AsSystemRestricted(ctx)

Check failure on line 429 in enterprise/coderd/workspacequota_test.go (GitHub Actions / lint): ruleguard: Using 'AsSystemRestricted' is dangerous and should be accompanied by a comment explaining why it's ok and a nolint. (gocritic)

myWorkspace := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: org.Org.ID,
Expand Down Expand Up @@ -470,6 +472,8 @@
})

t.Run("ActivityBump", func(t *testing.T) {
t.Skip("Expected to fail. As long as quota & deadline are on the same " +
	"table and affect the same row, this will likely always fail.")
// +---------------------+----------------------------------+
// | W1 Quota Tx | |
// +---------------------+----------------------------------+
@@ -487,7 +491,7 @@
// +---------------------+----------------------------------+
// pq: could not serialize access due to concurrent update
ctx := testutil.Context(t, testutil.WaitShort)
ctx = dbauthz.AsSystemRestricted(ctx)

Check failure on line 494 in enterprise/coderd/workspacequota_test.go (GitHub Actions / lint): ruleguard: Using 'AsSystemRestricted' is dangerous and should be accompanied by a comment explaining why it's ok and a nolint. (gocritic)

myWorkspace := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: org.Org.ID,
@@ -528,7 +532,7 @@
// +---------------------+----------------------------------+
// | GetAllowance(w1) | |
// +---------------------+----------------------------------+
// | | UpdateWorkspaceBuildDeadline(w1) |
// | | UpdateWorkspaceLastUsedAt(w1) |
// +---------------------+----------------------------------+
// | UpdateBuildCost(w1) | |
// +---------------------+----------------------------------+
@@ -536,7 +540,7 @@
// +---------------------+----------------------------------+
// pq: could not serialize access due to concurrent update
ctx := testutil.Context(t, testutil.WaitShort)
ctx = dbauthz.AsSystemRestricted(ctx)

Check failure on line 543 in enterprise/coderd/workspacequota_test.go (GitHub Actions / lint): ruleguard: Using 'AsSystemRestricted' is dangerous and should be accompanied by a comment explaining why it's ok and a nolint. (gocritic)

myWorkspace := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: org.Org.ID,
@@ -549,11 +553,9 @@
one.GetQuota(ctx, t)
one.GetAllowance(ctx, t)

err := db.UpdateWorkspaceBuildDeadlineByID(ctx, database.UpdateWorkspaceBuildDeadlineByIDParams{
Deadline: dbtime.Now(),
MaxDeadline: dbtime.Now(),
UpdatedAt: dbtime.Now(),
ID: myWorkspace.Build.ID,
err := db.UpdateWorkspaceLastUsedAt(ctx, database.UpdateWorkspaceLastUsedAtParams{
ID: myWorkspace.Workspace.ID,
LastUsedAt: dbtime.Now(),
})
assert.NoError(t, err)

@@ -581,7 +583,7 @@
// +---------------------+----------------------------------+
// Works!
ctx := testutil.Context(t, testutil.WaitShort)
ctx = dbauthz.AsSystemRestricted(ctx)

Check failure on line 586 in enterprise/coderd/workspacequota_test.go (GitHub Actions / lint): ruleguard: Using 'AsSystemRestricted' is dangerous and should be accompanied by a comment explaining why it's ok and a nolint. (gocritic)
var err error

myWorkspace := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
@@ -610,7 +612,7 @@

// QuotaCommit 2 workspaces in different orgs.
// Workspaces do not share templates, owners, or orgs
t.Run("DoubleQuotaWorkspaces", func(t *testing.T) {
t.Run("DoubleQuotaUnrelatedWorkspaces", func(t *testing.T) {
// +---------------------+---------------------+
// | W1 Quota Tx | W2 Quota Tx |
// +---------------------+---------------------+
@@ -634,9 +636,8 @@
// +---------------------+---------------------+
// | | CommitTx() |
// +---------------------+---------------------+
// pq: could not serialize access due to read/write dependencies among transactions
ctx := testutil.Context(t, testutil.WaitLong)
ctx = dbauthz.AsSystemRestricted(ctx)

Check failure on line 640 in enterprise/coderd/workspacequota_test.go (GitHub Actions / lint): ruleguard: Using 'AsSystemRestricted' is dangerous and should be accompanied by a comment explaining why it's ok and a nolint. (gocritic)

myWorkspace := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: org.Org.ID,
@@ -667,9 +668,124 @@
assert.NoError(t, two.Done())
})

// Autobuild, then quota, then autobuild read again in the same tx
// https://www.richardstrnad.ch/posts/go-sql-how-to-get-detailed-error/
// https://blog.danslimmon.com/2024/01/10/why-transaction-order-matters-even-if-youre-only-reading/
// QuotaCommit 2 workspaces in different orgs.
// Workspaces do not share templates or orgs
t.Run("DoubleQuotaUserWorkspacesDiffOrgs", func(t *testing.T) {
// +---------------------+---------------------+
// | W1 Quota Tx | W2 Quota Tx |
// +---------------------+---------------------+
// | Begin Tx | |
// +---------------------+---------------------+
// | | Begin Tx |
// +---------------------+---------------------+
// | GetQuota(w1) | |
// +---------------------+---------------------+
// | GetAllowance(w1) | |
// +---------------------+---------------------+
// | UpdateBuildCost(w1) | |
// +---------------------+---------------------+
// | | UpdateBuildCost(w2) |
// +---------------------+---------------------+
// | | GetQuota(w2) |
// +---------------------+---------------------+
// | | GetAllowance(w2) |
// +---------------------+---------------------+
// | CommitTx() | |
// +---------------------+---------------------+
// | | CommitTx() |
// +---------------------+---------------------+
ctx := testutil.Context(t, testutil.WaitLong)
ctx = dbauthz.AsSystemRestricted(ctx)

Check failure on line 698 in enterprise/coderd/workspacequota_test.go (GitHub Actions / lint): ruleguard: Using 'AsSystemRestricted' is dangerous and should be accompanied by a comment explaining why it's ok and a nolint. (gocritic)

myWorkspace := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: org.Org.ID,
OwnerID: user.ID,
}).Do()

myOtherWorkspace := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: otherOrg.Org.ID, // Different org!
OwnerID: user.ID,
}).Do()

one := newCommitter(t, db, myWorkspace.Workspace, myWorkspace.Build)
two := newCommitter(t, db, myOtherWorkspace.Workspace, myOtherWorkspace.Build)

var _, _ = one, two
// Run order
one.GetQuota(ctx, t)
one.GetAllowance(ctx, t)

one.UpdateWorkspaceBuildCostByID(ctx, t, 10)

two.GetQuota(ctx, t)
two.GetAllowance(ctx, t)
two.UpdateWorkspaceBuildCostByID(ctx, t, 10)

// End commit
assert.NoError(t, one.Done())
assert.NoError(t, two.Done())
})

// QuotaCommit 2 workspaces in the same org.
// Workspaces do not share templates
t.Run("DoubleQuotaUserWorkspaces", func(t *testing.T) {
t.Skip("Setting a new build cost to a workspace in an org affects other " +
"workspaces in the same org. This is expected to fail.")
// +---------------------+---------------------+
// | W1 Quota Tx | W2 Quota Tx |
// +---------------------+---------------------+
// | Begin Tx | |
// +---------------------+---------------------+
// | | Begin Tx |
// +---------------------+---------------------+
// | GetQuota(w1) | |
// +---------------------+---------------------+
// | GetAllowance(w1) | |
// +---------------------+---------------------+
// | UpdateBuildCost(w1) | |
// +---------------------+---------------------+
// | | UpdateBuildCost(w2) |
// +---------------------+---------------------+
// | | GetQuota(w2) |
// +---------------------+---------------------+
// | | GetAllowance(w2) |
// +---------------------+---------------------+
// | CommitTx() | |
// +---------------------+---------------------+
// | | CommitTx() |
// +---------------------+---------------------+
// pq: could not serialize access due to read/write dependencies among transactions
ctx := testutil.Context(t, testutil.WaitLong)
ctx = dbauthz.AsSystemRestricted(ctx)

Check failure on line 759 in enterprise/coderd/workspacequota_test.go (GitHub Actions / lint): ruleguard: Using 'AsSystemRestricted' is dangerous and should be accompanied by a comment explaining why it's ok and a nolint. (gocritic)

myWorkspace := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: org.Org.ID,
OwnerID: user.ID,
}).Do()

myOtherWorkspace := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: org.Org.ID,
OwnerID: user.ID,
}).Do()

one := newCommitter(t, db, myWorkspace.Workspace, myWorkspace.Build)
two := newCommitter(t, db, myOtherWorkspace.Workspace, myOtherWorkspace.Build)

var _, _ = one, two
// Run order
one.GetQuota(ctx, t)
one.GetAllowance(ctx, t)

one.UpdateWorkspaceBuildCostByID(ctx, t, 10)

two.GetQuota(ctx, t)
two.GetAllowance(ctx, t)
two.UpdateWorkspaceBuildCostByID(ctx, t, 10)

// End commit
assert.NoError(t, one.Done())
assert.NoError(t, two.Done())
})
}

func deprecatedQuotaEndpoint(ctx context.Context, client *codersdk.Client, userID string) (codersdk.WorkspaceQuota, error) {