Some FIFO WIP #2157
base: master
Conversation
Summary of Changes
Hello @otoolep, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the consumption mechanism of the `cdc/fifo.go` queue. The traditional blocking `Dequeue()` method has been replaced with a more idiomatic Go channel-based approach, allowing consumers to receive items asynchronously via a dedicated output channel. This change aims to simplify client-side consumption patterns and streamline the internal item delivery logic.
Highlights
- Changed Dequeue Mechanism: The explicit `Dequeue()` method has been removed. Consumers now receive items from the queue via a new `C()` method, which returns a `<-chan Item`.
- New `Item` Struct: A new `Item` struct, comprising `Idx` (`uint64`) and `Val` (`[]byte`), has been introduced to encapsulate the data received by clients from the new item channel.
- Internal Refactoring of `run` Goroutine: The internal `dequeueChan` and the `waitingDequeues` logic within the `run` goroutine have been removed. The `run` goroutine now proactively drains available items from the database and sends them asynchronously to the new `itemsChan`.
- API Surface Reorganization: Existing public methods such as `Enqueue`, `DeleteRange`, `HighestKey`, `Empty`, and `HasNext` have been relocated to appear earlier in the `Queue` struct definition, improving API discoverability.
Code Review
This pull request refactors the `fifo.Queue` to change its consumption model from a pull-based `Dequeue()` method to a push-based channel returned by the `C()` method. This is a significant API change that introduces a critical issue with the new item draining logic, a high-severity issue where the consumer channel `itemsChan` is not closed on shutdown, and a high-severity design issue where the item draining loop can block other queue operations, making the queue unresponsive.
```go
// XXX no guarantee this goroutine will run in order with a later goroutine.
nextKey = nk
item := Item{Idx: idx, Val: val}
go func(it Item) {
	q.itemsChan <- it
}(item)
```
This draining loop has a major issue that breaks the queue's contract: spawning a new goroutine for each item send (the `go func` on line 270) introduces a race condition. The Go scheduler provides no guarantee on the execution order of goroutines, which means items can be sent to `itemsChan` out of order. This violates the fundamental FIFO property of the queue. The `XXX` comment on line 267 acknowledges this.
```go
case <-q.done:
	for _, waiter := range waitingDequeues {
		waiter.respChan <- dequeueResp{err: ErrQueueClosed}
	}
	return
```
The `C()` method's documentation states that the returned channel "closes when Close() is called". However, when `<-q.done` is received, the `run` goroutine returns without closing `q.itemsChan`. This will cause consumers of `C()` to block forever instead of detecting that the queue has been closed. The channel should be closed before returning to fulfill the API contract.
```go
case <-q.done:
	close(q.itemsChan)
	return
```
```diff
 	for nextKey != nil {
+		var idx uint64
+		var val, nk []byte
-		var resp dequeueResp
-		err := q.db.View(func(tx *bbolt.Tx) error {
+		if err := q.db.View(func(tx *bbolt.Tx) error {
 			c := tx.Bucket(bucketName).Cursor()
-			_, val := c.Seek(nextKey)
-			if val == nil {
-				return fmt.Errorf("item not found for key %x", nextKey)
-			}
-			resp.idx = btouint64(nextKey)
-			resp.val = make([]byte, len(val))
-			copy(resp.val, val)
-			nk, _ := c.Next()
-			if nk != nil {
-				copy(nextKey, nk)
-			} else {
-				nextKey = nil // No more items available
+			k, v := c.Seek(nextKey)
+			if v == nil {
+				return fmt.Errorf("missing key %x", nextKey)
 			}
+			idx = btouint64(k)
+			val = append([]byte(nil), v...)
+			if k2, _ := c.Next(); k2 != nil {
+				nk = append([]byte(nil), k2...)
+			}
 			return nil
-		})
-		resp.err = err
-		waiter.respChan <- resp
+		}); err != nil {
+			break
+		}
+
+		// Send asynchronously so we never block here
+		// XXX no guarantee this goroutine will run in order with a later goroutine.
+		nextKey = nk
+		item := Item{Idx: idx, Val: val}
+		go func(it Item) {
+			q.itemsChan <- it
+		}(item)
 	}
```

```diff
 // Enqueue adds an item to the queue. Do not call Enqueue on a closed queue.
 func (q *Queue) Enqueue(idx uint64, item []byte) error {
 	req := enqueueReq{idx: idx, item: item, respChan: make(chan enqueueResp)}
 	q.enqueueChan <- req
 	resp := <-req.respChan
 	return resp.err
 }

-// Dequeue removes and returns the next available item from the queue.
-// If the queue is empty, Dequeue blocks until an item is available.
-// Do not call Dequeue on a closed queue.
-func (q *Queue) Dequeue() (uint64, []byte, error) {
-	req := dequeueReq{respChan: make(chan dequeueResp)}
-	q.dequeueChan <- req
-	resp := <-req.respChan
-	return resp.idx, resp.val, resp.err
-}
-
 // DeleteRange deletes all items in the queue with indices less than or equal to idx.
 func (q *Queue) DeleteRange(idx uint64) error {
 	req := deleteRangeReq{
 		idx: idx,
 		respChan: make(chan error),
```
The item draining loop can block other queue operations, making the queue unresponsive. If there are many items to drain, this loop will prevent the `run` goroutine from handling other requests such as `Enqueue`, `DeleteRange`, or `Close()`. While the `go func` attempts to mitigate blocking on send, it introduces the ordering issue.
No description provided.