Commit 9fe1c98

Authored by Govindarajulu Varadarajan, committed by davem330

enic: fix rq disable
When the MTU is changed from 9000 to 1500 while there is a burst of inbound 9000-byte packets, the adaptor sometimes delivers 9000-byte packets into 1500-byte buffers. This causes memory corruption and sometimes a crash.

The root cause is a race condition in the adaptor between "RQ disable" clearing the descriptor mini-cache and the mini-cache valid bit being set by completion of a descriptor fetch. This can result in a stale RQ descriptor being cached and used when packets arrive; in that case the stale descriptor holds the old MTU value.

The solution is to write RQ->disable twice. The first write stops any further descriptor fetches, allowing the second write to clear the mini-cache valid bit without danger of a race.

Also, the check for rq->running becoming 0 after writing rq->enable to 0 was not done properly. When incoming packets are flooding the interface, rq->running pulses high for each dropped packet. Since the driver waited 10us between polls, it could observe rq->running = 1 a thousand times in a row even though the RQ was not actually stuck running, causing vnic_rq_disable() to fail falsely. The fix is to poll more than 1000 times with no delay between polls, to ensure we do not miss running going low.

On old adaptors, rq->enable needs to be re-written to 0 when posted_index is reset in vnic_rq_clean(), in order to keep rq->prefetch_index in sync.

Signed-off-by: Govindarajulu Varadarajan <_govind@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
1 parent 06bd2b1 commit 9fe1c98

File tree

1 file changed: +23 −9 lines changed


drivers/net/ethernet/cisco/enic/vnic_rq.c

Lines changed: 23 additions & 9 deletions
@@ -169,19 +169,28 @@ int vnic_rq_disable(struct vnic_rq *rq)
 {
 	unsigned int wait;
 	struct vnic_dev *vdev = rq->vdev;
+	int i;
 
-	iowrite32(0, &rq->ctrl->enable);
+	/* Due to a race condition with clearing RQ "mini-cache" in hw, we need
+	 * to disable the RQ twice to guarantee that stale descriptors are not
+	 * used when this RQ is re-enabled.
+	 */
+	for (i = 0; i < 2; i++) {
+		iowrite32(0, &rq->ctrl->enable);
 
-	/* Wait for HW to ACK disable request */
-	for (wait = 0; wait < 1000; wait++) {
-		if (!(ioread32(&rq->ctrl->running)))
-			return 0;
-		udelay(10);
-	}
+		/* Wait for HW to ACK disable request */
+		for (wait = 20000; wait > 0; wait--)
+			if (!ioread32(&rq->ctrl->running))
+				break;
+		if (!wait) {
+			vdev_neterr(vdev, "Failed to disable RQ[%d]\n",
+				    rq->index);
 
-	vdev_neterr(vdev, "Failed to disable RQ[%d]\n", rq->index);
+			return -ETIMEDOUT;
+		}
+	}
 
-	return -ETIMEDOUT;
+	return 0;
 }
 
 void vnic_rq_clean(struct vnic_rq *rq,
@@ -212,6 +221,11 @@ void vnic_rq_clean(struct vnic_rq *rq,
 		[fetch_index % VNIC_RQ_BUF_BLK_ENTRIES(count)];
 	iowrite32(fetch_index, &rq->ctrl->posted_index);
 
+	/* Anytime we write fetch_index, we need to re-write 0 to rq->enable
+	 * to re-sync internal VIC state.
+	 */
+	iowrite32(0, &rq->ctrl->enable);
+
 	vnic_dev_clear_desc_ring(&rq->ring);
 }
