
Commit fb420d5

Eric Dumazet authored and davem330 committed
tcp/fq: move back to CLOCK_MONOTONIC
In the recent TCP/EDT patch series, I switched TCP and sch_fq clocks from MONOTONIC to TAI, in order to match the choice made earlier for the sch_etf packet scheduler.

But sure enough, this broke some setups where the TAI clock jumps forward (by almost 50 years...), as reported by Leonard Crestez.

If we want to converge later, we'll probably need to add an skb field to differentiate the clock bases, or a socket option.

In the meantime, a UDP application will need to use the CLOCK_MONOTONIC base for its SCM_TXTIME timestamps if using the fq packet scheduler.

Fixes: 72b0094 ("tcp: switch tcp_clock_ns() to CLOCK_TAI base")
Fixes: 142537e ("net_sched: sch_fq: switch to CLOCK_TAI")
Fixes: fd2bca2 ("tcp: switch internal pacing timer to CLOCK_TAI")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Leonard Crestez <leonard.crestez@nxp.com>
Tested-by: Leonard Crestez <leonard.crestez@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
1 parent 0ed3015 commit fb420d5

File tree

3 files changed: +5 −5 lines changed

include/net/tcp.h

Lines changed: 1 addition & 1 deletion

@@ -732,7 +732,7 @@ void tcp_send_window_probe(struct sock *sk);
 
 static inline u64 tcp_clock_ns(void)
 {
-	return ktime_get_tai_ns();
+	return ktime_get_ns();
 }
 
 static inline u64 tcp_clock_us(void)

net/ipv4/tcp_timer.c

Lines changed: 1 addition & 1 deletion

@@ -758,7 +758,7 @@ void tcp_init_xmit_timers(struct sock *sk)
 {
 	inet_csk_init_xmit_timers(sk, &tcp_write_timer, &tcp_delack_timer,
				  &tcp_keepalive_timer);
-	hrtimer_init(&tcp_sk(sk)->pacing_timer, CLOCK_TAI,
+	hrtimer_init(&tcp_sk(sk)->pacing_timer, CLOCK_MONOTONIC,
		     HRTIMER_MODE_ABS_PINNED_SOFT);
 	tcp_sk(sk)->pacing_timer.function = tcp_pace_kick;

net/sched/sch_fq.c

Lines changed: 3 additions & 3 deletions

@@ -412,7 +412,7 @@ static void fq_check_throttled(struct fq_sched_data *q, u64 now)
 static struct sk_buff *fq_dequeue(struct Qdisc *sch)
 {
 	struct fq_sched_data *q = qdisc_priv(sch);
-	u64 now = ktime_get_tai_ns();
+	u64 now = ktime_get_ns();
 	struct fq_flow_head *head;
 	struct sk_buff *skb;
 	struct fq_flow *f;
@@ -776,7 +776,7 @@ static int fq_init(struct Qdisc *sch, struct nlattr *opt,
 	q->fq_trees_log = ilog2(1024);
 	q->orphan_mask = 1024 - 1;
 	q->low_rate_threshold = 550000 / 8;
-	qdisc_watchdog_init_clockid(&q->watchdog, sch, CLOCK_TAI);
+	qdisc_watchdog_init_clockid(&q->watchdog, sch, CLOCK_MONOTONIC);
 
 	if (opt)
 		err = fq_change(sch, opt, extack);
@@ -831,7 +831,7 @@ static int fq_dump_stats(struct Qdisc *sch, struct gnet_dump *d)
 	st.flows_plimit = q->stat_flows_plimit;
 	st.pkts_too_long = q->stat_pkts_too_long;
 	st.allocation_errors = q->stat_allocation_errors;
-	st.time_next_delayed_flow = q->time_next_delayed_flow - ktime_get_tai_ns();
+	st.time_next_delayed_flow = q->time_next_delayed_flow - ktime_get_ns();
 	st.flows = q->flows;
 	st.inactive_flows = q->inactive_flows;
 	st.throttled_flows = q->throttled_flows;
