Consider switching from chrono to jiff #7852


Open
drinkcat opened this issue Apr 28, 2025 · 16 comments
@drinkcat
Contributor

drinkcat commented Apr 28, 2025

Based on discussion in #7849, it seems like it would be interesting to use jiff instead of chrono, at the very least in date: this would make our life easier when handling timezones.

Dirty branch here: https://github.com/drinkcat/coreutils/tree/jiff-dirty .

I'll keep modifying this comment as I find issues.

  • Format specifiers that are not supported:
    • %q (quarters)
    • %N (nanoseconds, for chrono we manually replaced this with %f)
    • %X (localization issue, can be replaced manually with %H:%M:%S for now, I think localization is also TBD in coreutils)
    • %:::z (see Some input in date are panicking the binary #3780)
  • In ls, we need to print dates using the same format over and over again. chrono provides an optimized API for that use case: we parse the format string once, then use that to print many times (see fc6b896). I didn't do any benchmarking yet, but I suspect this might be an issue.

@BurntSushi FYI

@drinkcat
Contributor Author

Prototype in 133b7cb, updating uu_date only. Basic stuff seems to work, apart from 2 tests failing because of missing format support (see above):

-rwxr-xr-x 2 drinkcat drinkcat 4828144 Apr 27 13:57 target/release/date ## HEAD
-rwxr-xr-x 2 drinkcat drinkcat 3279432 Apr 28 15:42 target/release/date ## jiff
-rwxr-xr-x 2 drinkcat drinkcat 2949080 Apr 27 13:58 target/release/date ## with #7849

CI in progress here: https://github.com/drinkcat/coreutils/actions?query=branch%3Ajiff-dirty

@BurntSushi

Aye. For %c, %r, %X and %x (all available in GNU date), that should be addressed by BurntSushi/jiff#338

@drinkcat
Contributor Author

Converted ls as well, same branch https://github.com/drinkcat/coreutils/tree/jiff-dirty .

Size is "okay":

cargo build -r -p uu_ls && ls -l target/release/ls
-rwxr-xr-x 2 drinkcat drinkcat 3769336 Apr 27 13:57 target/release/ls # main
-rwxr-xr-x 2 drinkcat drinkcat 2048312 Apr 28 21:18 target/release/ls
-rwxr-xr-x 2 drinkcat drinkcat 1894176 Apr 27 13:59 target/release/ls # with #7849

Performance loss isn't good (14%). There's some minor optimization I can do (not calling Timestamp::now() repeatedly), but it'd be best to have a way to "prescan" the format string:

 cargo build -r -p uu_ls && taskset -c 0 hyperfine --warmup 100 -L ls target/release/ls,./ls-main "{ls} -lR /var/lib .git || true"
Benchmark 1: target/release/ls -lR /var/lib .git || true
  Time (mean ± σ):      33.9 ms ±   1.5 ms    [User: 14.1 ms, System: 19.0 ms]
  Range (min … max):    33.2 ms …  47.3 ms    86 runs
  
Benchmark 2: ./ls-main -lR /var/lib .git || true
  Time (mean ± σ):      29.6 ms ±   0.3 ms    [User: 12.6 ms, System: 16.4 ms]
  Range (min … max):    29.0 ms …  30.7 ms    98 runs
 
Summary
  ./ls-main -lR /var/lib .git || true ran
    1.14 ± 0.05 times faster than target/release/ls -lR /var/lib .git || true

@BurntSushi

I don't think scanning the format string is the long pole in the tent here. I don't think Jiff's strftime implementation has had any optimization work done to it, so there is likely some low-hanging fruit there. I'll take a closer look today or tomorrow.

@drinkcat
Contributor Author

Maybe! For reference, the chrono implementation without "pre-scan" is 4% faster than jiff's ("pre-scanning" saves another 10%) -- all within that ls command above; the raw formatting performance gap will be larger, of course.

@BurntSushi

So I've been trying to reproduce your benchmark, but haven't had much luck:

$ taskset -c 0 hyperfine --warmup 10 -L ls target/release/ls-jiff-0.2.10,./target/release/ls-main "{ls} -lR /var || true"
Benchmark 1: target/release/ls-jiff-0.2.10 -lR /var || true
  Time (mean ± σ):      54.7 ms ±   0.3 ms    [User: 24.9 ms, System: 29.2 ms]
  Range (min … max):    54.0 ms …  55.5 ms    54 runs

Benchmark 2: ./target/release/ls-main -lR /var || true
  Time (mean ± σ):      63.2 ms ±   0.5 ms    [User: 33.3 ms, System: 29.3 ms]
  Range (min … max):    62.4 ms …  65.4 ms    47 runs

Summary
  target/release/ls-jiff-0.2.10 -lR /var || true ran
    1.16 ± 0.01 times faster than ./target/release/ls-main -lR /var || true

I've tried a few different directories, but your branch seems consistently faster than main.

If I look at a profile (on a bigger directory, my checkout of the Linux kernel), it looks like Jiff is a very small percentage of time here?

[screenshot: profile, with Jiff only a small fraction of runtime]

So I'm curious if perhaps I am missing a component of the benchmark here. I built ls with cargo build -r -p uu_ls.

BurntSushi added a commit to BurntSushi/jiff that referenced this issue Apr 28, 2025
This actually never got any optimization attention, and
it looks like its perf matters to coreutils. So let's
take a look at it!

Ref uutils/coreutils#7852
@BurntSushi

This PR brought in some perf improvements to Jiff's strftime. Can you give it a try? I'm not quite ready to put out a release yet (will do so later today or tomorrow), so you'll want a [patch.crates-io] somewhere.
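
For anyone following along, such a [patch.crates-io] override in the workspace Cargo.toml would look something like this (pointing at whatever git source you want to test):

```toml
# Override the registry's jiff with the git version
# carrying the strftime improvements.
[patch.crates-io]
jiff = { git = "https://github.com/BurntSushi/jiff" }
```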

@BurntSushi

One thing that also sticks out to me is that TimeStyle::format returns a String. It was doing that before, of course, but if that's really a bottleneck, it might be worth refactoring it to write directly into a buffer or a std::io::Write implementation or whatever. I'm not sure how difficult that is for you. But Jiff has StdFmtWrite and StdIoWrite adapters to make this easy on Jiff's end at least. You can see the strftime benchmark for an example.

@drinkcat
Contributor Author

drinkcat commented Apr 29, 2025

First, thanks for being so responsive! This is awesome ,-)

So I've been trying to reproduce your benchmark, but haven't had much luck:

Are you sure that you are using the latest main? (or at least the base of this stack) I recently did a lot of optimization work.

If I look at a profile (on a bigger directory, my checkout of the Linux kernel), it looks like Jiff is a very small percentage of time here?

I like samply. In debug mode it's easier to see what takes time (ideally we should create a profiling profile, but I think it's ok for ballpark...)
cargo build -p uu_ls && samply record target/debug/ls -lR /var/lib .git > /dev/null (piping to /dev/null makes quite a bit of difference)

https://share.firefox.dev/4jziUdL
15% uu_ls::display_date, a lot of this in jiff subfunctions.

BurntSushi/jiff#338 brought in some perf improvements to Jiff's strftime.

jiff@f21740ee5fc577c8cf4c2cab18f4049124203c3e doesn't seem to help at all ,-(

cargo build --config 'patch.crates-io.jiff.path="../jiff"' -r -p uu_ls && taskset -c 0 hyperfine --warmup 100 -L ls target/release/ls,./ls-jiff-0.2.10,./ls-main "{ls} -lR /var/lib .git || true"

Benchmark 1: target/release/ls -lR /var/lib .git || true
  Time (mean ± σ):      34.0 ms ±   2.3 ms    [User: 14.0 ms, System: 19.4 ms]
  Range (min … max):    32.8 ms …  48.1 ms    84 runs
  
Benchmark 2: ./ls-jiff-0.2.10 -lR /var/lib .git || true
  Time (mean ± σ):      33.8 ms ±   2.4 ms    [User: 14.2 ms, System: 19.0 ms]
  Range (min … max):    32.8 ms …  51.7 ms    85 runs
  
Benchmark 3: ./ls-main -lR /var/lib .git || true
  Time (mean ± σ):      29.4 ms ±   0.2 ms    [User: 12.7 ms, System: 16.4 ms]
  Range (min … max):    29.0 ms …  30.0 ms    99 runs
 
Summary
  ./ls-main -lR /var/lib .git || true ran
    1.15 ± 0.08 times faster than ./ls-jiff-0.2.10 -lR /var/lib .git || true
    1.16 ± 0.08 times faster than target/release/ls -lR /var/lib .git || true

One thing that also sticks out to me, is that TimeStyle::format is returning a String.

Yes, there are still optimizations to be chased there; there are likely many unneeded string copies (some may be needed for alignment, some probably not; we could audit).

So, yes, this saves maybe 1% performance (but it's difficult to measure):

        //output_display.extend(display_date(md, config).as_bytes());
        if let Some(time) = get_time(md, config) {
            write!(output_display, "{}", config.time_style.format(time))?;
        } else {
            output_display.extend(b"???");
        }

@BurntSushi

BurntSushi commented Apr 29, 2025

OK, it looks like I was using main from your fork, which was probably out of date. Updating to the latest main and re-building ls in release mode, I get:

$ taskset -c 0 hyperfine --warmup 10 -L ls ./target/release/ls-jiff-pr,./target/release/ls-jiff-0.2.10,./target/release/ls-main "{ls} --full-time -lR /usr || true"
Benchmark 1: ./target/release/ls-jiff-pr --full-time -lR /usr || true
  Time (mean ± σ):     867.6 ms ±   2.6 ms    [User: 412.0 ms, System: 453.4 ms]
  Range (min … max):   864.3 ms … 872.9 ms    10 runs

Benchmark 2: ./target/release/ls-jiff-0.2.10 --full-time -lR /usr || true
  Time (mean ± σ):     875.1 ms ±   1.3 ms    [User: 416.9 ms, System: 456.0 ms]
  Range (min … max):   873.5 ms … 877.6 ms    10 runs

Benchmark 3: ./target/release/ls-main --full-time -lR /usr || true
  Time (mean ± σ):     856.5 ms ±   1.4 ms    [User: 404.0 ms, System: 450.4 ms]
  Range (min … max):   854.6 ms … 859.0 ms    10 runs

Summary
  ./target/release/ls-main --full-time -lR /usr || true ran
    1.01 ± 0.00 times faster than ./target/release/ls-jiff-pr --full-time -lR /usr || true
    1.02 ± 0.00 times faster than ./target/release/ls-jiff-0.2.10 --full-time -lR /usr || true

Given that ls-jiff-pr is using a strftime that is twice as fast as the one in jiff-0.2.10, I think this suggests that the perf of strftime is not significant for this particular benchmark.

As for profiling, doing it in debug mode without optimizations enabled is not meaningful. That profile you shared is totally bunk for measuring real-world performance because there's no optimization happening, e.g., almost no inlining. The number of samples there is also super low.

I also use samply, but it and perf record use the same sampling architecture for profiling. The right way to do this is to keep optimizations enabled but also enable debug symbols.

Let me try that.

So from your fork, here's how I built ls-jiff-pr:

$ git remote -v
origin  git@github.com:drinkcat/coreutils (fetch)
origin  git@github.com:drinkcat/coreutils (push)

$ git rev-parse HEAD
982d533627bfc313aa64e88cc252b2154855aae7

$ git diff Cargo.toml
diff --git a/Cargo.toml b/Cargo.toml
index 87d33b586..9833110fc 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -269,6 +269,9 @@ license = "MIT"
 readme = "README.package.md"
 version = "0.0.30"

+[patch.crates-io]
+jiff = { git = "https://github.com/BurntSushi/jiff" }
+
 [workspace.dependencies]
 ansi-width = "0.1.0"
 bigdecimal = "0.4"
@@ -553,6 +556,10 @@ name = "uudoc"
 path = "src/bin/uudoc.rs"
 required-features = ["uudoc"]

+[profile.profiling]
+inherits = "release"
+debug = true
+
 # The default release profile. It contains all optimizations, without
 # sacrificing debug info. With this profile (like in the standard
 # release profile), the debug info and the stack traces will still be available.

$ cargo b -p uu_ls --profile profiling
    Finished `profiling` profile [optimized + debuginfo] target(s) in 18.44s

$ cp target/profiling/ls target/profiling/ls-jiff-pr

Then I did the same for main:

$ git remote -v
origin  git@github.com:uutils/coreutils (fetch)
origin  git@github.com:uutils/coreutils (push)

$ git rev-parse HEAD
053e6b4d08f41a8783f81bc54d62ffa912beb65e

$ git diff Cargo.toml
diff --git a/Cargo.toml b/Cargo.toml
index cde946b68..2048c316b 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -548,6 +548,10 @@ name = "uudoc"
 path = "src/bin/uudoc.rs"
 required-features = ["uudoc"]

+[profile.profiling]
+inherits = "release"
+debug = true
+
 # The default release profile. It contains all optimizations, without
 # sacrificing debug info. With this profile (like in the standard
 # release profile), the debug info and the stack traces will still be available.

$ cargo b -p uu_ls --profile profiling
    Finished `profiling` profile [optimized + debuginfo] target(s) in 16.95s

$ cp target/profiling/ls ../drinkcat-coreutils/target/profiling/ls-main

In order to make it possible to fully reproduce this, I decided to run ls on my checkout of the Linux kernel (note the git commit sha):

$ git remote -v
origin  git@github.com:torvalds/linux (fetch)
origin  git@github.com:torvalds/linux (push)
$ git rev-parse HEAD
dd83757f6e686a2188997cb58b5975f744bb7786

And here's the actual benchmark. I also added --full-time to try and get a chunkier strftime operation:

$ taskset -c 0 hyperfine --warmup 10 -L ls ./target/profiling/ls-jiff-pr,./target/profiling/ls-main "{ls} --full-time -lR /home/andrew/data/benchsuite/linux || true"
Benchmark 1: ./target/profiling/ls-jiff-pr --full-time -lR /home/andrew/data/benchsuite/linux || true
  Time (mean ± σ):     197.5 ms ±   0.3 ms    [User: 93.4 ms, System: 103.6 ms]
  Range (min … max):   196.9 ms … 198.2 ms    15 runs

Benchmark 2: ./target/profiling/ls-main --full-time -lR /home/andrew/data/benchsuite/linux || true
  Time (mean ± σ):     187.5 ms ±   0.3 ms    [User: 88.0 ms, System: 98.9 ms]
  Range (min … max):   187.0 ms … 188.0 ms    15 runs

Summary
  ./target/profiling/ls-main --full-time -lR /home/andrew/data/benchsuite/linux || true ran
    1.05 ± 0.00 times faster than ./target/profiling/ls-jiff-pr --full-time -lR /home/andrew/data/benchsuite/linux || true

Now let's profile. And indeed, in my profile above, I was redirecting output to /dev/null, and I do that here too:

$ perf record -R -m 4096 --all-cpus -F 10000 -o prof.data -g --call-graph dwarf ./target/profiling/ls-main --full-time -lR /home/andrew/data/benchsuite/linux > /dev/null
$ perf report -M att -i prof.data

What I see is that almost no time is spent in formatting:

[screenshot: perf report, with almost no time spent in formatting]

I also see that the vast majority of time is being spent in directory traversal. As the author of walkdir and ignore, this is about what I'd expect:

[screenshot: perf report, with directory traversal dominating]

Now, I also ran this under samply, including redirecting to /dev/null as I did above. And here's the profile I get: https://share.firefox.dev/42Tun0I

Then I did the same, but with Jiff (using ls-jiff-pr, as compiled above): https://share.firefox.dev/3SbMutz

Looking at the flamegraphs, it seems like with Jiff, formatting has almost disappeared from the profile. And even then, and similar to Chrono, it looks like a big chunk of the time is just spent in allocating the String. (Although this is somewhat difficult to tease apart.)

So I took a closer look at the actual code, and it looks really easy to avoid this intermediate alloc. (Your approach doesn't quite do it, because your format function is still seemingly returning a String.) Here's my patch:

diff --git a/src/uu/ls/src/ls.rs b/src/uu/ls/src/ls.rs
index 140f50fb5..e59a2c289 100644
--- a/src/uu/ls/src/ls.rs
+++ b/src/uu/ls/src/ls.rs
@@ -279,26 +279,43 @@ fn is_recent(time: Timestamp) -> bool {
 
 impl TimeStyle {
     /// Format the given time according to this time format style.
-    fn format(&self, date: Zoned) -> String {
+    fn format(&self, date: Zoned, out: &mut Vec<u8>) {
+        use jiff::fmt::{strtime::BrokenDownTime, StdIoWrite};
+
         let recent = is_recent(date.timestamp());
+        let tm = BrokenDownTime::from(&date);
+        let out = StdIoWrite(out);
+
         match (self, recent) {
-            (Self::FullIso, _) => date.strftime("%Y-%m-%d %H:%M:%S.%f %z").to_string(),
-            (Self::LongIso, _) => date.strftime("%Y-%m-%d %H:%M").to_string(),
-            (Self::Iso, true) => date.strftime("%m-%d %H:%M").to_string(),
-            (Self::Iso, false) => date.strftime("%Y-%m-%d ").to_string(),
+            (Self::FullIso, _) => {
+                tm.format("%Y-%m-%d %H:%M:%S.%f %z", out).unwrap();
+            }
+            (Self::LongIso, _) => {
+                tm.format("%Y-%m-%d %H:%M", out).unwrap();
+            }
+            (Self::Iso, true) => {
+                tm.format("%m-%d %H:%M", out).unwrap();
+            }
+            (Self::Iso, false) => {
+                tm.format("%Y-%m-%d ", out).unwrap();
+            }
             // spell-checker:ignore (word) datetime
             //In this version of chrono translating can be done
             //The function is chrono::datetime::DateTime::format_localized
             //However it's currently still hard to get the current pure-rust-locale
             //So it's not yet implemented
-            (Self::Locale, true) => date.strftime("%b %e %H:%M").to_string(),
-            (Self::Locale, false) => date.strftime("%b %e  %Y").to_string(),
+            (Self::Locale, true) => {
+                tm.format("%b %e %H:%M", out).unwrap();
+            }
+            (Self::Locale, false) => {
+                tm.format("%b %e  %Y", out).unwrap();
+            }
             (Self::Format(fmt), _) => {
                 // Workaround for unsupported specifiers (TODO: remove)
                 let fmt = fmt.replace("%X", "%H:%M:%S");
                 let fmt = fmt.replace("%N", "%9f");
 
-                date.strftime(&fmt).to_string()
+                tm.format(&fmt, out).unwrap();
             }
         }
     }
@@ -2878,7 +2899,8 @@ fn display_item_long(
         };
 
         output_display.extend(b" ");
-        output_display.extend(display_date(md, config).as_bytes());
+        display_date(md, config, &mut output_display);
         output_display.extend(b" ");
 
         let item_name = display_item_name(
@@ -3082,10 +3104,10 @@ fn get_time(md: &Metadata, config: &Config) -> Option<Zoned> {
     time.try_into().ok()
 }
 
-fn display_date(metadata: &Metadata, config: &Config) -> String {
+fn display_date(metadata: &Metadata, config: &Config, out: &mut Vec<u8>) {
     match get_time(metadata, config) {
-        Some(time) => config.time_style.format(time),
-        None => "???".into(),
+        Some(time) => config.time_style.format(time, out),
+        None => out.extend_from_slice(b"???"),
     }
 }

And then re-running the benchmark:

$ taskset -c 0 hyperfine --warmup 10 -L ls ./target/profiling/ls-jiff-pr,./target/profiling/ls-main,./target/profiling/ls-jiff-no-alloc "{ls} --full-time -lR /home/andrew/data/benchsuite/linux || true"
Benchmark 1: ./target/profiling/ls-jiff-pr --full-time -lR /home/andrew/data/benchsuite/linux || true
  Time (mean ± σ):     197.2 ms ±   0.5 ms    [User: 98.5 ms, System: 98.1 ms]
  Range (min … max):   196.6 ms … 198.3 ms    15 runs

Benchmark 2: ./target/profiling/ls-main --full-time -lR /home/andrew/data/benchsuite/linux || true
  Time (mean ± σ):     186.8 ms ±   0.5 ms    [User: 87.1 ms, System: 99.2 ms]
  Range (min … max):   185.9 ms … 187.4 ms    15 runs

Benchmark 3: ./target/profiling/ls-jiff-no-alloc --full-time -lR /home/andrew/data/benchsuite/linux || true
  Time (mean ± σ):     184.3 ms ±   0.4 ms    [User: 88.5 ms, System: 95.2 ms]
  Range (min … max):   183.7 ms … 185.0 ms    16 runs

Summary
  ./target/profiling/ls-jiff-no-alloc --full-time -lR /home/andrew/data/benchsuite/linux || true ran
    1.01 ± 0.00 times faster than ./target/profiling/ls-main --full-time -lR /home/andrew/data/benchsuite/linux || true
    1.07 ± 0.00 times faster than ./target/profiling/ls-jiff-pr --full-time -lR /home/andrew/data/benchsuite/linux || true

Which gives a slight improvement. This IMO pretty firmly establishes that strftime formatting is not a sizeable bottleneck in this program. It seems to matter somewhat, but the runtime is dominated by other things. Moreover, Chrono's and Jiff's strftime performance seem to be about on par with each other. At least, on Jiff master. You can see the microbenchmark results here.

@drinkcat
Contributor Author

drinkcat commented Apr 29, 2025

Interesting, thanks. Yeah, I just added the profiling profile; best to use that (and yes, my sampling was too little) -> #7862.

I wouldn't call what you did "easy": I couldn't find the right API to deal with the Display (I didn't post that part, I should have -- my format returned a Display, and I basically dropped the to_string). Looks like I shouldn't use strftime at all ,-P But thanks, let me look at that ,-)

Benchmarks are interesting... I also see extremely little difference on /usr, but /var/lib .git shows more of a difference (on my system...), and {ls} --full-time -lR .git .git .git .git as well. Thinking about it now, I guess there's some cache effect happening on smaller trees, so the formatting ends up mattering more? (But printing hot, small-ish trees is arguably not a terribly interesting or relevant use case...)

  ./ls-main --full-time -lR .git .git .git .git || true ran
    1.13 ± 0.06 times faster than ./ls-jiff-0.2.10 --full-time -lR .git .git .git .git || true
    1.16 ± 0.07 times faster than target/release/ls --full-time -lR .git .git .git .git || true

(an idea of the tree sizes:

$ ls --full-time -lR .git | wc -l
6446
$ ls --full-time -lR /var/lib | wc -l
9429
$ ls --full-time -lR /usr | wc -l
456490

)

Anyway, thanks again, I'll integrate your changes, so that we're on par performance wise ,-) And I'll keep looking at what it'll take to convert more of the chrono usage.

@BurntSushi

I didn't post that part, I should have -- format returned a Display

Oh, that might work too. I avoided that because my approach skips std's formatting machinery, which has overhead of its own.

I'm eager to unblock y'all on switching to Jiff. I plan to get the remaining strftime items you need added in the next couple days. It's next on my list.

@BurntSushi

Oh and sorry, by "easy," I meant that your code was structured in a way where the intermediate alloc wasn't load bearing. I definitely grant that discovering the right APIs in Jiff may not be easy, especially the lower level ones.

When it comes to parsing and formatting, Jiff is more like an onion. There's the nice and convenient APIs on the datetime/duration types directly, and then there's the more flexible but less convenient APIs inside of jiff::fmt.

drinkcat added a commit to drinkcat/coreutils that referenced this issue Apr 29, 2025
From code provided in uutils#7852 by @BurntSushi.

Depending on the benchmarks, there is _still_ a small performance
difference (~4%) vs main, but it's seen mostly on small trees
getting printed repeatedly, which is probably not a terribly
interesting use case.
@BurntSushi

RE lenient strftime formatting: BurntSushi/jiff#350

One thing worth pointing out here is that, as far as I can tell, POSIX itself doesn't seem to require this sort of lenient formatting.

My understanding is that uutils is trying to port GNU coreutils, not POSIX coreutils, which I think means any sub-optimal user experience GNU offers, even one that could in theory be fixed within the boundary of POSIX compatibility, isn't something uutils will change. I think this is somewhat unfortunate, but I get it.

@BurntSushi

@drinkcat Do you want to give current master a whirl and see how it works for you? You should now have %N, %::z, %:::z, %c, %x, %X, %r and %q. And lenient formatting can now be opted into. Example:

use jiff::{civil, fmt::strtime::{BrokenDownTime, Config}};

let tm = BrokenDownTime::from(civil::date(2025, 4, 30));
assert_eq!(
    tm.to_string("%F %z").unwrap_err().to_string(),
    "strftime formatting failed: %z failed: \
     requires offset to format time zone offset",
);

// Now enable lenient mode:
let config = Config::new().lenient(true);
assert_eq!(
    tm.to_string_with_config(&config, "%F %z").unwrap(),
    "2025-04-30 %z",
);

// Lenient mode also applies when using an unsupported
// or unrecognized conversion specifier. This would
// normally return an error for example:
assert_eq!(
    tm.to_string_with_config(&config, "%+ %0").unwrap(),
    "%+ %0",
);

drinkcat added a commit to drinkcat/coreutils that referenced this issue May 1, 2025
From code provided in uutils#7852 by @BurntSushi.

Depending on the benchmarks, there is _still_ a small performance
difference (~4%) vs main, but it's seen mostly on small trees
getting printed repeatedly, which is probably not a terribly
interesting use case.
@drinkcat
Contributor Author

drinkcat commented May 1, 2025

@BurntSushi awesome, thanks! Looks like the lenient code and the new formats work! I just updated my branch, so we'll see what CI says, but local testing is good at least ,-)

Edit: CI passes too!
