[BUG] Performance is unusably slow #653
Comments
You know what, this needs to be fixed. This is bad. Hope this helps.
Please attach a repro (link to a GH repository) so that we can take a look.
You cannot exactly compare React Router with Next. Next does a lot more than just serving a route. When you request a page on Next.js, it requires loading a full NextServer, which by itself is quite heavy. After that, it needs to fetch the cache entry or render the route in the case of SSR.

There's a recent pull request that has been merged, which should significantly reduce CPU times. You can find more details about it here: #656. If you want an explanation, see #656 (comment).

It's also worth noting that your tests are only measuring a full cold start. Based on the KV graph you shared, it seems that KV didn't even have enough time to replicate to the regional cache.
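The request path described above (load the server, check the incremental cache, fall back to SSR) could be sketched roughly as follows. This is a hypothetical illustration, not the actual @opennextjs/cloudflare implementation; all names (`handleRequest`, `renderRoute`, `incrementalCache`) are made up.

```typescript
// Hypothetical sketch of the per-request flow; not the real implementation.

type CacheEntry = { html: string };

const incrementalCache = new Map<string, CacheEntry>();
let renderCount = 0; // counts full SSR passes, the expensive step

// Placeholder for a full SSR render, which dominates cold CPU time.
async function renderRoute(route: string): Promise<CacheEntry> {
  renderCount++;
  return { html: `<html>${route}</html>` };
}

export async function handleRequest(route: string): Promise<string> {
  // 1. Loading the full NextServer is a fixed, non-trivial cost per isolate.
  // 2. The incremental cache (KV in this setup) is consulted next.
  const hit = incrementalCache.get(route);
  if (hit) return hit.html;
  // 3. Cache miss: pay the SSR cost and populate the cache for next time.
  const entry = await renderRoute(route);
  incrementalCache.set(route, entry);
  return entry.html;
}

export function getRenderCount(): number {
  return renderCount;
}
```

The point of the sketch is that even a cache hit still pays the fixed server-load cost in step 1, which is why cold TTFB can be high regardless of caching.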
I wanted to compare SSR vs. SSR; I understand Next.js is much heavier. I was under the impression that SSR would be a large contributor to the slower performance.
Great, can’t wait. :) Will Next.js’s adapters functionality improve this as well? It seems a little nuts that we have to load a full Next app on every request.
I’ve allowed the KV plenty of time (left it overnight) and also tried to warm it up using a bunch of different requests to different regions. The read performance is still quite disappointing even when everything’s warmed up and replicated. Thanks for your reply.
But even that is not a fair comparison: if you use the App Router, you're comparing RSC SSR vs. SSR.
In theory it should, but it will depend on how they implement it. Regarding KV, you should read this: https://developers.cloudflare.com/kv/concepts/how-kv-works/
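The read behaviour the linked doc describes (and why the cold-start numbers above look bad) can be sketched like this. This is a hypothetical illustration of the general pattern, not Cloudflare's implementation; `kvGet`, `regionalCache`, and `centralStore` are made-up names.

```typescript
// Hypothetical sketch of a KV-style read path: a cold read goes to the
// central store (slow, cross-region); the value is then cached regionally
// so subsequent reads from the same region are fast.

const regionalCache = new Map<string, string>();
const centralStore = new Map<string, string>([["page:/", "<html>home</html>"]]);

export function kvGet(key: string): {
  value?: string;
  source: "regional-cache" | "central-store";
} {
  const cached = regionalCache.get(key);
  if (cached !== undefined) return { value: cached, source: "regional-cache" };
  const value = centralStore.get(key);
  if (value !== undefined) regionalCache.set(key, value); // warm this region
  return { value, source: "central-store" };
}
```

Under this model, a benchmark that only ever issues one request per region measures the slow central-store path every time, which matches the cold-start caveat above.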
Describe the bug
I have all the caching functionality enabled, but TTFB still ranges from 1.2s to almost 3 seconds!
Most of my pages are statically rendered. My landing page reads the 'CF-IPCountry' header to localise pricing to the customer's IP address. I don't have any middleware.
CPU & wall times vary massively (I'd assume some pages need to be rendered, while others can be pulled from cache?)
Unfortunately this makes this library almost completely unusable for almost all websites.
Am I doing something wrong? 3 seconds of TTFB will cause customers to stop navigating to the site.
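For context, the pricing localisation described above could look something like the sketch below. The mapping and function name are made up; in a Next.js route handler the country would come from `request.headers.get("CF-IPCountry")`. Note that reading request headers opts a page out of full static rendering, which may be relevant to which pages get cached.

```typescript
// Hypothetical sketch: mapping the CF-IPCountry header value to a currency.
// The table and names are illustrative, not the reporter's actual code.

const CURRENCY_BY_COUNTRY: Record<string, string> = {
  GB: "GBP",
  DE: "EUR",
  FR: "EUR",
  US: "USD",
};

export function currencyForCountry(country: string | null): string {
  // Fall back to USD for unknown or missing country codes.
  return (country && CURRENCY_BY_COUNTRY[country]) || "USD";
}

// In a route handler (sketch):
// export async function GET(request: Request) {
//   const country = request.headers.get("CF-IPCountry");
//   return Response.json({ currency: currencyForCountry(country) });
// }
```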
Steps to reproduce
Almost any @opennextjs/cloudflare deployment.
Expected behavior
Pages not to take 1-3 seconds to send first byte.
@opennextjs/cloudflare version
1.0.1
Wrangler version
^4.14.1
next info output
Additional context
open-next.config.ts:
The https://vercel-commerce-on-workers.web-experiments.workers.dev/ deployment's TTFB seems to range from 500-800ms, which is much more reasonable. Is anyone able to link to its repo so I can compare my configs/versions? I can't find it.