Fractal Patterns May Unravel The Intelligence in Next-Token Prediction
per-byte (BPB). Specifically, we introduce a new metric averaging H with 1/BPB and show that using it to predict downstream performance can increase the adjusted R² from approximately 0.65 when using solely BPB, to over 0.86 with the new metric. We do not observe improvements when predicting rankings, however.

Statement of Contribution. In summary, we:

1. highlight how the fractal structure of language can offer a unique perspective on the intelligent behavior exhibited by LLMs, and provide a precise formalism to quantify properties, such as long-range dependence.

2. establish that language is self-similar and long-range dependent. We provide concrete estimates in language of the three parameters: the self-similarity (Hölder) exponent, the Hurst parameter, and the fractal dimension. We also estimate the related Joseph exponent.

3. carry out a comparative study across different model architectures and scales, and different domains, such as ArXiv, GitHub, and Wikipedia, among others.

4. connect fractal patterns with learning. Notably, we show that a “median” Hurst exponent (defined in Section 3) improves upon perplexity-based bits-per-byte (BPB) in predicting downstream performance.

2. Fractal Structure of Language

2.1. Preliminaries

Suppose we have a discrete-time, stationary stochastic process (xt)t∈N, with E[xt] = 0 and E[xt²] = 1. We will refer to (xt)t∈N as the increment process to distinguish it from the integral process (Xt)t∈N defined by Xt = x0 + x1 + · · · + xt. While (xt)t∈N and (Xt)t∈N are merely different representations of the same data, it is useful to keep both representations in mind. For example, self-similarity is typically studied in the context of integral processes, whereas long-range dependence (LRD) is defined on increment processes.

In the literature, it is not uncommon to mistakenly equate parameters that are generally different. For example, the Hurst parameter has had many different definitions in the past that were not equivalent, and Mandelbrot himself cautioned against this (Mandelbrot, 2002). The reason is that different parameters can agree in the idealized fractional Brownian motion setting, leading some researchers to equate them in general (Watkins, 2019). We will keep the self-similarity exponent S and the Hurst parameter H separate in our discussion.

Experimental Setup. In order to establish self-similarity and LRD in language, we convert texts into sequences of bits using a large language model (LLM). Specifically, we use PaLM2-L (Unicorn) (Anil et al., 2023b) to calculate the probability of the next token wt conditioned on its entire prefix w[t−1] = (w0, w1, . . . , wt−1). By the chain rule (Cover, 1999), the corresponding number of bits assigned to wt is zt = − log p(wt | w[t−1]). Unlike in prior works, which rely on simplifications such as substituting a word with its length (Ausloos, 2012) or focusing on the recurrence of a single word (Najafi & Darooneh, 2015; Altmann et al., 2012), we use the LLM to approximate the full joint distribution of language. We carry out these calculations for prefixes of up to 2048 tokens (≈ 8 pages of text). Since language is a stochastic process, the sequence of bits of each token conditioned on its past converges asymptotically to the average number of bits required to encode the entire sequence (Cover, 1999). Hence, a suitable normalization, such as bits-per-byte (BPB), results in a standardized description of text, consistent across tokenizers. BPB is widely used as a tokenizer-agnostic metric to compare language modeling performance, e.g. for The Pile (Gao et al., 2020). Besides PaLM2, we also experiment and report on various model sizes of PaLM (Chowdhery et al., 2022) and decoder-only T5 (Raffel et al., 2019). Namely, we report results for models: PaLM2 XXS (Gecko), XS (Otter), S (Bison), M, and L (Unicorn); PaLM 8B, 62B, 540B; and decoder-only T5.1.1 at Base (110M), Large (341M), XL (1.2B), and XXL (5B) sizes. For PaLM and PaLM2, we use the checkpoints pretrained in Chowdhery et al. (2022) and Anil et al. (2023b). All T5.1.1 decoder baselines, on the other hand, are trained with a causal language modeling objective for 262B tokens of C4 (Raffel et al., 2019). More details on how we train our T5.1.1 baselines can be found in Appendix A.

To rely on an LLM for such analysis, it must provide probability scores that are reasonably well-calibrated. Generally, LLMs are known to produce calibrated probability scores at the token level (Kadavath et al., 2022). In Figure 3, we reconfirm this by comparing the logits, − log p(word), predicted by one of the small language models we use in our study (PaLM-8B) with the actual log probabilities derived from the Google Web Trillion Word Corpus (Brants & Franz, 2006) based on word frequencies. We use histogram binning (by grouping similar logits together) and plot their averaged actual log probabilities, similar to how the expected calibration error (ECE) is calculated (Guo et al., 2017). Notably, we find a strong agreement for the most frequently occurring words, i.e., when the word probability exceeds p ≫ 10⁻⁹.

Once zt is computed for a document, we construct the increment process (xt)t∈N by normalizing zt to have a zero mean and unit variance. The integral process (Xt)t∈N is calculated based on (xt)t∈N, as described earlier and depicted in Figure 1 (top). Normalizing bits (to have zero mean and unit variance) models language as a random walk.
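To make this construction concrete, the following is a minimal sketch of turning a sequence of per-token log-probabilities into the increment and integral processes. It is an illustration rather than our actual pipeline; the `token_logprobs` input is assumed to come from whichever LLM is used, and the base-2 conversion simply expresses the losses in bits.

```python
import numpy as np

def increment_and_integral(token_logprobs):
    """Build the increment process (x_t) and the integral process (X_t)
    from per-token log-probabilities log p(w_t | w_{<t})."""
    logp = np.asarray(token_logprobs, dtype=np.float64)
    z = -logp / np.log(2.0)        # bits assigned to each token: z_t = -log2 p(w_t | w_{<t})
    x = (z - z.mean()) / z.std()   # normalize to zero mean and unit variance
    X = np.cumsum(x)               # integral process: X_t = x_0 + x_1 + ... + x_t
    return x, X
```

By construction, (xt) has (empirically) zero mean and unit variance, and (Xt) is the random walk depicted in Figure 1 (top).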
Figure 2. Peak probability pϵ(τ) is plotted against the granularity level τ (see Section 2.2). We observe a power law pϵ(τ) ∼ τ^{−S} in all domains, indicating a self-similar structure, with a median self-similarity exponent of S = 0.59 ± 0.08.

2.2. Self-similarity exponent
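As a rough illustration of the power law reported in Figure 2, the sketch below estimates a self-similarity exponent S from an integral process. It assumes one plausible reading of the peak probability: pϵ(τ) is taken to be the fraction of τ-increments Xt+τ − Xt that fall inside a small interval [−ϵ, ϵ]; the grid of τ values and the threshold ϵ are illustrative choices, not the exact settings used for Figure 2.

```python
import numpy as np

def self_similarity_exponent(X, taus=(1, 2, 4, 8, 16, 32, 64, 128), eps=0.5):
    """Estimate S from the power law p_eps(tau) ~ tau^(-S)."""
    X = np.asarray(X, dtype=np.float64)
    peak_probs = []
    for tau in taus:
        inc = X[tau:] - X[:-tau]                        # tau-increments X_{t+tau} - X_t
        peak_probs.append(np.mean(np.abs(inc) <= eps))  # mass near zero ("peak probability")
    # Fit log p_eps(tau) = -S log tau + c; the slope of the fit is -S.
    # (Assumes every p_eps(tau) is positive.)
    slope, _ = np.polyfit(np.log(np.asarray(taus, float)),
                          np.log(np.asarray(peak_probs)), 1)
    return -slope
```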
Figure 4. Rescaled range R(n)/S(n) is plotted against the number of normalized bits n. We observe a power law R(n)/S(n) ∼ n^H in all domains. When aggregating all datasets, H = 0.70 ± 0.01, indicating long-range dependence (LRD).
estimated by fitting a power law relation R(n)/S(n) ∼ n^H. As stated earlier, for completely random processes, such as a simple Brownian motion, it can be shown that H = 1/2. In addition, H > 1/2 implies dependence over time (Crovella & Bestavros, 1995; Willinger et al., 1995; Aref, 1998).

Writing ρn = E[x_{t+n} x_t] for the autocovariance function of the increment process (xt)t∈N, the Hurst parameter satisfies H = 1 − β/2 when ρn ∼ n^{−β} as n → ∞ (Gneiting & Schlather, 2004; Crovella & Bestavros, 1995). Since in self-similar processes, H > 1/2 implies long-range dependence (LRD), LRD is equivalent to the condition that the autocovariances are not summable. In terms of the integral process, it can be shown that (Samorodnitsky, 2006):

    lim_{n→∞} Var(Xn)/n = 1 + 2 Σ_{i=1}^{∞} ρi.        (1)

Hence, if H < 1/2, the autocovariances are summable and Var(Xn) grows, at most, linearly in n. On the other hand, if the process has LRD, Var(Xn) grows superlinearly in n. In particular, using the Euler-Maclaurin summation formula (Apostol, 1999; Alabdulmohsin, 2018), one obtains Var(Xn) ∼ n^{2H} if H > 1/2. Figure 4 plots the rescaled range R(n)/S(n) against n. We observe a power law relation with a median Hurst parameter of H = 0.70 ± 0.09.
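For reference, a bare-bones version of the classical rescaled-range estimate behind Figure 4 is sketched below. The window sizes and the plain least-squares fit on a log-log scale are illustrative assumptions, not necessarily the exact estimator settings used in our experiments.

```python
import numpy as np

def hurst_rescaled_range(x, window_sizes=(16, 32, 64, 128, 256, 512)):
    """Estimate the Hurst parameter H by fitting R(n)/S(n) ~ n^H."""
    x = np.asarray(x, dtype=np.float64)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs = []
        for start in range(0, len(x) - n + 1, n):   # non-overlapping windows of length n
            w = x[start:start + n]
            y = np.cumsum(w - w.mean())             # mean-adjusted partial sums
            r = y.max() - y.min()                   # range R(n)
            s = w.std()                             # standard deviation S(n)
            if s > 0:
                rs.append(r / s)
        if rs:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs)))
    slope, _ = np.polyfit(log_n, log_rs, 1)         # slope of the log-log fit is H
    return slope
```

For an i.i.d. increment process the estimate is close to 1/2, while values clearly above 1/2, as in Figure 4, indicate long-range dependence.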
2.4. Fractal dimension

Broadly speaking, the fractal dimension of an object describes its local complexity. For a geometric object Z, such as the Koch curve, let τ be a chosen scale (e.g. a short ruler for measuring lengths or a small square for areas). Let N(τ) be the minimum number of objects of scale τ that cover Z. Then, the fractal dimension of Z, also called its Hausdorff dimension, is D = −lim_{τ→0} log N(τ)/log τ (Pilgrim & Taylor, 2018). For example, a line has a fractal dimension 1, in agreement with its topological dimension, because N(τ) = C/τ for some constant C > 0.

By convention, an object is referred to as “fractal” if D is different from its topological dimension. For example, the fractal dimension of the Koch curve is about 1.26 while its topological dimension is 1. Fractals explain some puzzling observations, such as why estimates of the length of the coast of Britain varied significantly from one study to another, because lengths in fractals are scale-sensitive. Mandelbrot estimated the fractal dimension of the coast of Britain to be 1.25 (Mandelbrot, 1967).

The definition above for the fractal dimension D applies to geometric shapes, but an analogous definition has been introduced for stochastic processes. Let (xt)t∈R be a stationary process with autocovariance ρn. Then, its fractal dimension D is determined according to the local behavior of ρn in the vicinity of n = 0: after first normalizing (xt)t∈R to have a zero mean and a unit variance, ρn is modeled using a power law ρn ∼ 1 − n^α as n → 0⁺, for α ∈ (0, 2]. Then, the fractal dimension D ∈ [1, 2] of (xt)t∈R is defined by D = 2 − α/2 (Gneiting & Schlather, 2004). A value D ≫ 1 indicates a significant fractal structure.

It can be shown that D = 2 − S, where S is the self-similarity exponent (Gneiting & Schlather, 2004). For language, this gives a median fractal dimension of D = 1.41 ± 0.08.

2.5. Joseph effect

Next, we examine another related parameter that is commonly studied in self-similar processes. The motivation behind it comes from the fact that in processes with LRD, one often observes burstiness as shown in Figure 1; i.e. clusters over time in which the process fully resides on one side of the mean, before switching to the other. This is quite unlike random noise, for instance, where measurements are evenly distributed on both sides of the mean. The effect is often referred to as the Joseph effect, named after the biblical story of the seven fat years and seven lean years (Willinger et al., 1995; Mandelbrot & Wallis, 1968; Watkins, 2019).
Table 1. A comparison of the fractal parameters across 8 different domains with > 1000 documents each in The Pile benchmark (see
Section 2.1 for selection criteria). DM-Mathematics is markedly different because each document consists of questions, with no LRD.
Table 2. A comparison of the estimated median fractal parameters by various LLMs over the entire Pile validation split. Estimates are
generally robust to the choice of the LLM, but the tiny variations in median H reflect improvements in the model quality. See Section 3.
A common way to quantify the Joseph effect for integral processes (Xt)t∈N is as follows (Watkins, 2019). First, let στ be the standard deviation of the τ-increments Xt+τ − Xt. Then, fit a power law relation στ ∼ τ^J. The exponent J here is called the Joseph exponent. In an idealized fractional Brownian motion, both J and the self-similarity exponent S coincide. Figure 5 provides the detailed empirical results. Overall, we obtain an estimate of J = 0.49 ± 0.08, which is intriguing because J = 0.5 corresponds to self-similar processes with independent increments.
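A minimal version of this two-step procedure is sketched below; the grid of τ values and the log-log least-squares fit are illustrative assumptions.

```python
import numpy as np

def joseph_exponent(X, taus=(1, 2, 4, 8, 16, 32, 64, 128)):
    """Estimate J by fitting std(X_{t+tau} - X_t) ~ tau^J."""
    X = np.asarray(X, dtype=np.float64)
    sigmas = [np.std(X[tau:] - X[:-tau]) for tau in taus]   # sigma_tau at each scale
    slope, _ = np.polyfit(np.log(np.asarray(taus, float)), np.log(sigmas), 1)
    return slope
```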
3. Analysis

Comparative Analysis. Table 1 compares the estimated fractal parameters across different domains, such as ArXiv, GitHub and Wikipedia. In general, most domains share similar self-similarity and Hurst exponents with a few exceptions. The first notable exception is DM-Mathematics, which has a Hurst parameter of about 0.5. To recall, a value of H = 0.5 indicates that the data does not exhibit long-range dependence (LRD). Upon closer inspection, however, a value of H = 0.5 is not surprising for DM-Mathematics because its documents consist of independent mathematical questions, as shown in Figure 7. The second notable observation is the relatively larger value of H = 0.79 in GitHub, indicating more structure in code. This is in agreement with earlier findings by Kokol & Podgorelec (2000), who estimated LRD in computer languages to be greater than in natural language.

In Table 2, we compare the three fractal parameters S, H and J using different families of LLMs and different model sizes. Overall, we observe that the estimated parameters are generally robust to the choice of the architecture.

Downstream Performance. By definition, fractal parameters are calculated on the sequence of log-perplexity scores after normalizing them to zero mean and unit variance. Hence, they may offer an assessment of downstream performance that improves upon using a perplexity-based metric like bits-per-byte (BPB) alone.

To test this hypothesis, we evaluate the 12 models in Table 2 on challenging downstream zero- and few-shot benchmarks focusing on language understanding and reasoning. We include results for 0-shot (0S) and 3-shot (3S) evaluation for BIG-Bench Hard tasks (Srivastava et al., 2022; Suzgun et al., 2022), reporting both direct and chain-of-thought (CoT) prompting results following Chung et al. (2022). In addition, we report 0-shot and 5-shot (5S) MMLU (Hendrycks et al., 2020), and 8-shot (8S) GSM8K (Cobbe et al., 2021) with CoT. Raw accuracy is reported for all tasks. BBH and MMLU scores are averaged across all 21 tasks and 57 subjects, respectively. All prompt templates for our evaluation are taken from Chung et al. (2022); Longpre et al. (2023), which we refer the reader to for more details. We prompt all models using a 2048 context length. See Table 9 of Appendix C for the full results.

The first (surprising) observation is that the median Hurst parameter is itself strongly correlated with the BPB scores, with an absolute Pearson correlation coefficient of 0.83, even though the Hurst exponent is calculated after normalizing all token losses to zero mean and unit variance! Informally, this implies that second-order statistics on the sequence of token losses of a particular model can predict its mean! The self-similarity exponent, by contrast, has an absolute Pearson correlation of 0.23 with BPB.

Figure 6 displays downstream performance against both the median Hurst exponent and the median BPB score, where median values are calculated on the 8 domains in The Pile benchmark listed in Table 1. In general, both the BPB score and the median Hurst are good predictors of downstream performance. However, we observe that improvements in BPB alone, without impacting the median Hurst exponent, do not directly translate into improvements downstream.
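The kind of comparison summarized in Figure 6 (and, later, in the adjusted-R² tables) can be reproduced in spirit with an ordinary least-squares fit. The sketch below is illustrative only: `bpb`, `hurst`, and `downstream` are assumed to be per-model arrays (one entry per evaluated model), and the adjusted-R² formula is the standard one.

```python
import numpy as np

def adjusted_r2(features, target):
    """Adjusted R^2 of a least-squares fit of `target` on `features`.
    `features` has shape (n_models, n_features); `target` has shape (n_models,)."""
    X = np.asarray(features, dtype=np.float64)
    y = np.asarray(target, dtype=np.float64)
    n, k = X.shape
    A = np.hstack([X, np.ones((n, 1))])        # add an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# Hypothetical usage: BPB alone vs. BPB combined with the median Hurst exponent.
# adjusted_r2(bpb.reshape(-1, 1), downstream)
# adjusted_r2(np.column_stack([bpb, hurst]), downstream)
```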
Figure 5. The standard deviation σ of the τ-increments Xt+τ − Xt is plotted against the scale τ. We again observe a power law relation σ ∼ τ^J, with a Joseph exponent J = 0.49 ± 0.08.
Table 3. Predicting downstream performance from BPB, the median Hurst exponent (H), and the combined metric (HB): magnitude (left block) and ranking (right block).

                        Magnitude                 Ranking
                     BPB     H      HB         BPB     HB
0S BBH Direct       0.785  0.841  0.883       0.958  0.958
0S MMLU             0.653  0.831  0.825       0.769  0.769
0S BBH+MMLU         0.685  0.849  0.852       0.930  0.930
3S BBH Direct       0.767  0.895  0.926       1.000  1.000
3S BBH CoT          0.881  0.892  0.979       1.000  1.000
5S MMLU             0.660  0.853  0.832       0.783  0.783
8S GSM8K CoT        0.654  0.867  0.851       0.993  0.993
FS BBH+MMLU+GSM8K   0.717  0.890  0.891       1.000  1.000

Table 4. Downstream results for the T5 context-length ablation (sequence lengths 2K, 4K, and 8K; see Appendix A).

                      2K      4K      8K
0S BBH Direct        1.81    1.68    1.76
0S MMLU             25.73   26.04   25.81
0S BBH+MMLU         13.39   13.49   13.42
3S BBH Direct       21.35   24.76   23.14
3S BBH CoT          16.87   12.21    7.14
5S MMLU             26.57   26.69   27.07
8S GSM8K CoT         1.06    1.21    1.74
FS BBH+MMLU+GSM8K   15.58   15.46   14.65
Figure 6. Downstream metric, indicated by bubble size, is plotted vs. the median Hurst and the median BPB for all 12 language models. (Panels: 0S BBH Direct, 0S MMLU, 3S BBH Direct, 3S BBH CoT, 5S MMLU, 8S GSM8K CoT, 0S BBH+MMLU, FS BBH+MMLU+GSM8K; each panel plots median BPB against median Hurst.)
Document I: What is the square root of 211269 to the nearest integer? 460. What is the square root of 645374 to the nearest integer? 803...
Document II: Suppose 5*l = r - 35, -2*r + 5*l - 15 = -70. Is r a multiple of 4? True. Suppose 2*l + 11 - 1 = 0. Does 15 divide (-2)/l - 118/(-5)? False...

Figure 7. Two examples of documents from the DM-Mathematics subset of The Pile benchmark (Gao et al., 2020). Each document comprises multiple independent questions. The lack of LRD in this data is reflected in its Hurst parameter of H = 0.50 ± 0.01.

in natural language, and suggest that its LRD is close to that of pure noise! They conjecture this was due to the use of ASCII encoding. In computer languages, they observe LRD and suggest this is because computer languages are formal.

Besides the above concerns in prior studies that examined the self-similar structure in language, another concern is that they sometimes give extremely large values of the fractal dimension, sometimes even exceeding 10 (Andres, 2009). Such values are difficult to interpret because classical definitions of the fractal dimension restrict its value to the range [1, 2] for time series. We do not observe such issues in our analysis. In our case, D = 1.41 ± 0.08.
6. Acknowledgement

The authors would like to thank Justin Gilmer and Olivier Bousquet for their feedback on earlier drafts of this manuscript, and both Google DeepMind and Google Research teams at large for the insightful discussions and for providing a supportive research environment.

7. Potential Broader Impact

This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

References

Abry, P., Gonçalvés, P., and Flandrin, P. Wavelets, spectrum analysis and 1/f processes. Wavelets and statistics, pp. 15–29, 1995.

Alabdulmohsin, I. M. Summability calculus: A comprehensive theory of fractional finite sums. Springer, 2018.

Altmann, E. G., Cristadoro, G., and Esposti, M. D. On the origin of long-range correlations in texts. Proceedings of the National Academy of Sciences, 109(29):11582–11587, 2012.

Andres, J. On de Saussure’s principle of linearity and visualization of language structures. Glottotheory, 2(2):1–14, 2009.

Anil, R., Borgeaud, S., Wu, Y., Alayrac, J.-B., Yu, J., Soricut, R., Schalkwyk, J., Dai, A. M., Hauth, A., Millican, K., Silver, D., Petrov, S., Johnson, M., Antonoglou, I., Schrittwieser, J., Glaese, A., Chen, J., Pitler, E., et al. Gemini: A family of highly capable multimodal models. arXiv:2312.11805v1 [cs.CL], 2023a.

Anil, R., Dai, A. M., Firat, O., Johnson, M., Lepikhin, D., Passos, A., Shakeri, S., Taropa, E., Bailey, P., Chen, Z., et al. PaLM 2 technical report. arXiv:2305.10403v3 [cs.CL], 2023b.

Apostol, T. M. An elementary view of Euler’s summation formula. The American Mathematical Monthly, 106(5):409–418, 1999.

Aref, S. Hurst phenomenon and fractal dimensions in long-term yield data. In Conference on Applied Statistics in Agriculture, 1998.

Ausloos, M. Generalized Hurst exponent and multifractal function of original and translated texts mapped into frequency and length time series. Physical Review E, 86(3):031108, 2012.

Bradbury, J., Frostig, R., Hawkins, P., Johnson, M. J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., and Zhang, Q. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.

Brants, T. and Franz, A. Web 1T 5-gram Version 1, 2006. URL https://catalog.ldc.upenn.edu/LDC2006T13. Web Download. Philadelphia: Linguistic Data Consortium.

Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T., and Zhang, Y. Sparks of artificial general intelligence: Early experiments with GPT-4, 2023.

Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.

Chung, H. W., Hou, L., Longpre, S., Zoph, B., Tay, Y., Fedus, W., et al. Scaling instruction-finetuned language models. arXiv:2210.11416v5 [cs.LG], 2022.

Cobbe, K., Kosaraju, V., Bavarian, M., Hilton, J., Nakano, R., Hesse, C., and Schulman, J. Training verifiers to solve math word problems. arXiv:2110.14168v2 [cs.LG], 2021.

Cover, T. M. Elements of information theory. John Wiley & Sons, 1999.
Crovella, M. E. and Bestavros, A. Explaining world wide web traffic self-similarity. Technical report, Boston University Computer Science Department, 1995.

Efron, B. and Tibshirani, R. J. An introduction to the bootstrap. CRC press, 1994.

Eftekhari, A. Fractal geometry of texts: An initial application to the works of Shakespeare. Journal of Quantitative Linguistics, 13(2-3):177–193, 2006. doi: 10.1080/09296170600850106.

Embrechts, P. and Maejima, M. An introduction to the theory of self-similar stochastic processes. International Journal of Modern Physics B, 14(12n13):1399–1420, 2000.

Feller, W. The asymptotic distribution of the range of sums of independent random variables. The Annals of Mathematical Statistics, 22(3):427–432, 1951. doi: 10.1214/aoms/1177729589. URL https://doi.org/10.1214/aoms/1177729589.

Gao, L., Biderman, S., Black, S., Golding, L., Hoppe, T., Foster, C., Phang, J., He, H., Thite, A., Nabeshima, N., Presser, S., and Leahy, C. The Pile: An 800GB dataset of diverse text for language modeling. arXiv:2101.00027v1 [cs.CL], 2020.

Geweke, J. and Porter-Hudak, S. The estimation and application of long memory time series models. Journal of Time Series Analysis, 4(4):221–238, 1983.

Gneiting, T. and Schlather, M. Stochastic models that separate fractal dimension and the Hurst effect. SIAM Review, 46(2):269–282, 2004. doi: 10.1137/s0036144501394387.

Goldberger, A. L., Amaral, L. A., Hausdorff, J. M., Ivanov, P. C., Peng, C.-K., and Stanley, H. E. Fractal dynamics in physiology: alterations with disease and aging. Proceedings of the National Academy of Sciences, 99(suppl 1):2466–2472, 2002.

Guo, C., Pleiss, G., Sun, Y., and Weinberger, K. Q. On calibration of modern neural networks. In ICML. PMLR, 2017.

Heaps, H. S. Information retrieval, computational and theoretical aspects. Academic Press, 1978.

Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., and Steinhardt, J. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.

Hurst, H. E. Long-term storage capacity of reservoirs. Transactions of the American Society of Civil Engineers, 116(1):770–799, 1951.

Jouppi, N. P., Yoon, D. H., Kurian, G., Li, S., Patil, N., Laudon, J., Young, C., and Patterson, D. A domain-specific supercomputer for training deep neural networks. Communications of the ACM, 63(7):67–78, 2020.

Kadavath, S., Conerly, T., Askell, A., Henighan, T., Drain, D., Perez, E., Schiefer, N., Hatfield-Dodds, Z., DasSarma, N., Tran-Johnson, E., et al. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221, 2022.

Kidd, C. and Hayden, B. Y. The psychology and neuroscience of curiosity. Neuron, 88(3):449–460, 2015.

Kokol, P. and Podgorelec, V. Complexity and human writings. Complexity, 7:1–6, 2000.

Kolmogorov, A. N. Wienersche spiralen und einige andere interessante kurven in hilbertscen raum, cr (doklady). Acad. Sci. URSS (NS), 26:115–118, 1940.

Leland, W. E. and Wilson, D. V. High time-resolution measurement and analysis of LAN traffic: Implications for LAN interconnection. In IEEE INFOCOM, 1991.

Leland, W. E., Taqqu, M. S., Willinger, W., and Wilson, D. V. On the self-similar nature of Ethernet traffic. IEEE/ACM Transactions on Networking, 2(1):1–15, 1994.

Longpre, S., Hou, L., Vu, T., Webson, A., Chung, H. W., Tay, Y., Zhou, D., Le, Q. V., Zoph, B., Wei, J., and Roberts, A. The Flan collection: designing data and methods for effective instruction tuning. In Proceedings of the 40th International Conference on Machine Learning, ICML’23. JMLR.org, 2023.

Mandelbrot, B. How long is the coast of Britain? Statistical self-similarity and fractional dimension. Science, 156(3775):636–638, 1967.

Mandelbrot, B. Gaussian self-affinity and fractals: globality, the earth, 1/f noise, and R/S. Springer Science and Business Media, 2002.

Mandelbrot, B. B. The fractal geometry of nature. WH Freeman New York, 1982.

Mandelbrot, B. B. and Wallis, J. R. Noah, Joseph, and operational hydrology. Water Resources Research, 4(5):909–918, 1968.

Montemurro, M. A. and Pury, P. A. Long-range fractal correlations in literary corpora. Fractals, 10(04):451–461, 2002.

Najafi, E. and Darooneh, A. H. The fractal patterns of words in a text: a method for automatic keyword extraction. PloS one, 10(6):e0130617, 2015.
OpenAI. GPT-4 technical report. arXiv:2303.08774v4 [cs.CL], 2023.

Paxson, V. and Floyd, S. Wide area traffic: the failure of Poisson modeling. IEEE/ACM Transactions on Networking, 3(3):226–244, 1995.

Peng, C.-K., Buldyrev, S. V., Goldberger, A. L., Havlin, S., Sciortino, F., Simons, M., and Stanley, H. E. Long-range correlations in nucleotide sequences. Nature, 356(6365):168–170, 1992.

Pilgrim, I. and Taylor, R. P. Fractal analysis of time-series data sets: Methods and challenges. In Ouadfeul, S.-A. (ed.), Fractal Analysis, chapter 2. IntechOpen, Rijeka, 2018. doi: 10.5772/intechopen.81958. URL https://doi.org/10.5772/intechopen.81958.

Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv:1910.10683v4 [cs.LG], 2019.

Roberts, A., Chung, H. W., Levskaya, A., Mishra, G., Bradbury, J., Andor, D., Narang, S., Lester, B., Gaffney, C., Mohiuddin, A., Hawthorne, C., Lewkowycz, A., Salcianu, A., van Zee, M., Austin, J., Goodman, S., Soares, L. B., Hu, H., Tsvyashchenko, S., Chowdhery, A., Bastings, J., Bulian, J., Garcia, X., Ni, J., Chen, A., Kenealy, K., Clark, J. H., Lee, S., Garrette, D., Lee-Thorp, J., Raffel, C., Shazeer, N., Ritter, M., Bosma, M., Passos, A., Maitin-Shepard, J., Fiedel, N., Omernick, M., Saeta, B., Sepassi, R., Spiridonov, A., Newlan, J., and Gesmundo, A. Scaling up models and data with t5x and seqio, 2022. URL https://arxiv.org/abs/2203.17189.

Roche, S., Bicout, D., Maciá, E., and Kats, E. Long range correlations in DNA: scaling properties and charge transfer efficiency. Physical Review Letters, 91(22):228101, 2003.

Soboleva, D., Al-Khateeb, F., Myers, R., Steeves, J. R., Hestness, J., and Dey, N. SlimPajama: A 627B token cleaned and deduplicated version of RedPajama, June 2023. URL https://huggingface.co/datasets/cerebras/SlimPajama-627B.

Srivastava, A., Rastogi, A., Rao, A., Shoeb, A. A. M., Abid, A., Fisch, A., Brown, A. R., Santoro, A., Gupta, A., Garriga-Alonso, A., et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022.

Suzgun, M., Scales, N., Scharli, N., Gehrmann, S., Tay, Y., Chung, H. W., Chowdhery, A., Le, Q. V., Chi, E. H., Zhou, D., and Wei, J. Challenging BIG-Bench tasks and whether chain-of-thought can solve them. arXiv:2210.09261v1 [cs.CL], 2022.

Watkins, N. Mandelbrot’s stochastic time series models. Earth and Space Science, 6(11):2044–2056, 2019.

Willinger, W., Taqqu, M. S., Leland, W. E., and Wilson, D. V. Self-similarity in high-speed packet traffic: analysis and modeling of Ethernet traffic measurements. Statistical Science, pp. 67–85, 1995.

Willinger, W., Taqqu, M. S., Sherman, R., and Wilson, D. V. Self-similarity through high-variability: statistical analysis of Ethernet LAN traffic at the source level. IEEE/ACM Transactions on Networking, 5(1):71–86, 1997.
A. Experiment Details
All of our experiments are conducted in JAX/Flax (Bradbury et al., 2018) using the open source T5X framework (Roberts
et al., 2022).
T5 baselines in Tables 2 and 3 are pretrained from scratch using the open source T5.1.1 decoder-only architecture from the T5X library.¹ We pretrain using a causal language modeling objective over the C4 corpus with the default T5 vocabulary as per Raffel et al. (2019). Training is done for 500k steps with a sequence length of 1024 and batch size of 512, resulting in a total of 262B tokens seen during pretraining. We optimize our models with the Adafactor (Shazeer & Stern, 2018) optimizer, using an inverse square root learning rate schedule, 1k warmup steps, and an initial learning rate of 1e-2. Models are trained using 256 TPUv5e chips (Jouppi et al., 2020).
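For concreteness, one common parameterization of such a schedule is sketched below: the learning rate is held at its initial value for the warmup period and then decays proportionally to the inverse square root of the step count. The exact T5X/Adafactor configuration may parameterize this differently.

```python
import math

def inverse_sqrt_lr(step, warmup_steps=1000, init_lr=1e-2):
    """Hold init_lr during warmup, then decay it as 1/sqrt(step),
    matching the value at the end of warmup."""
    step = max(step, 1)
    if step <= warmup_steps:
        return init_lr
    return init_lr * math.sqrt(warmup_steps / step)
```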
T5 context length ablation experiments in Table 4 are trained with the same pretraining objective but over the SlimPajama-627B corpus (Soboleva et al., 2023), using a modified version of the T5 vocabulary that preserves whitespace and introduces byte-fallback for out-of-vocabulary tokens. This is similar to Chowdhery et al. (2022), but preserving the original T5 vocabulary. Models with sequence lengths 2048, 4096, and 8192 are trained with batch sizes of 512, 256, and 128, respectively, to preserve the number of tokens seen per batch and the overall number of training steps. We train all models for 100k steps, using the same learning rate schedule described above. Hence, all models observe 100B tokens.
¹ https://github.com/google-research/t5x/tree/main/t5x/examples/decoder_only/models
B. Full Results
In this section, we provide the full list of parameters calculated for each combination of LLM and domain. We use
bootstrapping (Efron & Tibshirani, 1994) to estimate the error margin.
Table 5. Log-perplexity (NLL) scores evaluated on the first 2048 tokens, after trimming the first 100 tokens, of documents belonging to
each of the shown domains. Only documents with a minimum length of 4K tokens are used.
Table 6. Self-similarity exponent S evaluated on the first 2048 tokens, after trimming the first 100 tokens, of documents belonging to each
of the shown domains. Only documents with a minimum length of 4K tokens are used.
Table 7. Hurst exponent H evaluated on the first 2048 tokens, after trimming the first 100 tokens, of documents belonging to each of the
shown domains. Only documents with a minimum length of 4K tokens are used.
Table 8. Joseph exponent J evaluated on the first 2048 tokens, after trimming the first 100 tokens, of documents belonging to each of the
shown domains. Only documents with a minimum length of 4K tokens are used.
Table 9. Full downstream few-shot evaluation results compared to upstream BPB. Here, BPB is computed over The Pile validation split
using the first 2048 tokens of every document. All evaluation results are reported as raw (un-normalized) accuracy.
Please note that our results are not directly comparable to all previously published results for the same models; please cite the original results from (Chowdhery et al., 2022; Anil et al., 2023b). Here, we only aim for a fair comparison between models: only pretrained models without instruction tuning are used, we do not optimize any prompts for each model, and we evaluate all models using only a 2K sequence length.
Table 10. Adjusted R², which measures the proportion of variation in downstream performance (row) that is predictable from the given input(s) (column) using a trained linear regressor. Unlike with the median Hurst exponent, we do not observe any improvement when combining BPB scores with the self-similarity exponent S or the Joseph exponent J.