Hierarchical Calculus Numerics — error comparison (explicit \(D_r^n\))

Numerics & Error Comparison

Instead of approximating \(f(x)\) directly (rank 0), approximate it in a relative space (rank 1) or a higher log–log space (rank 2), then transform back. This often improves numerical stability for large values and multi-scale growth.
Notation rule: derivatives are written with explicit rank and degree: \(\;D_r^n\). On this page we mainly use degree \(1\): \(\;D_0^1, D_1^1, D_2^1\).

Concept DOI: 10.5281/zenodo.17917302

1) Local approximation methods (around a base point \(x_0\))

We compare three first-order local models, each obtained by holding the appropriate degree-\(1\) derivative locally constant.

Rank 0 (D0) First-order Taylor model (additive)

\[ f(x)\approx f(x_0)+\left(D_{0}^{1}f\right)(x_0)\,(x-x_0). \]

Rank 1 (D1) Relative model (multiplicative)

\[ D_{1}^{1}f=\frac{d\ln f}{d\ln x} \quad\Rightarrow\quad f(x)\approx f(x_0)\left(\frac{x}{x_0}\right)^{\left(D_{1}^{1}f\right)(x_0)}. \]

Rank 2 (D2) Log–log model (structural)

\[ \boxed{ D_{2}^{1}f=\frac{d\ln(\ln f)}{d\ln(\ln x)} } \]

\[ \Delta_2=\ln(\ln x)-\ln(\ln x_0)=\ln\!\left(\frac{\ln x}{\ln x_0}\right), \qquad a=\left(D_{2}^{1}f\right)(x_0). \]

\[ \ln(\ln f(x))\approx \ln(\ln f(x_0)) + a\,\Delta_2 \quad\Rightarrow\quad f(x)\approx \exp\!\Big(\exp\big(\ln(\ln f(x_0)) + a\,\Delta_2\big)\Big). \]
Domain note: Rank 2 requires \(x>1\) (so \(\ln x>0\)) and typically \(f(x)>1\) (so \(\ln f>0\)).
Error metric: We report relative error: \(\;\varepsilon = \left|\dfrac{\widehat{f}(x)-f(x)}{f(x)}\right|\).
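The three local models and the error metric above can be sketched in code. This is a minimal sketch: the function names (`rank0`, `rank1`, `rank2`, `rel_err`) and the convention of passing the derivative `df` explicitly are illustrative choices, not from the text.

```python
import math

def rank0(f, df, x0, x):
    """Rank-0 Taylor model: f(x0) + (D_0^1 f)(x0) * (x - x0)."""
    return f(x0) + df(x0) * (x - x0)

def rank1(f, df, x0, x):
    """Rank-1 relative model; D_1^1 f = d ln f / d ln x = x f'(x) / f(x)."""
    a = x0 * df(x0) / f(x0)
    return f(x0) * (x / x0) ** a

def rank2(f, df, x0, x):
    """Rank-2 log-log model; requires x, x0 > 1 and f > 1."""
    # D_2^1 f = d ln(ln f) / d ln(ln x) = (x ln x) f' / (f ln f), by the chain rule
    a = (x0 * df(x0) / f(x0)) * math.log(x0) / math.log(f(x0))
    delta2 = math.log(math.log(x) / math.log(x0))
    return math.exp(math.exp(math.log(math.log(f(x0))) + a * delta2))

def rel_err(approx, true):
    """Relative error |(f_hat - f) / f|."""
    return abs((approx - true) / true)
```

For \(f(x)=\exp((\ln x)^2)\) one passes `df = lambda x: f(x) * 2 * math.log(x) / x`; `rank2` then reproduces \(f\) up to floating-point rounding, as Experiment 1 illustrates.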

2) Experiment 1: multi-scale model where rank 2 becomes exact

\[ f(x)=\exp\big((\ln x)^2\big),\quad x>1,\qquad x_0=10. \]

Compare at \(x\in\{8,9,10,11,12,15\}\).

Why this example?
Here \(\ln(\ln f)=\ln((\ln x)^2)=2\ln(\ln x)\), so \(D_{2}^{1}f=2\) is constant. Therefore the rank-2 approximation is exact in theory; any displayed \(\approx 0\) error is only rounding/formatting.
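The constancy of \(D_{2}^{1}f\) can be checked numerically with a central finite difference taken directly in the log–log coordinates. The helper name and the step size below are arbitrary choices, not from the text.

```python
import math

def D2_numeric(f, x, h=1e-6):
    """Central-difference estimate of D_2^1 f = d ln(ln f) / d ln(ln x)."""
    u = math.log(math.log(x))        # log-log coordinate of x
    xp = math.exp(math.exp(u + h))   # shift u by +h, map back to x
    xm = math.exp(math.exp(u - h))
    Fp = math.log(math.log(f(xp)))
    Fm = math.log(math.log(f(xm)))
    return (Fp - Fm) / (2 * h)

f = lambda x: math.exp(math.log(x) ** 2)
for x in (8, 10, 15):
    print(x, D2_numeric(f, x))  # close to 2 at every x
```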
Table — Experiment 1 (multi-scale): D0 vs D1 vs D2
| \(x\) | True value | Rank 0 | Rank 1 | Rank 2 | Rel. err R0 | Rel. err R1 | Rel. err R2 |
|---:|---:|---:|---:|---:|---:|---:|---:|
| 8 | 75.496 | 15.850 | 71.829 | 75.496 | 0.790 | 0.0486 | ≈0 |
| 9 | 124.935 | 108.284 | 123.556 | 124.935 | 0.133 | 0.0110 | ≈0 |
| 10 | 200.717 | 200.717 | 200.717 | 200.717 | 0 | 0 | 0 |
| 11 | 314.160 | 293.151 | 311.319 | 314.160 | 0.0669 | 0.00904 | ≈0 |
| 12 | 480.468 | 385.585 | 464.759 | 480.468 | 0.197 | 0.0327 | ≈0 |
| 15 | 1530.785 | 662.886 | 1298.719 | 1530.785 | 0.567 | 0.152 | ≈0 |
Note: if you recompute the table in code, keep more digits to show that the rank-2 errors are numerically near machine precision.
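A minimal script along these lines regenerates the table at full precision. The model formulas follow Section 1; the variable names and print format are arbitrary choices.

```python
import math

x0 = 10.0
f  = lambda x: math.exp(math.log(x) ** 2)
df = lambda x: f(x) * 2.0 * math.log(x) / x       # f'(x) by the chain rule

# Degree-1 derivatives at x0 for each rank
d0 = df(x0)                                       # D_0^1 f
d1 = x0 * df(x0) / f(x0)                          # D_1^1 f = 2 ln x0
d2 = d1 * math.log(x0) / math.log(f(x0))          # D_2^1 f = 2 (constant here)

for x in (8.0, 9.0, 10.0, 11.0, 12.0, 15.0):
    true = f(x)
    r0 = f(x0) + d0 * (x - x0)
    r1 = f(x0) * (x / x0) ** d1
    r2 = math.exp(math.exp(math.log(math.log(f(x0)))
                           + d2 * math.log(math.log(x) / math.log(x0))))
    errs = [abs((r - true) / true) for r in (r0, r1, r2)]
    print(f"{x:4.0f}  {true:12.3f}  " + "  ".join(f"{e:.3e}" for e in errs))
```

With the extra digits, the rank-2 error column shows values on the order of the floating-point unit roundoff rather than a formatted ≈0.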

3) Experiment 2: power law where rank 1 becomes exact

\[ g(x)=x^{3.5},\quad x>0,\qquad x_0=10. \]

Compare at \(x\in\{8,9,10,11,12,15\}\).

Why this example?
For a power law, \(D_{1}^{1}g=3.5\) is constant, so the rank-1 model matches the true value (up to rounding).
Table — Experiment 2 (power law): D0 vs D1
| \(x\) | True value | Rank 0 | Rank 1 | Rel. err R0 | Rel. err R1 |
|---:|---:|---:|---:|---:|---:|
| 8 | 1448.155 | 948.683 | 1448.155 | 0.345 | ≈0 |
| 9 | 2187.000 | 2055.480 | 2187.000 | 0.0601 | ≈0 |
| 10 | 3162.278 | 3162.278 | 3162.278 | 0 | 0 |
| 11 | 4414.428 | 4269.075 | 4414.428 | 0.0329 | ≈0 |
| 12 | 5985.968 | 5375.872 | 5985.968 | 0.1019 | ≈0 |
| 15 | 13071.319 | 8696.264 | 13071.319 | 0.335 | ≈0 |
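The same pattern regenerates Experiment 2. Since \(D_{1}^{1}g = 3.5\) is constant for a power law, the rank-1 column matches \(g(x)\) to machine precision; this is a sketch with illustrative names.

```python
x0 = 10.0
g  = lambda x: x ** 3.5
dg = lambda x: 3.5 * x ** 2.5                 # g'(x)

d1 = x0 * dg(x0) / g(x0)                      # D_1^1 g = 3.5 (constant)

for x in (8.0, 9.0, 10.0, 11.0, 12.0, 15.0):
    true = g(x)
    r0 = g(x0) + dg(x0) * (x - x0)            # rank-0 Taylor model
    r1 = g(x0) * (x / x0) ** d1               # rank-1 relative model
    print(f"{x:4.0f}  {true:12.3f}  "
          f"{abs((r0 - true) / true):.3e}  {abs((r1 - true) / true):.3e}")
```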

4) Conclusion

Practical rule
Power-law behavior ⟶ rank 1 (relative derivative \(D_{1}^{1}\)) is typically the right coordinate system.
Multi-scale behavior like \(\exp((\ln x)^k)\) ⟶ rank 2 (\(D_{2}^{1}\)) becomes highly stable and can be exact for certain models.