What kind of problems is it mostly used for? Please describe.
Underdetermined nonlinear least-squares problems $\min_x \Vert f(x) \Vert_2$ for $f: \mathbb{R}^n \to \mathbb{R}^m$ with $n > m$ (more parameters than equations), where the goal is to evolve an initial guess to a "nearby" solution, possibly with constraints (most simply, bound constraints on $x$).
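For a concrete toy instance (my own illustration, not from the paper): find $x \in \mathbb{R}^3$ with $f(x) = x_1^2 + x_2^2 + x_3^2 - 1 = 0$ starting from $x_0 = (1,1,1)$; here $n = 3 > m = 1$, so there is a whole manifold of solutions and we want one close to $x_0$.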
Describe the algorithm you’d like
As described in this Discourse post, the algorithm is a straightforward modification of Levenberg–Marquardt (L–M) for underdetermined problems.
Instead of solving a regularized/damped least-squares problem $(J^T J + \lambda I) \delta = -J^T f(x)$ for each step $x \to x + \delta$ as in ordinary L–M, we solve a regularized minimum-norm problem $(JJ^T + \lambda I) z = -f(x)$ and set $\delta = J^T z$ at each step (this finds the smallest step that solves the linearized equations as $\lambda \to 0^+$). This keeps the matrix $JJ^T + \lambda I$ small ($m \times m$), and it is well-conditioned if you choose e.g. $\lambda = \alpha \Vert J \Vert_F^2$ for a suitable dimensionless damping parameter $\alpha > 0$, as recommended in the paper (we chose $\alpha = 0.1/n$ IIRC, but the basic idea here is similar to L–M, so you can do something analogous to whatever damping scheme you have now). Since the matrix is symmetric positive-definite, you can solve it directly with Cholesky; alternatively, you could use an LQ factorization of $J$ (= QR of $J^T$) or other methods.
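Here is a minimal Julia sketch of one such step, just to make the linear algebra concrete. It's my own illustration under the assumptions above: the name `minnorm_lm_step`, the call signature, and the user-supplied residual `f` and Jacobian `Jac` are hypothetical, not the LeastSquaresOptim.jl API.

```julia
using LinearAlgebra

# Sketch of one damped minimum-norm L–M step (illustrative, not package API):
# solve (J Jᵀ + λI) z = -f(x), then take δ = Jᵀ z.
# `f` returns the residual vector (length m); `Jac` returns the m × n Jacobian.
function minnorm_lm_step(f, Jac, x; α = 0.1 / length(x))
    fx = f(x)                          # residual, length m
    J  = Jac(x)                        # Jacobian, m × n with m < n
    λ  = α * norm(J)^2                 # λ = α‖J‖_F²  (norm of a matrix is Frobenius)
    A  = Symmetric(J * J' + λ * I)     # small m × m symmetric positive-definite matrix
    z  = cholesky(A) \ (-fx)           # direct Cholesky solve of the m × m system
    return J' * z                      # δ = Jᵀ z: minimum-norm step as λ → 0⁺
end

# Toy usage: one step toward the unit sphere from x₀ = (1,1,1)  (n = 3, m = 1)
f(x)   = [x[1]^2 + x[2]^2 + x[3]^2 - 1]
Jac(x) = [2x[1] 2x[2] 2x[3]]
δ = minnorm_lm_step(f, Jac, [1.0, 1.0, 1.0])
```

In a real implementation this step would sit inside the usual L–M outer loop that accepts/rejects steps and adapts the damping; the sketch only shows how the $m \times m$ minimum-norm solve replaces the $n \times n$ normal-equations solve.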
Other implementations to know about
Right now our code for the paper is in a LeastSquaresOptim.jl fork: levenberg_marquardt.jl in the aristeidis-karalis/Inverse-Design-of-Resonances repository on GitHub.
References
We described this in Chen et al. (2026), appendix B, but I wouldn't be surprised if other authors have stumbled across the same trick. (At the time of publication, however, we couldn't find any prior examples.)