E.g., how can one sample this vast acquisition space efficiently?
Let me focus my answer on the optimization of the acquisition sampling scheme, which remains an open question (for sequence optimization, please see ).
To give some idea of the complexity of this issue, each analysis method reacts differently to a given optimization strategy. For instance, QTI is a two-term cumulant expansion, whereas DTD does not rely on such cumulant-based assumptions. As a result, QTI shows quite a low sensitivity to the details of the acquisition sampling scheme, while DTD seems more sensitive to it (see ). This sensitivity should not be seen as a drawback but rather as a potential for proper sampling-scheme-driven performance optimization.
Within the past decade, various optimization strategies have been developed. For instance, some were based on theoretical considerations, some optimize the precision of parameter estimation via model-specific Cramér-Rao bounds, some optimize the diversity of the probed diffusion patterns, and others employ autoencoders in the context of machine learning.
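To illustrate the Cramér-Rao-bound strategy mentioned above, here is a minimal sketch in Python. It assumes a simple mono-exponential signal model in place of the full QTI/DTD forward models, Gaussian noise of fixed standard deviation, and a brute-force search over candidate b-shells; all function names and parameter values are illustrative and not taken from any published optimization pipeline.

```python
# Minimal sketch: Cramér-Rao-bound-driven selection of b-shells.
# Assumptions: mono-exponential model S(b) = S0 * exp(-b * D), i.i.d. Gaussian
# noise, brute-force search over candidate b-values (names are illustrative).
import itertools
import numpy as np

def fisher_information(bvals, S0=1.0, D=1e-3, sigma=0.02):
    """Fisher information matrix for theta = (S0, D) under Gaussian noise."""
    bvals = np.asarray(bvals, dtype=float)
    # Partial derivatives of S(b) = S0 * exp(-b * D) with respect to S0 and D.
    dS_dS0 = np.exp(-bvals * D)
    dS_dD = -S0 * bvals * np.exp(-bvals * D)
    J = np.stack([dS_dS0, dS_dD], axis=1)   # (n_measurements, n_parameters)
    return J.T @ J / sigma**2               # Fisher information matrix

def crb_on_D(bvals):
    """Cramér-Rao lower bound on the variance of the estimated D."""
    F = fisher_information(bvals)
    return np.linalg.inv(F)[1, 1]

# Brute-force search: pick the 4-shell combination (candidate b-values in
# s/mm^2) that minimizes the CRB on D, i.e. maximizes estimation precision.
candidates = [0, 100, 300, 700, 1000, 1400, 2000]
best = min(itertools.combinations(candidates, 4), key=crb_on_D)
print("Most precise 4-shell combination:", best, "CRB:", crb_on_D(best))
```

The same logic carries over to richer forward models: one replaces the Jacobian of the mono-exponential signal by that of the chosen model and searches over b-shapes and directions as well as b-values.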
While acquiring three b-shapes (linear, planar, spherical) can provide additional specificity, two b-shapes are usually sufficient to achieve enhanced specificity within clinically feasible times (see and ). From experience, QTI and DTD provide good results for an 80-point acquisition scheme with two b-shapes (linear/spherical or linear/planar) and four b-shells: b = 100, 700, 1400, 2000 s/mm2 (~5 minutes of acquisition time). Keeping the number of points constant and adding a b-shell at b = 300 s/mm2 seems to improve the performance of DTD, as it enables better capture of the low-b signal decay.
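As a concrete illustration, a minimal sketch that builds the measurement table for such an 80-point scheme follows. The even split of points across b-shapes and b-shells, and the use of random directions, are assumptions for illustration only; in practice, the directions and the per-shell allocation would themselves be optimized.

```python
# Minimal sketch of the 80-point scheme described above (assumptions: an even
# split of measurements across the two b-shapes and the four b-shells, and
# random unit vectors standing in for properly optimized direction sets).
import numpy as np

def example_scheme(n_points=80, b_shapes=("linear", "spherical"),
                   b_shells=(100, 700, 1400, 2000), seed=0):
    """Return a list of (b-value [s/mm^2], b-shape, direction) measurements."""
    rng = np.random.default_rng(seed)
    per_cell = n_points // (len(b_shapes) * len(b_shells))
    scheme = []
    for shape in b_shapes:
        for b in b_shells:
            for _ in range(per_cell):
                # Random direction on the unit sphere (placeholder for an
                # electrostatic-repulsion or otherwise optimized direction set).
                v = rng.normal(size=3)
                scheme.append((b, shape, v / np.linalg.norm(v)))
    return scheme

scheme = example_scheme()
print(len(scheme), "measurements, e.g.:", scheme[0][:2])
```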