Simon Willison highlights Bryan Cantrill's critique of Large Language Models: they lack 'laziness' as a virtue. Cantrill argues that because LLMs are unconstrained by effort, cost, or the need to save themselves future time, they tend to generate excessive and often low-quality output, producing system bloat rather than improvement unless carefully managed.
Source: Simon Willison