I really appreciate Taggart-Tech's take on how LLMs can serve our learning, or rob us of it, as we try to understand and expand our individual and collective knowledge of a variety of complex systems.
With absolutely zero domain knowledge, you can ask a generative model for output that may or may not be directly useful to the problem at hand. The result is dangerous both for its uncertain value and for what it takes from the user in an imbalanced exchange.
Generative models do not automate the grunt work. They steal knowledge work and replace it with a variably convincing facsimile. The output may not be accurate, but at least it's fast—a characteristic that deep, serious knowledge work never shares.
...
But here's the thing: in order to disagree meaningfully with model output, including "deep research" output, you must be an expert in the material. I hope it's clear that you cannot become an expert in any field by relying on the output of generative models. In case it isn't, though: imagine thinking you were a professional football player because you watched—not even played—some streamer playing "Madden." Your knowledge of the material is so many layers abstracted from praxis that it can barely be called knowledge at all.
The linked thread from Courtney Milan on bsky about the learning process is spot on.
The answer was not the point. The answer was never the point. The process of searching is the process of learning.
Knowledge is never knowing the answer. It’s knowing the territory.