Anything that comes out of an LLM is a proposition, not a solution. A proposition needs to be checked.
Thus, LLMs are mostly interesting for shallow problems: ones that are repeatable and that a human can evaluate quickly.
Using them to generate solutions to complicated problems, where checking an answer is nearly as hard as producing one, is misusing them.
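The propose-then-check stance above can be sketched as a simple pattern: treat every model output as a candidate and return it only if it passes an independent check. This is a minimal sketch, not a real integration; `propose` is a hypothetical deterministic stub standing in for an LLM call.

```python
from typing import Callable, Optional

def propose(prompt: str, attempt: int) -> str:
    # Hypothetical stub standing in for an LLM call.
    # Real outputs are nondeterministic propositions, not trusted answers.
    samples = ["five", "22", "4"]
    return samples[attempt % len(samples)]

def accept_if_verified(prompt: str,
                       verify: Callable[[str], bool],
                       attempts: int = 5) -> Optional[str]:
    # Keep only candidates that pass an independent check;
    # an unchecked output is never returned.
    for i in range(attempts):
        candidate = propose(prompt, i)
        if verify(candidate):
            return candidate
    return None  # no proposition survived checking

result = accept_if_verified("What is 2 + 2?", lambda s: s == "4")
print(result)  # -> 4
```

The pattern only works when `verify` is cheap relative to the problem, which is exactly the "shallow problem" condition: if no such check exists, the loop degenerates into trusting unchecked propositions.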