HeimdaLLM
Pronounced [ˈhaɪm.dɔl.əm] or HEIM-dall-EM
HeimdaLLM is a robust static analysis framework for validating that LLM-generated structured output is safe. It currently supports SQL.
In simple terms, it helps make sure that AI won't wreck your systems.
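To illustrate the idea, here is a minimal conceptual sketch of the kind of static check such a framework performs. This is not HeimdaLLM's actual API; it uses the third-party sqlglot parser, and the `ALLOWED_TABLES` allowlist and `validate_sql` helper are hypothetical names for this example only.

```python
# Conceptual sketch: parse an LLM-generated query and reject it if it is not
# a SELECT, or if it touches a table outside an allowlist.
import sqlglot
from sqlglot import exp

ALLOWED_TABLES = {"orders", "customers"}  # hypothetical allowlist


def validate_sql(query: str) -> None:
    """Raise ValueError if the query is unsafe under this sketch's rules."""
    parsed = sqlglot.parse_one(query, read="sqlite")

    # Only SELECT statements are permitted in this sketch.
    if not isinstance(parsed, exp.Select):
        raise ValueError("only SELECT statements are allowed")

    # Every referenced table must be on the allowlist.
    for table in parsed.find_all(exp.Table):
        if table.name not in ALLOWED_TABLES:
            raise ValueError(f"table {table.name!r} is not allowed")


validate_sql("SELECT id FROM orders WHERE customer_id = 42")  # passes
# validate_sql("DELETE FROM customers")                        # raises ValueError
```

HeimdaLLM itself goes further than this sketch, but the principle is the same: the LLM's output is parsed and checked against explicit constraints before it ever reaches your database.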