HeimdaLLM
Pronounced [ˈhaɪm.dɔl.əm] or HEIM-dall-EM
HeimdaLLM is a robust static analysis framework for validating that LLM-generated structured output is safe. It currently supports SQL.
In simple terms, it helps make sure that AI won't wreck your systems.
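To illustrate the general idea of statically validating LLM-generated SQL before it ever touches a database, here is a deliberately simplified sketch. It is *not* HeimdaLLM's actual API (HeimdaLLM performs full grammar-based analysis); the function name, the allowlist parameter, and the regex-based checks are all hypothetical, shown only to convey the "validate first, execute second" concept:

```python
import re

def is_safe_select(sql: str, allowed_tables: set[str]) -> bool:
    """Crude static check: single read-only SELECT over allowlisted tables.

    A real validator (like HeimdaLLM) parses the full SQL grammar; this toy
    version only demonstrates the shape of the idea.
    """
    stripped = sql.strip().rstrip(";")
    # Reject multiple statements smuggled in via a semicolon.
    if ";" in stripped:
        return False
    # Only plain SELECT statements are allowed.
    if not re.match(r"(?is)^\s*select\b", stripped):
        return False
    # Reject write/DDL keywords anywhere in the statement.
    if re.search(r"(?is)\b(insert|update|delete|drop|alter|attach|pragma|create)\b", stripped):
        return False
    # Every table named after FROM/JOIN must be on the allowlist.
    tables = re.findall(r"(?is)\b(?:from|join)\s+([A-Za-z_][A-Za-z0-9_]*)", stripped)
    return all(t.lower() in allowed_tables for t in tables)
```

With this sketch, a benign query like `SELECT name FROM users` passes against an allowlist of `{"users"}`, while `DROP TABLE users` or a query touching an unlisted table is rejected before execution.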