Nicolay Gerold

@nicolaygerold

The hater’s guide to Kubernetes

Paul Venuto - feed updates

Muaath Bin Ali - Microservices Design Principles

Andrew Huberman - How to Increase Motivation & Drive | Huberman Lab Podcast #12

Moyi - 10 Ways To Run LLMs Locally And Which One Works Best For You

microsoft - GitHub - microsoft/LLMLingua: To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.

GitHub - trimstray/the-book-of-secret-knowledge: A collection of inspiring lists, manuals, cheatsheets, blogs, hacks, one-liners, cli/web tools and more.