LLMs
Overview
MaxText is a high-performance, highly scalable, open-source LLM written in pure Python/Jax and targeting Google Cloud TPUs and GPUs for training and inference. MaxText achieves high MFUs and scales from single host to very large clusters while staying simple and "optimization-free" thanks to the power of Jax and the XLA compiler.
google • GitHub - google/maxtext: A simple, performant and scalable Jax LLM!
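The "optimization-free" claim comes down to writing the training step in plain JAX and letting jax.jit, XLA, and sharding annotations handle fusion and parallelism. A minimal sketch of that pattern; the toy model and names here are illustrative, not MaxText's actual code:

```python
# Minimal sketch of the JAX pattern MaxText builds on: a jit-compiled,
# sharded training step where XLA handles fusion and device parallelism.
# The toy "model" and mesh setup are illustrative, not MaxText's real code.
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

def loss_fn(params, batch):
    # Toy linear "model"; a real LLM would be a transformer parameter pytree.
    preds = batch["x"] @ params["w"]
    return jnp.mean((preds - batch["y"]) ** 2)

@jax.jit  # XLA compiles and fuses the whole step; no hand-written kernels
def train_step(params, batch, lr=1e-2):
    loss, grads = jax.value_and_grad(loss_fn)(params, batch)
    new_params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return new_params, loss

# Data-parallel sharding across however many devices are available
# (assumes the batch size is divisible by the device count).
mesh = Mesh(mesh_utils.create_device_mesh((jax.device_count(),)), axis_names=("data",))
data_sharding = NamedSharding(mesh, P("data"))

params = {"w": jnp.zeros((8, 1))}
batch = {
    "x": jax.device_put(jnp.ones((32, 8)), data_sharding),
    "y": jax.device_put(jnp.ones((32, 1)), data_sharding),
}
params, loss = train_step(params, batch)
print(float(loss))
```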
OpenGPTs
This is an open source effort to create a similar experience to OpenAI's GPTs. It builds upon LangChain, LangServe and LangSmith. OpenGPTs gives you more control, allowing you to configure:
- The LLM you use (choose between the 60+ that LangChain offers)
- The prompts you use (use LangSmith to debug those)
- The tools you give it (choose from
github.com • Langchain-Ai/Opengpts
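Roughly what those three configuration points look like when wired together with LangChain primitives. A sketch, not OpenGPTs' actual code; the model name and the example tool are placeholders:

```python
# Rough sketch of the three knobs OpenGPTs exposes, wired up with LangChain
# primitives: swap the LLM, edit the prompt, attach tools. A real assistant
# (like OpenGPTs) also loops to execute tool calls; this only shows the wiring.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI  # any LangChain chat model drops in here

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

# 1) The LLM you use: any of the chat models LangChain supports.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# 2) The prompts you use: editable templates (LangSmith traces show how they render).
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise writing assistant."),
    ("human", "{input}"),
])

# 3) The tools you give it: bound to the model so it can request calls to them.
assistant = prompt | llm.bind_tools([word_count])

print(assistant.invoke({"input": "How many words are in 'to be or not to be'?"}))
```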
How do models represent style, and how can we more precisely extract and steer it?
A commonly requested feature in almost any LLM-based writing application is “I want the AI to respond in my style of writing,” or “I want the AI to adhere to this style guide.” Aside from costly and complicated multi-stage finetuning processes like Anthropic’s RL with...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
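Short of finetuning, the usual lightweight approach is to condition the model on a style guide plus a few writing samples directly in the prompt. A sketch of that, assuming the standard OpenAI chat completions client; the guide and samples are placeholders:

```python
# Lightweight style steering without finetuning: put a style guide and a few
# target-style samples in the system prompt. Sketch only; the guide, samples,
# and model name are placeholders.
from openai import OpenAI

client = OpenAI()

STYLE_GUIDE = "Short declarative sentences. No jargon. Dry humor allowed."
SAMPLES = [
    "The deploy broke. Again. We rolled back and went to lunch.",
    "Benchmarks are rumors until you run them yourself.",
]

def rewrite_in_style(text: str) -> str:
    system = (
        f"Rewrite the user's text in this style.\nStyle guide: {STYLE_GUIDE}\n"
        "Examples of the target style:\n- " + "\n- ".join(SAMPLES)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": text}],
    )
    return resp.choices[0].message.content

print(rewrite_in_style("Our quarterly synergies were suboptimal."))
```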
Study finds RLHF reduces LLM creativity and output variety: A new research paper posted in /r/LocalLLaMA shows that while alignment techniques like RLHF reduce toxic and biased content, they also limit the creativity of large language models, even in contexts unrelated to safety.
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
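One simple way to put a number on "output variety" is distinct-n: the fraction of unique n-grams across repeated samples of the same prompt. This is a generic diversity metric, not necessarily the paper's methodology; the sample outputs below are placeholders:

```python
# Illustrative only: distinct-n measures lexical variety across repeated
# generations for the same prompt. Lower values = more repetitive output.
from collections import Counter

def distinct_n(samples: list[str], n: int = 2) -> float:
    ngrams = Counter()
    for s in samples:
        toks = s.lower().split()
        ngrams.update(zip(*(toks[i:] for i in range(n))))
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

# Placeholder generations standing in for base-model vs. RLHF-model samples.
base_outputs = ["The fox vaulted the lazy hound.", "A crimson fox leapt over a dozing dog."]
rlhf_outputs = ["The quick brown fox jumps over the lazy dog."] * 2

print("base distinct-2:", distinct_n(base_outputs))
print("rlhf distinct-2:", distinct_n(rlhf_outputs))
```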
Two core components of Deep RL enabled successes like AlphaGo: self-play and look-ahead planning.
Self-play is the idea that an agent can improve its gameplay by playing against slightly different versions of itself because it’ll progressively encounter more challenging situations. In the space of LLMs, it is almost certain that the largest portion...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
These two components might be some of the most important ideas to improve all of AI.
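To make "slightly different versions of itself" concrete, a toy self-play loop where the current agent repeatedly plays a frozen snapshot of its past self; the game and update rule are deliberately trivial placeholders:

```python
# Toy illustration of self-play: the agent plays a frozen, slightly older copy
# of itself each generation, so the opposition gets progressively stronger.
import copy
import math
import random

class Agent:
    def __init__(self, skill: float = 0.0):
        self.skill = skill

    def play(self, opponent: "Agent") -> bool:
        # Win probability grows with the skill gap (stand-in for a real game).
        gap = self.skill - opponent.skill
        return random.random() < 1 / (1 + math.exp(-gap))

agent = Agent()
for generation in range(5):
    frozen_opponent = copy.deepcopy(agent)      # slightly older version of itself
    wins = sum(agent.play(frozen_opponent) for _ in range(100))
    agent.skill += 0.1 * (wins / 100)           # crude "learning" from the matches
    print(f"gen {generation}: win rate vs. past self = {wins}%")
```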
The AI engineering framework
Marvin is a lightweight AI engineering framework for building natural language interfaces that are reliable, scalable, and easy to trust.
Sometimes the most challenging part of working with generative AI is remembering that it's not magic; it's software. It's new, it's nondeterministic, and it's incredibly powerful - but...
PrefectHQ • GitHub - PrefectHQ/marvin: ✨ Build AI interfaces that spark joy
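The pattern Marvin encourages is ordinary Python signatures and Pydantic types as the interface, with the LLM filling in the behavior. A rough sketch assuming Marvin 2.x's @marvin.fn decorator and marvin.extract helper; exact names and signatures may differ between versions, so treat this as an approximation rather than canonical usage:

```python
# Rough sketch of the Marvin pattern: typed Python as the interface, LLM as the
# implementation. Assumes Marvin 2.x's @marvin.fn and marvin.extract; exact
# APIs may differ by version.
import marvin
from pydantic import BaseModel

class Ticket(BaseModel):
    product: str
    severity: str
    summary: str

@marvin.fn
def subject_line(ticket: Ticket) -> str:
    """Write a short, neutral email subject line for this support ticket."""
    # No body: the signature, docstring, and return type define the behavior.

report = "The export button in Dashboards crashes the app every time on mobile."
tickets = marvin.extract(report, target=Ticket)  # structured data out of free text
print(subject_line(tickets[0]))
```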
𝘱𝘦𝘳𝘧𝘰𝘳𝘮𝘢𝘯𝘤𝘦: it will improve your LLM performance on given use cases (e.g., coding, extracting text, etc.). Mainly, the LLM will specialize in a given task (a specialist will always beat a generalist in its domain)
𝘤𝘰𝘯𝘵𝘳𝘰𝘭: you can refine how your model should behave on specific inputs and outputs, resulting in a more robust product
𝘮𝘰𝘥𝘶𝘭𝘢𝘳𝘪𝘻𝘢𝘵𝘪𝘰𝘯:...
Shortwave — rajhesh.panchanadhan@gmail.com [Gmail alternative]
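In practice the "specialist beats a generalist" point usually cashes out as parameter-efficient finetuning on task data. A minimal sketch with Hugging Face transformers + peft (LoRA); the base model, data, and hyperparameters are placeholders, not a recommended recipe:

```python
# Minimal sketch of task-specific finetuning with LoRA adapters
# (Hugging Face transformers + peft + datasets). Model, data, and
# hyperparameters are placeholders.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

base = "gpt2"  # stand-in for whichever base model you are specializing
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Train small adapter matrices instead of all weights: cheaper, and the task
# behaviour stays modular (swap adapters in and out per use case).
model = get_peft_model(
    model, LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, target_modules=["c_attn"])
)

# Two toy task examples; a real run needs a proper dataset and data collator.
texts = [
    "Instruction: extract the date.\nInput: meet on 3 May 2024\nOutput: 2024-05-03",
    "Instruction: extract the date.\nInput: due Friday, Jan 12 2024\nOutput: 2024-01-12",
]

def tokenize(example):
    out = tokenizer(example["text"], truncation=True, max_length=64)
    out["labels"] = out["input_ids"].copy()  # causal LM: predict the same tokens
    return out

train_ds = Dataset.from_dict({"text": texts}).map(tokenize, remove_columns=["text"])

args = TrainingArguments(output_dir="lora-out", num_train_epochs=1, per_device_train_batch_size=1)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```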
Motivation for finetuning