Prompting
SudoLang v1.0.9
Introduction
SudoLang is a pseudolanguage designed for interacting with LLMs. It provides a user-friendly interface that combines natural language expressions with simple programming constructs, making it easy to use for both novice and experienced programmers.
SudoLang can be used to produce AI-first programs such as chatbots and te…
sudolang-llm-support/sudolang.sudo.md at main · paralleldrive/sudolang-llm-support
Overview
GPTScript is a new scripting language to automate your interaction with a Large Language Model (LLM), namely OpenAI. The ultimate goal is to create a fully natural language based programming experience. The syntax of GPTScript is largely natural language, making it very easy to learn and use. Natural language prompts can be mixed with trad…
gptscript-ai • GitHub - gptscript-ai/gptscript: Natural Language Programming
AI Template & SPR Library
Featuring advanced prompts and SPRs
🟢 Website
🔵 LinkedIn
🔴 Patreon
⚪ Discord
Prompt Engineering
Advanced GPTs
Template Library
nerority • GitHub - nerority/AI-Library: Template Library
Dify is an LLM application development platform that has helped build over 100,000 applications. It integrates BaaS and LLMOps, covering the essential tech stack for building generative AI-native applications, including a built-in RAG engine. Dify allows you to deploy your own version of the Assistants API and GPTs, based on any LLMs.
Using our Cloud S…
langgenius • GitHub - langgenius/dify: An Open-Source Assistants API and GPTs alternative. Dify.AI is an LLM application development platform. It integrates the concepts of Backend as a Service and LLMOps, covering the...
SELF-DISCOVER
This implements Google's algorithm from https://arxiv.org/pdf/2402.03620.pdf
Setup
Install Conda: https://docs.conda.io/projects/miniconda/en/latest/
catid • GitHub - catid/self-discover: Implementation of Google's SELF-DISCOVER
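The paper's pipeline has three prompting stages (SELECT, ADAPT, IMPLEMENT) followed by a final solve step. A minimal sketch of that flow, where `llm` is a hypothetical stand-in for whatever model client the repo actually wires up and the seed modules are abbreviated examples:

```python
# Sketch of the SELF-DISCOVER prompting pipeline (SELECT -> ADAPT -> IMPLEMENT).
# Only the three-stage structure comes from the paper; the `llm` callable and
# the exact prompt wording here are illustrative assumptions.

REASONING_MODULES = [
    "How could I break down this problem into smaller parts?",
    "What are the core assumptions underlying this problem?",
    "How can I simplify the problem to make it easier to solve?",
]

def self_discover(task: str, llm) -> str:
    # Stage 1: SELECT reasoning modules relevant to the task.
    selected = llm(
        "Select the reasoning modules most useful for this task.\n"
        f"Modules: {REASONING_MODULES}\nTask: {task}"
    )
    # Stage 2: ADAPT the selected modules to be task-specific.
    adapted = llm(
        "Adapt these reasoning modules to the task.\n"
        f"Modules: {selected}\nTask: {task}"
    )
    # Stage 3: IMPLEMENT a step-by-step reasoning structure.
    structure = llm(
        "Turn the adapted modules into a step-by-step reasoning structure.\n"
        f"Adapted: {adapted}\nTask: {task}"
    )
    # Finally, solve the task by following the discovered structure.
    return llm(
        "Follow this reasoning structure to solve the task.\n"
        f"Structure: {structure}\nTask: {task}"
    )
```

The point of the structure is that the reasoning plan is discovered once per task type, then reused, rather than re-derived on every call.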
SGLang is a structured generation language designed for large language models (LLMs). It makes your interaction with LLMs faster and more controllable by co-designing the frontend language and the runtime system.
The core features of SGLang include:
- A Flexible Front-End Language: This allows for easy programming of LLM applications with multiple ch…
sgl-project • GitHub - sgl-project/sglang: SGLang is a structured generation language designed for large language models (LLMs). It makes your interaction with models faster and more controllable.
They have a fast JSON decoding feature driven by a finite state machine.
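The idea behind FSM-constrained decoding can be shown with a toy character-level sketch: a state machine restricts which characters may come next, so output is guaranteed to match a schema. This is an illustration of the concept only, not SGLang's actual (token-level, compiled) implementation, and `model_scores` is a hypothetical stand-in for model logits:

```python
# Toy FSM for the tiny pattern {"key": <digits>}: at each step it returns
# the set of characters the decoder is allowed to emit next.
def allowed_chars(generated: str) -> set:
    template = '{"key": '
    if len(generated) < len(template):
        return {template[len(generated)]}   # fixed structural characters
    if not generated.endswith("}"):
        digits = set("0123456789")
        if generated and generated[-1] in digits:
            return digits | {"}"}           # after a digit, closing is allowed
        return digits                       # at least one digit required
    return set()                            # accepting state: done

def constrained_decode(model_scores) -> str:
    """Greedy decode, masking characters the FSM forbids.
    `model_scores(prefix)` is an assumed interface returning a
    dict mapping candidate characters to scores."""
    out = ""
    while True:
        mask = allowed_chars(out)
        if not mask:
            return out
        scores = model_scores(out)
        # pick the highest-scoring character the FSM allows
        out += max(mask, key=lambda c: scores.get(c, 0.0))
```

Because forbidden characters are masked out entirely, the output always parses, no matter what the model would otherwise prefer to emit.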
Depending on your use case, batching may help. If you are sending multiple requests to the same endpoint, you can batch the prompts to be sent in the same request. This will reduce the number of requests you need to make. The prompt parameter can hold up to 20 unique prompts. We advise you to test out this method and see if it helps. In some cases…
OpenAI Platform
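The chunking itself is plain client-side code. A small sketch, where the 20-prompt limit comes from the guidance above and the commented-out API call is an assumption about how you would wire it up with the `openai` client (the list-valued `prompt` parameter belongs to the legacy Completions endpoint, not Chat Completions):

```python
# Split prompts into batches no larger than the endpoint's per-request limit.
def batch_prompts(prompts, batch_size=20):
    return [prompts[i:i + batch_size] for i in range(0, len(prompts), batch_size)]

prompts = [f"Write a haiku about item {i}" for i in range(45)]
batches = batch_prompts(prompts)   # three batches: 20, 20, 5

# for batch in batches:
#     response = client.completions.create(
#         model="gpt-3.5-turbo-instruct",  # illustrative model name
#         prompt=batch,                    # one request carries many prompts
#     )
```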
Requesting a large number of generated tokens can lead to increased latencies:
- Lower max tokens: for requests with a similar token generation count, those that have a lower max_tokens parameter incur less latency.
- Include stop sequences: to prevent generating unneeded tokens, add a stop sequence. For example, you can use stop sequence
OpenAI Platform
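Both levers are just request parameters. A hedged sketch of what that looks like, where `max_tokens` and `stop` match the OpenAI chat completions API but the model name and stop string are illustrative choices:

```python
# Build chat completion parameters that cap generation length and stop early.
def build_request(prompt):
    return {
        "model": "gpt-4o-mini",                      # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,    # hard cap on completion tokens -> bounded latency
        "stop": ["\n\n"],    # stop at a blank line instead of generating filler
    }

params = build_request("List three prompt-engineering tips, briefly.")
# client.chat.completions.create(**params)  # assumed openai-python v1 client
```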
Intuition : Prompt tokens add very little latency to completion calls. Time to generate completion tokens is much longer, as tokens are generated one at a time. Longer generation lengths will accumulate latency due to generation required for each token.
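That intuition can be made concrete with a back-of-the-envelope model: total latency is roughly fixed overhead plus prompt processing plus per-token generation. The numbers below are made-up placeholders, not measured figures; the point is only that sequentially generated completion tokens dominate:

```python
# Rough latency model: prompt tokens are processed largely in parallel,
# completion tokens are generated one at a time. All constants are
# illustrative placeholders, not benchmarks.
def estimate_latency_ms(prompt_tokens, completion_tokens,
                        prompt_ms_per_token=0.2,
                        gen_ms_per_token=30.0,
                        overhead_ms=200.0):
    return (overhead_ms
            + prompt_tokens * prompt_ms_per_token      # cheap: parallel prefill
            + completion_tokens * gen_ms_per_token)    # dominant: sequential

short_gen = estimate_latency_ms(prompt_tokens=1000, completion_tokens=50)
long_gen = estimate_latency_ms(prompt_tokens=1000, completion_tokens=500)
# Under these placeholder constants, 10x the completion length costs ~8x the
# latency, while the 1000-token prompt barely registers.
```

Cutting completion length, not prompt length, is what moves the needle.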