Models
AI That Quacks: Introducing DuckDB-NSQL-7B, A LLM for DuckDB (2024/01/25, by Till Döhmen and Jordan Tigani). What does a database have to do with AI, anyway? After a truly new technology arrives, it makes the future a lot harder to predict. The one thing you can be sure of is…
Till Döhmen • AI That Quacks: Introducing DuckDB-NSQL-7B, A LLM for DuckDB
ScaleCrafter is capable of generating images at a resolution of 4096 x 4096, and results at a resolution of 2048 x 1152, from diffusion models pre-trained at a lower resolution. Notably, our approach requires no extra training or optimization.
YingqingHe • GitHub - YingqingHe/ScaleCrafter: Official implementation of ScaleCrafter for higher-resolution visual generation at inference time.
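Since the method works purely at inference time, the core re-dilation idea can be sketched in a few lines of PyTorch. This is an illustrative toy, not the official implementation (the helper name redilate_conv and the 2x factor are assumptions): it widens a pretrained convolution's receptive field to match a larger latent without touching the weights.

```python
# Toy sketch of re-dilation: scale a pretrained Conv2d's dilation
# (and padding) so its receptive field matches a larger inference
# resolution, reusing the original weights with no retraining.
import torch
import torch.nn as nn

def redilate_conv(conv: nn.Conv2d, factor: int) -> nn.Conv2d:
    """Return a copy of `conv` with dilation and padding scaled by `factor`."""
    new_conv = nn.Conv2d(
        conv.in_channels,
        conv.out_channels,
        kernel_size=conv.kernel_size,
        stride=conv.stride,
        padding=tuple(p * factor for p in conv.padding),
        dilation=tuple(d * factor for d in conv.dilation),
        bias=conv.bias is not None,
    )
    new_conv.load_state_dict(conv.state_dict())  # weights unchanged
    return new_conv

# Example: a 3x3 conv from a pretrained UNet, re-dilated 2x for a
# latent at twice the training resolution. Spatial size is preserved.
conv = nn.Conv2d(4, 4, kernel_size=3, padding=1)
x = torch.randn(1, 4, 128, 128)
y = redilate_conv(conv, factor=2)(x)
print(y.shape)  # torch.Size([1, 4, 128, 128])
```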
Supported Models
Where possible, we try to match the Hugging Face implementation. We are open to adjusting the API, so please reach out with feedback regarding these details.

Model                    Context Length   Model Type
codellama-34b-instruct   16384            Chat Completion
llama-2-70b-chat         4096             Chat Completion
mistral-7b-instruct      4096 [1]         Chat Completion
pplx-7b-c…
Supported Models
Models offered by Perplexity, including, among others, their online models with access to the internet.
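A minimal sketch of querying one of the listed models through Perplexity's OpenAI-compatible chat completions endpoint (the API key placeholder is hypothetical; base URL and model name are assumptions taken from the docs of the time):

```python
# Sketch: calling mistral-7b-instruct via Perplexity's
# OpenAI-compatible chat completions endpoint.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PPLX_API_KEY",            # placeholder, not a real key
    base_url="https://api.perplexity.ai",   # Perplexity API base URL
)

response = client.chat.completions.create(
    model="mistral-7b-instruct",  # one of the chat-completion models above
    messages=[{"role": "user", "content": "Explain context length in one sentence."}],
)
print(response.choices[0].message.content)
```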
- Cohere introduced Embed v3, an advanced model for generating document embeddings, boasting top performance on a few benchmarks. It excels in matching document topics to queries and content quality, improving search applications and retrieval-augmented generation (RAG) systems. The new version offers models with 1024 or 384 dimensions, supports o…
FOD#27: "Now And Then"
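As a hedged sketch of how these embedding models are typically called from Cohere's Python client (model name "embed-english-v3.0" is the documented 1024-dimensional variant, and v3 requires an input_type; the key is a placeholder):

```python
# Sketch: embedding documents and queries with Cohere Embed v3.
# v3 models require an input_type so the embedding can be specialized
# for the text's role in retrieval.
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

docs = co.embed(
    texts=["Cohere Embed v3 produces 1024- or 384-dimensional vectors."],
    model="embed-english-v3.0",     # 1024-dimensional variant
    input_type="search_document",   # embedding a corpus document
)
query = co.embed(
    texts=["How many dimensions does Embed v3 have?"],
    model="embed-english-v3.0",
    input_type="search_query",      # embedding a search query
)
print(len(docs.embeddings[0]))  # -> 1024
```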
Qwen-14B is the 14B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-14B is a Transformer-based large language model, which is pretrained on a large volume of data, including web texts, books, codes, etc. Additionally, based on the pretrained Qwen-14B, we release Qwen-14B-Chat, a lar…
Qwen/Qwen-14B-Chat · Hugging Face
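Loading the chat variant follows the standard transformers pattern plus Qwen's custom chat helper, as in the model card's quickstart; a minimal sketch (note that trust_remote_code executes code shipped in the model repo):

```python
# Sketch: loading Qwen-14B-Chat with transformers, following the
# model card. trust_remote_code runs the repo's custom modeling code,
# which provides the .chat() helper used below.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "Qwen/Qwen-14B-Chat", trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-14B-Chat", device_map="auto", trust_remote_code=True
).eval()

# Multi-turn chat: pass the running history back in on each turn.
response, history = model.chat(tokenizer, "What is Tongyi Qianwen?", history=None)
print(response)
```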
We are excited to release the first version of our multimodal assistant Yasa-1, a language assistant with visual and auditory sensors that can take actions via code execution.
We trained Yasa-1 from scratch, including pretraining base models from ground zero, aligning them, as well as heavily optimizing both our training and serving infrastructure. …
Announcing our Multimodal AI Assistant - Reka AI
multimodal-maestro
👋 hello
Multimodal-Maestro gives you more control over large multimodal models to get the outputs you want. With more effective prompting tactics, you can get multimodal models to do tasks you didn't know (or think!) were possible. Curious how it works? Try our HF space!
roboflow • GitHub - roboflow/multimodal-maestro: Effective prompting for Large Multimodal Models like GPT-4 Vision, LLaVA or CogVLM. 🔥
Cross-Encoder for Hallucination Detection
This model was trained using the SentenceTransformers Cross-Encoder class.
The model outputs a probability from 0 to 1, with 0 being a hallucination and 1 being factually consistent.
The predictions can be thresholded at 0.5 to predict whether a document is consistent with its source.
Training Data
This model is base…
vectara/hallucination_evaluation_model · Hugging Face
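A minimal sketch of scoring a (source, hypothesis) pair with this model through the SentenceTransformers CrossEncoder class it was trained with (the example sentence pair is illustrative):

```python
# Sketch: scoring factual consistency with the Vectara cross-encoder.
from sentence_transformers import CrossEncoder

model = CrossEncoder("vectara/hallucination_evaluation_model")

# Each input is a [source, hypothesis] pair; the output score is a
# probability: ~0 = hallucination, ~1 = factually consistent.
scores = model.predict([
    ["A man walks into a bar and buys a drink.",
     "A bloke swigs alcohol at a pub."],
])
consistent = scores[0] >= 0.5  # threshold suggested by the model card
print(scores[0], consistent)
```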
Replit AI is now free for all users. Over the past year, we’ve witnessed the transformative power of building software collaboratively with the power of AI. We believe AI will be part of every software developer’s toolkit and we’re excited to provide Replit AI for free to our 25+ million developer community.
To accompany AI for all, we’re releasin…