

NotFirst

Post 01-Jul-2024 02:53

New version: https://aitracker.art/viewtopic.php?t=77

NotFirst

Post 30-Jun-2024 21:22

GGUF: https://huggingface.co/google/gemma-2-27b-it
Model page: https://huggingface.co/google/gemma-2-27b-it
Authors: Google
Model Information
Summary description and brief definition of inputs and outputs.
Description
Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone.
Inputs and outputs
Input: Text string, such as a question, a prompt, or a document to be summarized.
Output: Generated English-language text in response to the input, such as an answer to a question, or a summary of a document.
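When prompting the instruction-tuned variant directly (e.g. via a GGUF loaded in llama.cpp), the input text is expected in Gemma's turn-marker format. A minimal sketch of that formatting, assuming the publicly documented Gemma chat template (verify against the chat template shipped in the model repo; the helper name here is hypothetical):

```python
def format_gemma_prompt(user_message: str) -> str:
    """Wrap a single user turn in Gemma's chat markers (assumed template).

    Gemma's instruction-tuned models use <start_of_turn>/<end_of_turn>
    delimiters; the trailing "model" turn opener cues the model to respond.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(format_gemma_prompt("Summarize this document."))
```

In practice, `tokenizer.apply_chat_template(...)` in transformers builds this string for you from a list of messages; the sketch above only shows the shape of the expected input.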
Citation
@article{gemma_2024,
    title={Gemma},
    url={https://www.kaggle.com/m/3301},
    DOI={10.34740/KAGGLE/M/3301},
    publisher={Kaggle},
    author={Gemma Team},
    year={2024}
}

NotFirst

Post 01-Jul-2024 02:40

Fixed Gemma 2 GGUF - requires llama.cpp b3266 or later
GGUF by legraphista https://huggingface.co/legraphista/gemma-2-27b-it-IMat-GGUF/
Model info: https://aitracker.art/viewtopic.php?t=75

imbev

Post 10-Jun-2024 17:36

GGUF of Tinyllama 1.1B Chat v1.0
https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF
https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0
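TinyLlama 1.1B Chat v1.0 was fine-tuned with the Zephyr-style chat format, so prompts sent to the GGUF should follow that layout. A minimal sketch, assuming the Zephyr template described on the model page (the helper name is hypothetical; the authoritative template is the one in the repo's tokenizer config):

```python
def format_tinyllama_prompt(system: str, user: str) -> str:
    """Build a Zephyr-style prompt (assumed TinyLlama chat format).

    Each role is introduced by a <|role|> tag on its own line and the
    turn is closed with the </s> end-of-sequence token; the final
    <|assistant|> tag cues the model to generate its reply.
    """
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        "<|assistant|>\n"
    )

print(format_tinyllama_prompt("You are a helpful assistant.", "Hello!"))
```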

msgraves

Post 10-Jun-2024 12:49

1.6B-parameter model built by @vikhyatk using SigLIP, Phi-1.5, and the LLaVA training dataset. The model is released for research purposes only; commercial use is not allowed.
Try it out on Huggingface Spaces!
Usage
pip install transformers timm einops

from transformers import AutoModelForCausalLM, CodeGenTokenizerFast as Tokenizer
from PIL import Image

model_id = "vikhyatk/moondream1"
# trust_remote_code is required: moondream1 ships custom modeling code in the repo
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = Tokenizer.from_pretrained(model_id)

# Encode the image once, then ask questions about it
image = Image.open('<IMAGE_PATH>')
enc_image = model.encode_image(image)
print(model.answer_question(enc_image, "<QUESTION>", tokenizer))
reupload #4. I assume this service doesn't like emojis. https://huggingface.co/vikhyatk/moondream1
 
