SingleApi
WIP: Hugging Face cheatsheet part 1

Posted on September 5, 2024

As prerequisites we need Python, pip, and venv installed; for Git we also need Git LFS, because Hugging Face supports files larger than 5 GB in repos. To work with your account you can use Anaconda with Jupyter Notebook, Google Colab (then we don't need to install anything), or tools from JetBrains (I use IntelliJ with plugins; lots of people use PyCharm). The token and downloaded models are kept in the ~/.cache/huggingface/ folder.
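The cache location can also be checked programmatically. A minimal sketch, assuming the default layout (the real HF_HOME environment variable overrides it when set):

```python
import os

# Default Hugging Face cache directory; the HF_HOME environment variable
# overrides it when set (this sketch assumes the standard layout).
hf_home = os.environ.get(
    "HF_HOME",
    os.path.join(os.path.expanduser("~"), ".cache", "huggingface"),
)
print(hf_home)  # e.g. /home/user/.cache/huggingface
```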

pip install huggingface-hub

# Log in using a token from huggingface.co/settings/tokens; needed only once
huggingface-cli login

When working with Hugging Face objects you can use the help() function, which displays detailed information about what an object contains, e.g. its properties and parameters. Try e.g. help(pipeline) to list all possible tasks, like:
– “question-answering”: will return a [`QuestionAnsweringPipeline`].
– “summarization”: will return a [`SummarizationPipeline`].
– “table-question-answering”: will return a [`TableQuestionAnsweringPipeline`].
– “text2text-generation”: will return a [`Text2TextGenerationPipeline`].
– “text-classification” (alias “sentiment-analysis” available): will return a [`TextClassificationPipeline`].

Each model page on Hugging Face also has sample code showing how to start using the model, in the “Use this model” menu.

Using transformers from code

# in Google Colab use ! for pip
!pip install transformers

# login to hub on Colab, no need locally if logged in earlier
from huggingface_hub import notebook_login
notebook_login()

# -------------------------------------------

# for most objects you can use help(), e.g. help(pipeline)

# using transformers and pipelines
import transformers
from transformers import AutoTokenizer
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
classifier = pipeline("text-classification", model="philschmid/tiny-bert-sst2-distilled")

# load training data ("smth" is a placeholder dataset name)
from datasets import load_dataset
dataset = load_dataset("smth")

# to get a tensor of token ids
print(tokenizer("Some random sentences here", return_tensors='pt').input_ids)

# to get classifier analysis
classifier("I love that thing!")
classifier(["I love that thing!", "I hate your style."])
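Each call returns a list of dicts with label and score keys. A toy sketch of picking the top result (the labels and scores below are made up for illustration, not from a real run):

```python
# Hypothetical output shaped like a text-classification pipeline result.
results = [
    {"label": "POSITIVE", "score": 0.98},
    {"label": "NEGATIVE", "score": 0.02},
]

# Pick the label with the highest score.
best = max(results, key=lambda r: r["score"])
print(best["label"])  # POSITIVE
```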

# to decode a single token id, usually taken from input_ids
tokenizer.decode(1024)

input_ids = tokenizer("Some random sentences here", return_tensors='pt').input_ids

from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct")

# optional - example only
import torch

response_logits = model(input_ids).logits
best_find_index = response_logits[0, -1].argmax()
tokenizer.decode(best_find_index)
e1 = torch.topk(response_logits[0, -1], 10)
e2 = torch.topk(response_logits[0, -1].softmax(dim=0), 10)

# response_logits is a torch tensor, so you can use response_logits.shape to see
# its dimensions, and use indexes or just take the last vector [0, -1] when there
# is 1 sequence and 3 dimensions
# e1 - just the top-k logits, e2 - top-k with probabilities
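The argmax-over-logits step above (greedy next-token selection) can be sketched in plain Python with a toy vocabulary and made-up logits, no real model involved:

```python
import math

# Toy vocabulary and made-up logits for the next-token position.
vocab = ["the", "cat", "sat", "mat"]
logits = [1.0, 3.5, 0.2, 2.0]

# Softmax turns logits into probabilities.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy decoding: pick the index with the highest probability (argmax).
best_index = max(range(len(probs)), key=probs.__getitem__)
print(vocab[best_index])  # cat
```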

# TEXT generation
out = model.generate(input_ids, max_new_tokens=20, repetition_penalty=1.4, do_sample=True, top_k=5, top_p=0.9, temperature=0.5)
print(tokenizer.decode(out[0]))
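The generate() knobs above (do_sample, top_k, temperature) can be illustrated with a toy sampler in plain Python. This is a sketch with made-up logits and a seeded RNG, not the actual transformers implementation:

```python
import math
import random

def sample_top_k(logits, k=2, temperature=0.5, rng=None):
    """Toy top-k sampling: keep the k largest logits, rescale by
    temperature, softmax, then draw one index."""
    rng = rng or random.Random(0)
    # Indices of the k highest logits.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Lower temperature sharpens the distribution, higher flattens it.
    scaled = [logits[i] / temperature for i in top]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one of the top-k indices according to its probability.
    r = rng.random()
    acc = 0.0
    for idx, p in zip(top, probs):
        acc += p
        if r <= acc:
            return idx
    return top[-1]

logits = [1.0, 3.5, 0.2, 2.0]
print(sample_top_k(logits, k=2, temperature=0.5))  # 1 with the default seed
```

Low temperature plus small k makes the draw nearly greedy; raising either increases variety, which is the trade-off the generate() call above is tuning.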
