This video is a step-by-step guide to installing a top code AI model locally. The model (Stable Code 3B) is small in size and can run on a CPU.
Code:
pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Download the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-3b",
    trust_remote_code=True,
    torch_dtype="auto",
)
# Move the model to the GPU if one is available; otherwise it stays on CPU
if torch.cuda.is_available():
    model = model.cuda()
inputs = tokenizer("write me a script in Java to reverse a list", return_tensors="pt").to(model.device)
tokens = model.generate(
    **inputs,
    max_new_tokens=500,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
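A note on the `temperature=0.2` setting used above: temperature divides the model's logits before softmax, so a low value sharpens the probability distribution and makes sampling behave closer to greedy decoding, while a high value flattens it and makes output more varied. A minimal sketch of the effect, using plain Python rather than the model itself (the logit values here are made up for illustration):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature, then apply a numerically
    # stable softmax (subtracting the max before exponentiating).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate tokens
logits = [2.0, 1.0, 0.5]

sharp = softmax_with_temperature(logits, 0.2)  # like temperature=0.2 above
flat = softmax_with_temperature(logits, 1.0)   # the default, unscaled softmax

# Low temperature concentrates probability mass on the top token
print(sharp[0] > flat[0])  # True
```

This is why a low temperature is a sensible default for code generation, where you usually want the most likely completion rather than creative variation.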