This is a full guide, with commands, for installing Alibaba's Qwen-7B-Chat LLM locally on AWS or any Linux instance for AI chat.
Commands Used:
# Clone the Qwen repo and install its dependencies
git clone https://github.com/QwenLM/Qwen-7B.git
cd Qwen-7B
pip install transformers
pip install -r requirements.txt
# Optional: build flash-attention for faster inference on supported GPUs
git clone -b v1.0.8 https://github.com/Dao-AILab/flash-attention
cd flash-attention && pip install .
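Before loading the model, it is worth checking that the instance has enough GPU memory. A minimal back-of-the-envelope sketch (weights only; real usage is higher once activations and the KV cache are included):

```python
# Rough VRAM estimate for the Qwen-7B-Chat weights in half precision.
# This is only the weights; activations and KV cache add more on top.
params = 7_000_000_000   # ~7 billion parameters
bytes_per_param = 2      # fp16/bf16 stores each weight in 2 bytes
weights_gb = params * bytes_per_param / 1024**3
print(f"Weights alone need roughly {weights_gb:.1f} GB of GPU memory")
```

So a GPU with at least 16 GB of VRAM is a sensible starting point for half-precision inference.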
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig

# Load the tokenizer and model from the Hugging Face Hub. trust_remote_code
# is required because Qwen ships custom model code with its checkpoint.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True).eval()

# Load the recommended generation settings (temperature, top_p, etc.)
model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)

# chat() returns the reply plus the running conversation history,
# which can be passed back in for multi-turn conversations.
response, history = model.chat(tokenizer, "How are you today?", history=None)
print(response)