Commit 63a05fa

Merge pull request #452 from oracle-samples/darenr-patch-1
Update deploy-with-smc.md
2 parents dad156e + b24e293

File tree

1 file changed: +2 -2 lines changed

ai-quick-actions/deploy-with-smc.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -1,8 +1,8 @@
 # Deploy ELYZA-japanese-Llama-2-13b-instruct with Oracle Service Managed vLLM(0.3.0) Container
 
-![ELYZA](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-instruct/resolve/main/key_visual.png)
+![ELYZA](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b/resolve/main/key_visual.png)
 
-This how-to will show how to use the Oracle Data Science Service Managed Containers - part of the Quick Actions feature, to inference with a model downloaded from Hugging Face. For this we will use [ELYZA-japanese-Llama-2-13b-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-instruct) from a company named ELYZA, which is known for its LLM research and is based out of the University of Tokyo. ELYZA uses pre-training from the English-dominant model because of the prevalence of English training data, along with an additional 18 billion tokens of Japanese data.
+This how-to will show how to use the Oracle Data Science Service Managed Containers - part of the Quick Actions feature, to inference with a model downloaded from Hugging Face. For this we will use [ELYZA-japanese-Llama-2-13b-instruct](https://huggingface.co/collections/elyza/elyza-japanese-llama-2-13b-6589ba0435f23c0f1c41d32a) from a company named ELYZA, which is known for its LLM research and is based out of the University of Tokyo. ELYZA uses pre-training from the English-dominant model because of the prevalence of English training data, along with an additional 18 billion tokens of Japanese data.
 
 ## Required IAM Policies
 
```
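The how-to referenced by this diff works with model weights pulled from Hugging Face before they are registered through AI Quick Actions and served with the Service Managed vLLM(0.3.0) container. As a minimal sketch of that download step (not part of this commit; it assumes the `huggingface_hub` package is installed and uses an illustrative local directory name):

```python
# Sketch: fetch the ELYZA model referenced in the how-to from Hugging Face.
# Assumes `pip install huggingface_hub`; the local_dir name is illustrative.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="elyza/ELYZA-japanese-Llama-2-13b-instruct",
    local_dir="ELYZA-japanese-Llama-2-13b-instruct",
)
print(f"Model files downloaded to: {local_path}")
```

The downloaded folder would then typically be registered as a model in AI Quick Actions before deployment, as described in the rest of deploy-with-smc.md.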