1 parent b62b606 commit a43e52c
keras_hub/src/models/stablelm/stablelm_attention.py
@@ -21,8 +21,7 @@ class StableLMAttention(keras.layers.Layer):
         num_query_heads: int. Number of attention heads for queries.
         num_key_value_heads: int. Number of attention heads for keys and
             values.
-        hidden_dim: int. Hidden dimension of the input (e.g., 2560 for
-            StableLM-3B4E1T).
+        hidden_dim: int. Hidden dimension of the input.
         rope_max_wavelength: float. Maximum wavelength for rotary embeddings
             (default: 10000).
         rope_scaling_factor: float. Scaling factor for rotary embeddings
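
For reference, the documented constructor arguments can be exercised as in the minimal sketch below. It assumes StableLMAttention is constructible with exactly the arguments named in the docstring; the numeric values (including hidden_dim=2560, which the old docstring cited for StableLM-3B4E1T) and the forward-call shape are illustrative assumptions, not taken from this diff.

import keras
from keras_hub.src.models.stablelm.stablelm_attention import StableLMAttention

# Illustrative hyperparameters; 2560 is the hidden size the removed
# docstring text cited for StableLM-3B4E1T.
attention = StableLMAttention(
    num_query_heads=32,         # attention heads for queries
    num_key_value_heads=32,     # attention heads for keys and values
    hidden_dim=2560,            # hidden dimension of the input
    rope_max_wavelength=10000,  # max wavelength for rotary embeddings
    rope_scaling_factor=1.0,    # scaling factor for rotary embeddings
)

# Assumed call interface: self-attention over a batch of sequences
# shaped (batch, seq_len, hidden_dim).
x = keras.random.normal((2, 16, 2560))
y = attention(x)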