
The last ReLU activation is messing up your output. It is not common to use an activation function on the last layer: ReLU clamps every negative value to zero, so the model can never predict anything below zero.

model_v2 = nn.Sequential(
    nn.Linear(in_features=2, out_features=10),
    nn.ReLU(),
    nn.Linear(10, 10),
    nn.ReLU(),
    nn.Linear(10, 1),
    nn.ReLU()  # DELETE THIS
)

Also, zero out your gradients in the training loop, or they will accumulate across iterations:

optimizer.zero_grad()
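
For reference, here is a minimal sketch of where that call fits in a typical training step. The loss function loss_fn and the tensors X_train / y_train are placeholders for whatever you are actually using:

# One training step (sketch; loss_fn, X_train, y_train are assumed names)
model_v2.train()
y_pred = model_v2(X_train)          # forward pass
loss = loss_fn(y_pred, y_train)     # compute loss
optimizer.zero_grad()               # clear gradients from the previous step
loss.backward()                     # backpropagation
optimizer.step()                    # update weights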

And for better results, try the Adam optimizer with lr = 0.1:
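
Something along these lines, assuming your model is named model_v2 as above:

# Swap in Adam with a learning rate of 0.1
optimizer = torch.optim.Adam(params=model_v2.parameters(), lr=0.1)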

Hope this answers your questions.
