
@gracelgilbert

  • Repo Link
  • Features
    • Array Scan
      • CPU
      • GPU (naive, efficient, thrust)
    • Stream Compaction (a minimal scan + compaction sketch follows this list)
      • CPU (with and without scan)
      • GPU (with efficient GPU scan)
    • MLP Network (an illustrative XOR forward-pass sketch follows this list)
      • Training on data sets (XOR pairs and character data)
      • Error propagation
      • Prediction based on preset weights
    • Performance analysis of parallel algorithms on CPU and GPU
  • The given information was useful for running a single iteration of the MLP, but more detail on the overall structure of the training process would have helped, in particular the outer loops of running the network and adjusting the weights until the results converge.
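
The stream compaction "with scan" path above follows the usual map → scan → scatter pattern. Below is a minimal CPU sketch of that pattern in C++; the function names (`exclusiveScan`, `compactWithScan`) are placeholders for illustration, not the repo's API, and the GPU naive, work-efficient, and thrust versions parallelize these same three steps.

```cpp
#include <cstdio>
#include <vector>

// Exclusive prefix sum: out[i] = sum of in[0..i-1], with out[0] = 0.
std::vector<int> exclusiveScan(const std::vector<int>& in) {
    std::vector<int> out(in.size());
    int sum = 0;
    for (size_t i = 0; i < in.size(); ++i) {
        out[i] = sum;
        sum += in[i];
    }
    return out;
}

// Stream compaction using scan: keep only nonzero elements.
std::vector<int> compactWithScan(const std::vector<int>& in) {
    // 1. Map each element to a 0/1 flag: is it kept?
    std::vector<int> keep(in.size());
    for (size_t i = 0; i < in.size(); ++i) {
        keep[i] = (in[i] != 0) ? 1 : 0;
    }
    // 2. Exclusive scan of the flags gives each kept element's output index.
    std::vector<int> idx = exclusiveScan(keep);
    // 3. Scatter: write kept elements to their scanned indices.
    int count = in.empty() ? 0 : idx.back() + keep.back();
    std::vector<int> out(count);
    for (size_t i = 0; i < in.size(); ++i) {
        if (keep[i]) {
            out[idx[i]] = in[i];
        }
    }
    return out;
}

int main() {
    std::vector<int> data = {3, 0, 1, 0, 0, 4, 5, 0};
    std::vector<int> scanned = exclusiveScan(data);
    std::vector<int> compacted = compactWithScan(data);

    printf("scan:    ");
    for (int v : scanned) printf("%d ", v);   // 0 3 3 4 4 4 8 13
    printf("\ncompact: ");
    for (int v : compacted) printf("%d ", v); // 3 1 4 5
    printf("\n");
    return 0;
}
```

The exclusive scan gives each surviving element its output index directly, which is what makes the scatter step independent per element and therefore easy to parallelize on the GPU.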
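
For the MLP's prediction-from-preset-weights item, here is an illustrative forward pass over the XOR pairs. The 2-2-1 layout and the hand-picked weights are assumptions made for this example only, not the repo's architecture or learned weights; training would instead adjust the weights through the error-propagation step over many iterations until the outputs converge.

```cpp
#include <cmath>
#include <cstdio>

// Logistic activation used by each neuron in this sketch.
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Forward pass of a 2-2-1 network. The weights below are hypothetical
// hand-picked values that happen to solve XOR; they are not weights
// produced by the project's training code.
double forwardXor(double x0, double x1) {
    // Hidden neuron 0 behaves like OR, hidden neuron 1 like NAND.
    double h0 = sigmoid(20.0 * x0 + 20.0 * x1 - 10.0);
    double h1 = sigmoid(-20.0 * x0 - 20.0 * x1 + 30.0);
    // The output neuron ANDs the two hidden activations, yielding XOR.
    return sigmoid(20.0 * h0 + 20.0 * h1 - 30.0);
}

int main() {
    const double inputs[4][2] = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    for (const auto& in : inputs) {
        printf("%.0f XOR %.0f -> %.3f\n", in[0], in[1], forwardXor(in[0], in[1]));
    }
    return 0;
}
```

Each input pair maps to an output close to 0 or 1 after the two sigmoid layers, matching the XOR truth table.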
