Fast Machine Learning Lab
Real-time and accelerated ML for fundamental sciences
Repositories
- SuperSONIC (Public)
  Server infrastructure for GPU inference-as-a-service in large scientific experiments (see the client sketch after this list).
- onnxruntime_backend (Public, forked from triton-inference-server/onnxruntime_backend)
  The Triton backend for the ONNX Runtime.
- server (Public, forked from triton-inference-server/server)
  The Triton Inference Server provides an optimized cloud and edge inferencing solution.
- pytorch_backend (Public, forked from triton-inference-server/pytorch_backend)
  The Triton backend for PyTorch TorchScript models.
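The repositories above revolve around inference-as-a-service: SuperSONIC deploys Triton Inference Server instances, and the backend forks let Triton serve ONNX Runtime and TorchScript models. As a rough illustration of how an experiment's client code talks to such a service, the sketch below sends one HTTP inference request with the official `tritonclient` package. The server URL, the model name `my_model`, the tensor names `INPUT0`/`OUTPUT0`, and the shape are placeholder assumptions; a real deployment defines its own model repository and tensor signatures.

```python
# Minimal sketch of querying a running Triton Inference Server over HTTP.
# All model/tensor names and the URL below are hypothetical placeholders.
import numpy as np
import tritonclient.http as httpclient  # pip install tritonclient[http]

client = httpclient.InferenceServerClient(url="localhost:8000")

# Describe the input tensor and fill it with dummy data.
infer_input = httpclient.InferInput("INPUT0", [1, 16], "FP32")
infer_input.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))

# Request one named output tensor back from the server.
requested_output = httpclient.InferRequestedOutput("OUTPUT0")

# Send the request and read the result as a NumPy array.
result = client.infer(model_name="my_model",
                      inputs=[infer_input],
                      outputs=[requested_output])
print(result.as_numpy("OUTPUT0"))
```

The same request could be issued over gRPC via `tritonclient.grpc`; which protocol and which backend (ONNX Runtime, PyTorch, etc.) handles the model is configured server-side, not in the client code.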