krishkc5/energy_aware_quantization

End-to-end framework for evaluating accuracy, latency, and energy of quantized LLMs (FP32, FP16, INT8). Includes pre-tokenized SST-2 datasets, a reproducible inference harness, GPU power logging via nvidia-smi, and analysis tools for energy-aware model deployment.
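The description mentions GPU power logging via nvidia-smi. A minimal sketch of how such a logger could work is below: `--query-gpu=power.draw`, `--format=csv,noheader,nounits`, and the `-i` device flag are real nvidia-smi options, but the function names, the 0.1 s polling interval, and the trapezoidal energy integration are assumptions for illustration, not this repo's actual code.

```python
import subprocess
import threading
import time


def sample_power_watts(gpu_index=0):
    """Read the instantaneous power draw (watts) of one GPU via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "-i", str(gpu_index),
         "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        text=True,
    )
    return float(out.strip().splitlines()[0])


def energy_joules(power_samples, interval_s):
    """Integrate evenly spaced power samples (W) into energy (J)
    using the trapezoidal rule."""
    if len(power_samples) < 2:
        return 0.0
    return sum(
        (a + b) / 2.0 * interval_s
        for a, b in zip(power_samples, power_samples[1:])
    )


def measure_energy(fn, interval_s=0.1, gpu_index=0):
    """Run fn() while polling GPU power from a background thread;
    return (fn's result, estimated energy in joules)."""
    samples = []
    stop = threading.Event()

    def poll():
        while not stop.is_set():
            samples.append(sample_power_watts(gpu_index))
            time.sleep(interval_s)

    poller = threading.Thread(target=poll)
    poller.start()
    try:
        result = fn()
    finally:
        stop.set()
        poller.join()
    return result, energy_joules(samples, interval_s)
```

In a harness like this, `measure_energy(lambda: model(batch))` would be called once per precision (FP32, FP16, INT8) so that accuracy, latency, and joules per batch can be compared on the same inputs.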

Contributors 3
