High-performance cluster networking for GPU systems has traditionally been associated with large-scale pre-training. Over time, post-training and distributed inference workloads have introduced their own distinct requirements. This talk surveys the traffic patterns induced by training and inference parallelism in LLM workloads and their impact on the evolution of scale-out and scale-up networks.
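As a rough illustration of how the choice of parallelism shapes traffic (the formulas and model sizes below are assumptions for demonstration, not material from the talk): data-parallel training all-reduces gradients whose size tracks the parameter count, while tensor parallelism all-reduces activations whose size tracks the token count, which places very different demands on scale-out versus scale-up fabrics. A minimal Python sketch, assuming a ring all-reduce moves roughly 2x the payload per worker and two activation all-reduces per transformer layer:

```python
# Illustrative back-of-the-envelope sketch; all formulas and parameter
# values are assumptions, not figures from the talk.

def data_parallel_volume_bytes(param_count: float, bytes_per_param: int = 2) -> float:
    """Gradient ring all-reduce moves ~2x the gradient payload per worker
    per step (reduce-scatter + all-gather), roughly independent of group size."""
    return 2 * param_count * bytes_per_param

def tensor_parallel_volume_bytes(batch: int, seq_len: int, hidden: int,
                                 layers: int, bytes_per_act: int = 2) -> float:
    """Tensor parallelism all-reduces activations twice per transformer layer
    (after attention and after the MLP), so traffic scales with token count,
    not parameter count. Includes the same ~2x ring all-reduce factor."""
    tokens = batch * seq_len
    per_layer = 2 * 2 * tokens * hidden * bytes_per_act
    return layers * per_layer

if __name__ == "__main__":
    # Hypothetical 70B-parameter model and an 8k-token micro-batch.
    dp = data_parallel_volume_bytes(param_count=70e9)
    tp = tensor_parallel_volume_bytes(batch=1, seq_len=8192, hidden=8192, layers=80)
    print(f"data-parallel gradient all-reduce per step:   {dp / 1e9:.1f} GB")
    print(f"tensor-parallel activation all-reduce per step: {tp / 1e9:.1f} GB")
```

The point of the sketch is only the contrast in scaling behavior: the first volume grows with model size and is typically spread across the scale-out network, while the second recurs every layer and is latency-sensitive, which is why it is usually confined to the scale-up domain.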
My Bio
DE-CIX
Arista
Amazon