We study exploration in Multi-Armed Bandits (MAB) in a setting where~$k$ players collaborate to identify an $\epsilon$-optimal arm. Our motivation comes from the recent use of MAB algorithms in compu