Detection and classification of cells in immunohistochemistry (IHC) images play a vital role in modern computational pathology pipelines. Biopsy scoring and grading at the slide level are routinely performed by pathologists, but analysis at the cell level, often desired for personalized cancer treatment, is impractical to perform manually and therefore non-comprehensive. Following its remarkable success on natural images, deep learning has become the gold standard in computational pathology. Some current learning-based methods analyze biopsies at the tile level, thereby disregarding intra-tile cell variability, while others focus on accurate cell segmentation but do not address possible downstream tasks. Because cell detection and classification share low- and high-level features, the two tasks can be handled jointly by a single deep neural network, minimizing cumulative errors and improving the efficiency of both training and inference. We construct a novel dataset of Proteasome-stained Multiple Myeloma (MM) bone marrow slides, containing nine categories with distinct morphological traits. Given the difficulty of acquiring high-quality annotations in the medical-imaging domain, the proposed dataset is intentionally sparsely annotated, with only 5% of the cells labeled in each tile. To tackle both cell detection and classification within a single network, we cast them as one multi-class segmentation task and train the network with a combination of partial cross-entropy and energy-driven losses. Since full segmentation masks are unavailable during both training and validation, we evaluate on the combined detection and classification performance instead. Our strategy, uniting both tasks within the same network, achieves a better combined F-score, with faster training and inference, than comparable disjoint approaches.
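
The partial cross-entropy term used for training on sparsely annotated tiles can be sketched as follows: the loss is an ordinary per-pixel cross-entropy, but averaged only over the small fraction of pixels that carry an annotation, so unlabeled pixels contribute nothing. This is an illustrative sketch over flattened pixels, not the authors' exact implementation; the function name and data layout are assumptions.

```python
import math

def partial_cross_entropy(logits, labels, annotated):
    """Cross-entropy averaged only over annotated pixels.

    logits:    per-pixel class-score lists, shape (N, C), raw (unnormalized)
    labels:    N integer class indices (meaningful only where annotated)
    annotated: N booleans, True where a cell annotation exists (~5% of pixels)
    """
    total, count = 0.0, 0
    for scores, y, has_label in zip(logits, labels, annotated):
        if not has_label:
            continue  # unannotated pixels are excluded from the loss entirely
        m = max(scores)  # subtract the max for a numerically stable softmax
        log_z = m + math.log(sum(math.exp(s - m) for s in scores))
        total += log_z - scores[y]  # negative log-likelihood of the true class
        count += 1
    return total / count  # mean over annotated pixels only
```

In a deep-learning framework the same effect is typically obtained by masking the per-pixel loss map (or marking unlabeled pixels with an ignored index) before averaging, so gradients flow only from annotated cells.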