Abstract: Deep learning is everywhere. It's easy to assemble a deep learning workflow, and it is fast. Accuracies have also been improving and are generally in the high eighties and nineties. But deep learning is also the PCA of today: it works, but the features that make it work are jumbled and hidden. As we move towards understanding (a) fainter sources and (b) a plethora of fleeting sources, as required by rapid follow-up of LIGO sources, we need to make deep learning more interpretable, whether it is based on light curves or images. In fact, combining varied sources of knowledge is often best. We will present ongoing work that combines visualization with a pinch of pragmatism as we handle the growing ZTF data and prepare for even bigger data challenges.