Multi-target classification of multivariate time series data poses a challenge in many real-world applications (e.g., predictive maintenance). Machine learning methods, such as random forests and neural networks, can be used to train such classifiers. However, debugging and analyzing possible misclassifications remain challenging due to the often complex relations between targets, classes, and the multivariate time series data. We propose a model-agnostic visual debugging workflow for multi-target time series classification that enables the examination of relations between targets, partially correct predictions, potential confusions, and the classified time series data. The workflow, as well as the prototype, aims to foster an in-depth analysis of multi-target classification results to visually identify potential causes of mispredictions. We demonstrate the usefulness of the workflow in a usage scenario from the field of predictive maintenance, showing how users can iteratively explore and identify critical classes, as well as relationships between targets.
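To make the setting concrete, the following is a minimal, hypothetical sketch of multi-target classification on multivariate time series (it is not the paper's actual pipeline): each series is flattened into a feature vector, a random forest is trained per target via scikit-learn's `MultiOutputClassifier`, and partially correct predictions — the rows where some, but not all, targets match — are identified, which is one of the prediction outcomes the proposed workflow lets users examine. All names and the synthetic data are illustrative assumptions.

```python
# Hypothetical sketch: multi-target classification of multivariate time
# series with a random forest (synthetic data, not the paper's method).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)

# 100 multivariate time series: 3 sensor channels x 50 time steps each.
n_series, n_channels, n_steps = 100, 3, 50
X_series = rng.normal(size=(n_series, n_channels, n_steps))

# Simplest possible featurization: flatten each series into one vector.
X = X_series.reshape(n_series, -1)

# Two targets, each with its own classes (e.g., failure type and
# affected component in a predictive-maintenance setting).
y = np.column_stack([
    rng.integers(0, 3, size=n_series),  # target 1: 3 classes
    rng.integers(0, 2, size=n_series),  # target 2: 2 classes
])

# One random forest per target, wrapped for multi-target output.
clf = MultiOutputClassifier(
    RandomForestClassifier(n_estimators=50, random_state=0)
)
clf.fit(X, y)

pred = clf.predict(X)  # shape (n_series, 2): one predicted label per target

# Count correct targets per instance; "partially correct" means some,
# but not all, targets were predicted correctly.
n_correct = (pred == y).sum(axis=1)
partially_correct = (n_correct > 0) & (n_correct < y.shape[1])
print(pred.shape, int(partially_correct.sum()))
```

A visual debugging workflow such as the one proposed would then let users drill down from these aggregate counts into the confusions per target and back to the underlying time series.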