Deep learning models are often accompanied by additional numerical data analysis tasks such as clustering, dimensionality reduction, data transformation, and linear modeling. While matrix engines are designed primarily with deep neural network workloads in mind, they have also proven useful for the general-purpose matrix processing that underlies such tasks. This talk will describe our experience using an open-source RISC-V SoC design framework (Chipyard) to evaluate the reuse of an edge SoC DNN accelerator for general matrix multiplication (GEMM) workloads. Specifically, we will focus on integration with the relevant software stack (BLAS) and its underlying assumptions, as well as the hardware implications of different arithmetic intensity regimes.
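
To make the BLAS integration point concrete, the sketch below shows the kind of single-precision GEMM call an accelerator-backed BLAS implementation would be asked to serve. The CBLAS interface itself is standard; the header name, matrix sizes, and fill values are illustrative assumptions rather than material from the talk, and the comments note where layout assumptions and arithmetic intensity enter.

    /* Minimal sketch of the BLAS entry point a DNN accelerator would serve:
     * single-precision GEMM, C = alpha*A*B + beta*C. Assumes a CBLAS-style
     * interface (e.g. OpenBLAS's <cblas.h>); sizes are illustrative.
     * Arithmetic intensity scales roughly as 2*M*N*K flops over
     * (M*K + K*N + M*N) elements touched, so small matrices land in a
     * memory-bound regime while large ones become compute-bound. */
    #include <cblas.h>
    #include <stdio.h>

    int main(void) {
        enum { M = 4, N = 4, K = 4 };
        float A[M * K], B[K * N], C[M * N];

        /* Fill A with 1s, B with 2s, and zero-initialize C. */
        for (int i = 0; i < M * K; i++) A[i] = 1.0f;
        for (int i = 0; i < K * N; i++) B[i] = 2.0f;
        for (int i = 0; i < M * N; i++) C[i] = 0.0f;

        /* Memory layout (row-major here) and leading dimensions are part of
         * the interface contract an accelerator-backed BLAS must honor. */
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    M, N, K,
                    1.0f,  /* alpha */
                    A, K,  /* lda = K: row-major, no transpose */
                    B, N,  /* ldb = N */
                    0.0f,  /* beta */
                    C, N); /* ldc = N */

        printf("C[0][0] = %.1f\n", C[0]);  /* expect 2*K = 8.0 */
        return 0;
    }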