This paper explores the feasibility of entirely disaggregating memory from compute and storage for a particular, widely deployed workload: Spark SQL [9] analytics queries. We measure the empirical rate at which records are processed and calculate the effective memory bandwidth utilized based on the sizes of the columns accessed by each query. Our findings contradict conventional wisdom: memory disaggregation is not only possible for this workload, but achievable with commercially available network technology. Beyond this finding, we also recommend changes to Spark SQL that would improve its ability to support memory disaggregation.
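As a sketch of this calculation (the symbols below are our own shorthand, not notation from the original text): if a query reads columns of widths $s_1, \dots, s_k$ bytes per record and records are processed at a measured rate of $R$ records per second, the effective memory bandwidth consumed is

$$
B_{\mathrm{eff}} \;=\; R \times \sum_{i=1}^{k} s_i .
$$

As a purely illustrative example, a query touching two 8-byte columns at $R = 10^{8}$ records per second would draw roughly $1.6\,\mathrm{GB/s}$ of effective memory bandwidth.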