New architectures for Brain-Machine Interface (BMI) communication and control use mixture models to expand the rehabilitation capabilities of disabled patients. Here we present and test a dynamic data-driven BMI architecture that relies on multiple pairs of forward-inverse models to predict, control, and learn the trajectories of a robotic arm in a real-time closed-loop system. A windowed recursive least squares (window-RLS) method was used to compute the forward-inverse model pairs in real time, and a model-switching mechanism based on reinforcement learning was used to test the ability to map neural activity to elementary behaviors. The architecture was tested with in vivo data and implemented using remote computing resources.
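
The abstract does not give implementation details for the window-RLS estimator, so the following is a minimal sketch under stated assumptions: linear forward and inverse models fitted by exact sliding-window recursive least squares, using a Sherman-Morrison rank-one update for the newest sample and a rank-one downdate for the sample leaving the window. The class name `WindowRLS`, the window length, the regularization constant `delta`, and the feature dimensions are illustrative, not taken from the paper.

```python
import numpy as np
from collections import deque


class WindowRLS:
    """Sliding-window recursive least squares for a linear multi-output model.

    Fits W so that y ~ W.T @ x over the most recent `window` samples.
    The same estimator can be instantiated twice per module: once as a
    forward model (neural features -> arm kinematics) and once as an
    inverse model (kinematics -> neural features).  All dimensions and
    constants here are illustrative assumptions.
    """

    def __init__(self, n_in, n_out, window=200, delta=1e-2):
        self.window = window
        self.P = np.eye(n_in) / delta      # inverse input-correlation matrix
        self.r = np.zeros((n_in, n_out))   # input/output cross-correlation
        self.buffer = deque()              # samples currently inside the window

    def _rank_one(self, x, y, sign):
        """Sherman-Morrison update (sign=+1) or downdate (sign=-1) of P and r."""
        Px = self.P @ x
        denom = 1.0 + sign * (x @ Px)
        self.P -= sign * np.outer(Px, Px) / denom
        self.r += sign * np.outer(x, y)

    def update(self, x, y):
        """Add the newest sample; drop the oldest once the window is full."""
        self._rank_one(x, y, +1.0)
        self.buffer.append((x, y))
        if len(self.buffer) > self.window:
            x_old, y_old = self.buffer.popleft()
            self._rank_one(x_old, y_old, -1.0)
        return self.P @ self.r             # current weight matrix W

    def predict(self, x):
        return (self.P @ self.r).T @ x
```

In this reading, each elementary behavior would receive its own pair of estimators, e.g. `WindowRLS(n_in=96, n_out=3)` for the forward map from 96 neural features to 3-D arm kinematics and `WindowRLS(n_in=3, n_out=96)` for the inverse map, both updated on every closed-loop cycle.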
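The abstract likewise does not specify which reinforcement-learning rule drives the model switching, so the sketch below treats module selection as a simple softmax action-value (bandit-style) learner over the set of forward-inverse pairs. The class `ModelSwitcher`, the prediction-error reward, and all parameter values are assumptions for illustration only, not the paper's switching mechanism.

```python
import numpy as np


class ModelSwitcher:
    """Reinforcement-learning-based selection among forward-inverse model pairs.

    Minimal sketch: keeps a running action value per module and samples
    modules through a softmax policy.  The reward is assumed to be the
    negative forward-model prediction error, which favours the module
    that currently explains the observed neural-to-kinematic mapping.
    """

    def __init__(self, n_modules, alpha=0.1, temperature=0.5):
        self.q = np.zeros(n_modules)      # action value per forward-inverse pair
        self.alpha = alpha                # learning rate
        self.temperature = temperature    # softmax exploration temperature

    def select(self, rng=np.random):
        logits = self.q / self.temperature
        p = np.exp(logits - logits.max()) # numerically stable softmax
        p /= p.sum()
        return rng.choice(len(self.q), p=p)

    def update(self, module, reward):
        """Move the chosen module's value toward the observed reward."""
        self.q[module] += self.alpha * (reward - self.q[module])


# Illustrative closed-loop step (names are assumptions, not the paper's API):
# module = switcher.select()
# y_hat  = forward_models[module].predict(neural_features)
# reward = -np.linalg.norm(arm_state - y_hat)   # prediction-error reward
# switcher.update(module, reward)
```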