We consider communication between modules in an integrated architecture for Speech and Natural Language (NL) processing, in particular communication with the semantics module. In an integrated Speech/Language system, several components--phonology (intonation), syntax, the context model--may express meaning constraints, which the semantics module must flexibly manage and evaluate in order to enable semantic inference. This paper describes an implemented approach in the ASL Project in which non-semantic modules provide feature-based constraints that are then translated into a meaning representation language. We realize these translator functions in the spirit of federated agent architectures (Genesereth); this functionality is required in heterogeneous integrated architectures and is implemented here using compiler technology.
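
To make the core idea concrete, the following is a minimal sketch, not the ASL Project's actual implementation: a translator function that maps a feature-based constraint contributed by a non-semantic module (syntax or intonation) into a term of a meaning representation language. The feature names, the `translate` function, and the target term syntax are all illustrative assumptions.

```python
# Illustrative sketch only: feature names and the target term syntax
# are hypothetical, not the ASL Project's actual formalism.

def translate(features: dict) -> str:
    """Translate one feature-based constraint into a term of a
    meaning representation language."""
    cat = features.get("cat")
    if cat == "np":
        # Map determiner features to a quantifier of the meaning language.
        quant = {"def": "iota", "indef": "exists"}.get(features.get("det"), "exists")
        return f"{quant}(x, {features['pred']}(x))"
    if cat == "intonation" and features.get("accent") == "focus":
        # A pitch-accent constraint from phonology becomes a focus operator.
        return f"focus({features['pred']})"
    raise ValueError(f"no translation for constraint: {features}")

# A syntactic and an intonational constraint, both rendered in the
# same meaning representation language for the semantics module:
print(translate({"cat": "np", "det": "def", "pred": "dog"}))
# -> iota(x, dog(x))
print(translate({"cat": "intonation", "accent": "focus", "pred": "dog"}))
# -> focus(dog)
```

The point of the sketch is only that heterogeneous modules can all be given the same kind of translator, so that their constraints arrive at the semantics module in one uniform language.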