One aspect of rational behavior is that agents can pursue multiple goals in parallel. Current BDI theory and systems provide neither a theoretical nor an architectural framework for deciding how goals interact and how an agent can decide which goals to pursue. Instead, they assume, for the sake of simplicity, that agents always pursue consistent goal sets. By omitting this important aspect of rationality, the problem of goal deliberation is shifted from the architecture to the agent programming level, where it must be handled by the agent developer in an error-prone, ad-hoc manner. This paper argues that goal deliberation mechanisms can hardly be built directly into the fixed BDI interpreter cycle, because goal deliberation typically needs to be performed irregularly, at any point in time. Therefore, an enhanced BDI interpreter architecture is proposed that is specifically designed for extensibility. This extensibility can be exploited to integrate arbitrary goal deliberation strategies.
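To make the architectural idea concrete, the following is a minimal, purely illustrative sketch (all names are hypothetical, not taken from the paper): an agent delegates goal activation to a pluggable deliberation strategy object, which is invoked whenever the goal set changes rather than at a fixed point in the interpreter cycle.

```python
class Goal:
    """A goal with a name and a priority; 'active' marks whether it is pursued."""
    def __init__(self, name, priority=0):
        self.name = name
        self.priority = priority
        self.active = False

class DeliberationStrategy:
    """Pluggable strategy deciding which goals may be active at the same time."""
    def deliberate(self, goals):
        raise NotImplementedError

class SingleGoalByPriority(DeliberationStrategy):
    """A trivial example strategy: only the highest-priority goal is pursued."""
    def deliberate(self, goals):
        for g in goals:
            g.active = False
        if goals:
            max(goals, key=lambda g: g.priority).active = True
        return [g for g in goals if g.active]

class Agent:
    def __init__(self, strategy):
        self.goals = []
        self.strategy = strategy

    def adopt(self, goal):
        # Adopting a goal triggers deliberation immediately, independent of
        # the interpreter's regular cycle; the strategy is freely exchangeable.
        self.goals.append(goal)
        return self.strategy.deliberate(self.goals)

agent = Agent(SingleGoalByPriority())
agent.adopt(Goal("patrol", priority=1))
active = agent.adopt(Goal("recharge", priority=5))
print([g.name for g in active])  # -> ['recharge']
```

A more elaborate strategy (e.g. one based on inhibition relations between goals) could be substituted without touching the interpreter itself, which is the kind of extensibility the abstract refers to.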