The Structured Hidden Markov Model (S-HMM) is a variant of the Hierarchical Hidden Markov Model that exhibits interesting capabilities for extracting knowledge from symbolic sequences. In fact, the S-HMM structure provides an abstraction mechanism that allows a high-level symbolic description of the knowledge embedded in an S-HMM to be obtained easily. The paper provides a theoretical analysis of the complexity of the matching and training algorithms on S-HMMs. More specifically, it is shown that the Baum-Welch algorithm benefits from the so-called locality property, which allows specific components to be modified and retrained without retraining the full model. The problem of modeling duration and of extracting (embedding) readable knowledge from (into) an S-HMM is also discussed.
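To make the locality claim concrete, the following is a minimal sketch, in plain NumPy, of the general idea of block-restricted Baum-Welch re-estimation: only the parameters of the states in one designated block are updated, while all other components stay frozen. This is an illustration under simplifying assumptions (a flat discrete-emission HMM rather than an actual S-HMM), not the paper's algorithm, and all identifiers are hypothetical.

```python
# Sketch of block-restricted Baum-Welch re-estimation (illustrative only).
# Names (forward_backward, reestimate_block, A, B, pi, block) are assumptions,
# not identifiers from the paper.
import numpy as np

def forward_backward(A, B, pi, obs):
    """Scaled forward-backward pass; returns state (gamma) and transition (xi) posteriors."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N)); beta = np.zeros((T, N)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta                      # P(state_t = i | observations)
    xi = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :] / c[t + 1]
    return gamma, xi

def reestimate_block(A, B, pi, obs, block):
    """One Baum-Welch step that updates only the rows belonging to `block`."""
    obs = np.asarray(obs)
    gamma, xi = forward_backward(A, B, pi, obs)
    A_new, B_new = A.copy(), B.copy()
    for i in block:                           # states outside `block` keep their parameters
        A_new[i] = xi[:, i, :].sum(axis=0) / gamma[:-1, i].sum()
        for k in range(B.shape[1]):
            B_new[i, k] = gamma[obs == k, i].sum() / gamma[:, i].sum()
    return A_new, B_new

# Toy usage: a 4-state model whose last two states form the block being retrained.
A = np.array([[0.7, 0.3, 0.0, 0.0],
              [0.0, 0.6, 0.4, 0.0],
              [0.0, 0.0, 0.8, 0.2],
              [0.3, 0.0, 0.0, 0.7]])
B = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8], [0.6, 0.4]])
pi = np.array([1.0, 0.0, 0.0, 0.0])
obs = [0, 1, 1, 0, 1, 1, 1, 0]
A, B = reestimate_block(A, B, pi, obs, block=[2, 3])
```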