I'm studying for my computing languages test, and there's one idea I'm having problems wrapping my head around.
I understood that regular grammars…
I think what you want to think about are the various pumping lemmata. A regular language can be recognized by a finite automaton. A context-free language requires a stack, and a context-sensitive language requires a linear bounded automaton (a Turing machine whose tape is limited to a constant multiple of the input length); give the machine two stacks and you have the power of a full Turing machine, which takes you past context-sensitive to the recursively enumerable languages.
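To make the "needs a stack" part concrete, here's a minimal sketch of my own (the function names and the example languages a*b and aⁿbⁿ are just my choices for illustration): a two-state recognizer for a regular language next to a stack-based recognizer for a context-free one.

```python
def accepts_astar_b(s):
    """Finite automaton for the regular language a*b: two states, no memory."""
    state = "start"
    for ch in s:
        if state == "start" and ch == "a":
            state = "start"          # loop on 'a'
        elif state == "start" and ch == "b":
            state = "accept"         # a single 'b' moves to the accepting state
        else:
            return False             # anything after the 'b', or a bad symbol, rejects
    return state == "accept"

def accepts_anbn(s):
    """Pushdown-style recognizer for a^n b^n: the stack counts the a's."""
    stack = []
    seen_b = False
    for ch in s:
        if ch == "a" and not seen_b:
            stack.append(ch)         # push one symbol per 'a'
        elif ch == "b":
            seen_b = True
            if not stack:
                return False         # more b's than a's
            stack.pop()              # pop one symbol per 'b'
        else:
            return False             # an 'a' after a 'b', or a bad symbol
    return not stack                 # every 'a' matched by exactly one 'b'

print(accepts_astar_b("aaab"))   # True
print(accepts_anbn("aaabbb"))    # True
print(accepts_anbn("aaabb"))     # False
```

No finite amount of state will do what that stack does, which is exactly why aⁿbⁿ isn't regular.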
So, if we think about the pumping lemma for regular languages, what it says, essentially, is that any sufficiently long string in a regular language can be broken into three pieces, x, y, and z, with y nonempty, such that everything in xy*z is still in the language (where * is Kleene repetition, i.e., 0 or more copies of y). You basically have one "nonterminal" that can be expanded.
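Here's a quick toy illustration of that splitting, again using a*b as the language (the particular split is hand-picked for this example; in general the lemma only promises that *some* valid split exists):

```python
import re

# Toy illustration of the regular pumping lemma on L = a*b.
def in_L(s):
    return re.fullmatch(r"a*b", s) is not None

s = "aaab"                 # a string in L, longer than the pumping length
x, y, z = "", "a", "aab"   # one valid split: y is nonempty and repeatable

for i in range(5):
    pumped = x + y * i + z
    print(i, pumped, in_L(pumped))   # stays in L for every i, including i = 0
```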
Now, what about context-free languages? There's an analogous pumping lemma for context-free languages that breaks any sufficiently long string in the language into five parts, uvxyz, with v and y not both empty, such that uvⁱxyⁱz is also in the language for every i ≥ 0. Now, you have two "nonterminals" that can be replicated, or pumped, as long as you pump both of them the same number of times.
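And the same kind of toy check for the classic context-free language aⁿbⁿ (again, the split is hand-picked for the illustration): the two pumpable pieces grow in lockstep, one on the a side and one on the b side.

```python
# Toy illustration of the context-free pumping lemma on L = { a^n b^n : n >= 0 }.
def in_L(s):
    n = len(s) // 2
    return s == "a" * n + "b" * n

u, v, x, y, z = "aa", "a", "", "b", "bb"   # one valid split of "aaabbb"

for i in range(5):
    pumped = u + v * i + x + y * i + z
    print(i, pumped, in_L(pumped))   # v and y are pumped the same number of times
```

Pump v without y (or vice versa) and you fall out of the language immediately, which is the intuition behind the "same number" requirement.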