I have seen that in most cases the time complexity is related to the space complexity, and vice versa. For example, in an array traversal:
for i=1 to length(v)
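A minimal runnable version of that traversal (a Python sketch; the list v and the accumulator total are just illustrative names, not taken from the question) makes the two costs visible: the loop does one step per element, hence O(n) time, while it only ever keeps a constant number of extra variables, hence O(1) additional space.

    v = [3, 1, 4, 1, 5, 9, 2, 6]   # input of size n
    total = 0                      # O(1) extra space: a single accumulator
    for i in range(len(v)):        # O(n) time: one iteration per element
        total += v[i]
    print(total)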
There is a well-known relation between time and space complexity.
First of all, time is an obvious bound on space consumption: in time t you cannot reach more than O(t) memory cells. This is usually expressed by the inclusion
DTime(f) ⊆ DSpace(f)
where DTime(f) and DSpace(f) are the sets of languages recognizable by a deterministic Turing machine in time (respectively, space) O(f). That is to say, if a problem can be solved in time O(f), then it can also be solved in space O(f).
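A toy way to see this inclusion (a Python sketch, under the simple assumption that a single-tape machine moves its head by at most one cell per step): however the head moves for t steps, it can visit at most t + 1 distinct cells.

    import random

    def cells_touched(t):
        """Simulate t arbitrary head moves and count the distinct cells visited."""
        position = 0
        visited = {0}
        for _ in range(t):
            position += random.choice((-1, 0, 1))  # one head move per step
            visited.add(position)
        return len(visited)

    for t in (0, 5, 50, 500):
        assert cells_touched(t) <= t + 1           # space used is at most O(t)
        print(t, cells_touched(t))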
Less evident is the fact that space also provides a bound on time. Suppose that, on an input of size n, you have f(n) memory cells at your disposal, comprising registers, caches and everything else. Once these cells have been written in all possible ways, the computation may as well be stopped: otherwise it would reenter a configuration it has already been through and start to loop. Now, over a binary alphabet, f(n) cells can be written in 2^f(n) different ways, and that gives our time upper bound: either the computation stops within this bound, or you may force termination, since it will never stop.
This is usually expressed by the inclusion

DSpace(f) ⊆ DTime(2^(cf))

for some constant c. The reason for the constant c is that if L is in DSpace(f), you only know that it will be recognized in space O(f), while in the previous reasoning f was an actual bound.
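To see where the exponent comes from more explicitly (a rough count under the same assumptions, not a formal proof): a configuration is determined by the current state, the head position and the contents of the memory, so a machine with k states using at most c'·f(n) binary cells has no more than k · c'·f(n) · 2^(c'·f(n)) ≤ 2^(c·f(n)) configurations, for a suitable constant c and f large enough; a deterministic computation that runs for more steps than that must repeat a configuration and therefore never halts.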
The above relations are subsumed by stronger versions involving nondeterministic models of computation, which is how they are frequently stated in textbooks (see e.g. Theorem 7.4 in Computational Complexity by Papadimitriou).