If you couldn't mutate the list, then your reasoning would be perfectly sound. Unfortunately, a List<> is manipulated imperatively, which means you can change a List<Animal> by adding a new Animal to it. If you were allowed to use a List<Dog> as a List<Animal>, you could wind up with a list of Dogs that also contains a Cat.
If List<> were incapable of mutation (as in Scala), then you could safely treat a List<Dog> as a List<Animal>. C#, for instance, makes this behavior possible with covariant and contravariant generic type parameters.
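Java makes the trade-off concrete: generic lists are invariant, so the unsafe assignment won't even compile, while arrays *are* covariant and defer the check to runtime. A minimal sketch (the Animal/Dog/Cat classes are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

class Animal {}
class Dog extends Animal {}
class Cat extends Animal {}

public class CovarianceDemo {
    public static void main(String[] args) {
        List<Dog> dogs = new ArrayList<>();
        // List<Animal> animals = dogs;  // does not compile: Java generics are invariant
        // animals.add(new Cat());       // ...precisely because this would corrupt `dogs`

        // Java arrays, by contrast, ARE covariant, and pay for it with a runtime check:
        Animal[] animals = new Dog[1];
        try {
            animals[0] = new Cat();      // compiles fine...
        } catch (ArrayStoreException e) {
            System.out.println("runtime failure: " + e); // ...but fails here
        }
    }
}
```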
This is an instance of the more general Liskov substitution principle.
Mutation causes this same kind of trouble elsewhere. Consider the types Square and Rectangle.
Is a Square a Rectangle? Certainly -- from a mathematical perspective.
You could define a Rectangle class which offers readable getWidth and getHeight properties.
You could even add methods that calculate its area or perimeter, based on those properties.
You could then define a Square class that subclasses Rectangle and makes both getWidth and getHeight return the same value.
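As a sketch in Java (the class and method names are illustrative), this read-only version is perfectly well-behaved:

```java
class Rectangle {
    private final double width;
    private final double height;

    Rectangle(double width, double height) {
        this.width = width;
        this.height = height;
    }

    double getWidth()  { return width; }
    double getHeight() { return height; }

    // Derived quantities computed from the two readable properties.
    double area()      { return getWidth() * getHeight(); }
    double perimeter() { return 2 * (getWidth() + getHeight()); }
}

class Square extends Rectangle {
    // A Square is just a Rectangle whose sides agree -- forever.
    Square(double side) { super(side, side); }
}

public class ShapeDemo {
    public static void main(String[] args) {
        Rectangle r = new Square(3);
        System.out.println(r.area());      // 9.0
        System.out.println(r.perimeter()); // 12.0
    }
}
```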
But what happens when you start allowing mutation via setWidth or setHeight?
Now, Square is no longer a reasonable subclass of Rectangle. Mutating one of those properties would have to silently change the other in order to maintain the invariant, violating the Liskov substitution principle. Changing the width of a Square would have an unexpected side effect: in order to remain a square, it would have to change the height as well, but you only asked to change the width!
You can no longer use your Square anywhere you could have used a Rectangle. So, in the presence of mutation, a Square is not a Rectangle!
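One way to see the violation concretely (a sketch; MutableSquare's overrides are exactly the "silent side effect" described above):

```java
class MutableRectangle {
    protected double width;
    protected double height;

    MutableRectangle(double width, double height) {
        this.width = width;
        this.height = height;
    }

    void setWidth(double w)  { width = w; }
    void setHeight(double h) { height = h; }
    double area() { return width * height; }
}

class MutableSquare extends MutableRectangle {
    MutableSquare(double side) { super(side, side); }

    // To stay square, each setter silently changes the OTHER dimension too.
    @Override void setWidth(double w)  { width = w; height = w; }
    @Override void setHeight(double h) { width = h; height = h; }
}

public class LspDemo {
    // A caller reasoning about Rectangles expects width * height = 4 * 5 here.
    static double stretch(MutableRectangle r) {
        r.setWidth(4);
        r.setHeight(5);
        return r.area();
    }

    public static void main(String[] args) {
        System.out.println(stretch(new MutableRectangle(1, 1))); // 20.0
        System.out.println(stretch(new MutableSquare(1)));       // 25.0 -- surprise!
    }
}
```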
You could add a method to Rectangle that knows how to clone the rectangle with a new width or a new height. Your Square could then safely devolve into a Rectangle during the cloning process -- but now you are no longer mutating the original value.
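A sketch of that cloning approach (the withWidth/withHeight names are hypothetical): the "setters" return a brand-new Rectangle, so a Square can safely produce a plain Rectangle instead of mutating itself.

```java
class Rect {
    final double width;
    final double height;

    Rect(double width, double height) {
        this.width = width;
        this.height = height;
    }

    // Instead of mutating, clone with one dimension replaced.
    Rect withWidth(double w)  { return new Rect(w, height); }
    Rect withHeight(double h) { return new Rect(width, h); }
}

class Sq extends Rect {
    Sq(double side) { super(side, side); }
    // withWidth/withHeight are inherited as-is: they return a plain Rect,
    // so the Square "devolves" into a Rectangle rather than breaking its invariant.
}

public class CloneDemo {
    public static void main(String[] args) {
        Rect r = new Sq(3).withWidth(5);              // result is a Rect, not a Sq
        System.out.println(r.width + "x" + r.height); // 5.0x3.0
        System.out.println(r instanceof Sq);          // false
    }
}
```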
Similarly, a List<Dog> cannot be a List<Animal>, because its interface empowers you to add new items to the list.