I would like to split this text into sentences:
s = "You! Are you Tom? I am Danny."
so I get:
[\"You!\", \"Are you To
The easiest way is to use nltk:

import nltk
nltk.sent_tokenize(s)

It will return a list of all your sentences without losing the delimiters. (Note that sent_tokenize relies on the punkt tokenizer data, which you may need to download once with nltk.download('punkt').)