I want to have some kind of failsafe in place for my Android social-network-style app so that when a lot of users are connected through it to my Postgresql database, it continue
Each platform has a different connection pooling interface. You'll need to read the documentation for the specific platform you use (Ruby+Rails or whatever), or use a generic pooling midlayer like PgBouncer.
Answers relating to one tool (say, PHP with Zend Framework) will have nothing to do with answers relating to another tool (like Ruby on Rails). Even if you choose something like PgBouncer, there are still details related to how the platform handles transaction lifetimes, pooling mode to choose based on app needs, etc.
So you need to first determine what you're using and what you need to do with it. Then study how to set up its connection pooling. (With many tools it's just automatic).
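If you end up using PgBouncer, the setup is a small ini file. A minimal sketch, assuming a local database; every name, path, and limit here is a made-up placeholder to adjust for your own setup:

```ini
; pgbouncer.ini - hypothetical values, adjust for your environment
[databases]
myapp = host=127.0.0.1 port=5432 dbname=myapp

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling returns the server connection to the pool
; as soon as each transaction ends
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```

Note that transaction pooling breaks session-level features (prepared statements, advisory locks, SET without LOCAL), so check your platform's driver behaviour before choosing it.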
If you're still stuck after reading the documentation for the platform you choose, ask a new detailed and specific question tagged appropriately for the platform.
Don't have your app connect directly to PostgreSQL. Especially if it's over the Internet from random clients.
Use a web server near the PostgreSQL server and have it accept web service requests, brokering access to the database through a well-defined web API, with short transactions scoped to a single request as far as possible.
This isn't just a case of received wisdom - there are good reasons to do it, and serious problems with running PostgreSQL from random devices over the Internet.
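The "short transactions scoped to the request" part can be captured in a small helper on the broker. A sketch in Python; the connection object here is a stand-in for whatever your driver (psycopg2 or similar) gives you, and all names are illustrative, not a real API:

```python
from contextlib import contextmanager

@contextmanager
def request_transaction(conn):
    """Scope one transaction to one web request: commit on success,
    roll back on any error, so nothing is left idle-in-transaction."""
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise

# A request handler then does all of its DB work inside one short block,
# so the transaction never outlives the HTTP request:
def handle_create_post(conn, user_id, body):
    with request_transaction(conn):
        conn.execute("INSERT INTO posts (user_id, body) VALUES (%s, %s)",
                     (user_id, body))
```

The key design point is that no transaction is ever held open while waiting on a client over the Internet; the client talks HTTP, and only the broker talks to PostgreSQL.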
Issues with talking to Pg straight over the Internet from many clients include:
Each PostgreSQL backend has a cost, whether idle or not. PgBouncer in transaction pooling mode helps with this to some degree.
Connections get lost randomly when you're working over the Internet: WiFi drops, IP addresses change on dynamic-IP services, and mobile connections fade out, max out in capacity, or just stagger along with high packet loss. This leaves you with lots of PostgreSQL connections in indeterminate states, probably with open transactions, giving you `<IDLE> in transaction` issues and forcing you to allow far more connections than are really doing work.
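You can see this happening on the server side by querying `pg_stat_activity` (on modern PostgreSQL the stuck sessions show up with `state = 'idle in transaction'` rather than the old `<IDLE> in transaction` query text), and `idle_in_transaction_session_timeout` can act as a backstop:

```sql
-- Spot sessions holding a transaction open while doing nothing
SELECT pid, state, now() - xact_start AS xact_age, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_age DESC;
```

That tells you who is stuck, but it's a symptom check, not a fix; the fix is not exposing raw connections to flaky clients in the first place.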
It's transactional: if something doesn't finish, you can terminate the transaction and know it'll have no effect.
A server responding to HTTP web service requests from your app on Android devices to act as a broker for database access can be a big advantage.
You can define a versioned API, so when you introduce new features or need to change the API you don't have to break old clients. This is possible with Pg using stored procedures or lots of views but can get clunky.
You strictly control the scope of database access and transaction lifetimes.
You can define an idempotent API, where running the same request multiple times has an effect only once. (I strongly recommend doing this because of the next point.)
Everything is stateless and can have short time-outs. If something doesn't work you just retry it.
Every database connection goes via a pool, so you don't have idle sessions sitting around. Every database backend is working hard for maximum throughput.
You can queue work up rather than trying to do tons concurrently and thrashing the server. (You can also do this with PgBouncer in transaction pooling mode).
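The idempotent-API point can be as simple as requiring clients to send a request ID and remembering which IDs have already been applied. A toy in-memory sketch; a real broker would persist the seen IDs in the database, in the same transaction as the work, and all names here are hypothetical:

```python
class IdempotentHandler:
    def __init__(self):
        self._seen = {}  # request_id -> cached result of the first run

    def handle(self, request_id, work):
        """Apply `work` at most once per request_id. Replays return the
        original result, so clients can retry blindly after a timeout."""
        if request_id in self._seen:
            return self._seen[request_id]
        result = work()
        self._seen[request_id] = result
        return result
```

This is what makes the stateless retry-on-failure behaviour above safe: a client whose response got lost can simply resend the same request ID without creating a duplicate post, payment, or whatever the work was.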
... and re your edit to change the question's meaning:
Your "Also" about performance is really a totally different question (and should preferably be posted as such). The very short version: it's totally impossible to predict without a lot more info on the workload, like the number of DB requests per client app request, kind of data, kind of queries, size of data, frequency of queries, practicality of caching, and so on, endlessly. Anyone who claims to definitively answer that question is either the first true psychic in history or completely full of it.
You need to figure out roughly what your data size, query patterns, etc. will be. Figure out how much you can afford to cache in a midlayer cache like Redis/memcached, how stale you can let it get, and what level of cache invalidation you need. Determine whether your "hot" dataset (the data you access a lot) will fit in RAM, and whether the indexes for frequently queried tables will fit in RAM. Figure out what your rough read/write balance is, and how much of your write load is likely to be insert-only (append) versus regular OLTP (insert/update/delete). Dummy up a data set and some client workloads. Then you can start answering that question - maybe. To do it right you also have to simulate stalled or vanished clients, etc.
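The "does the hot set fit in RAM" check is just back-of-envelope arithmetic. A sketch with entirely made-up numbers; every figure is a placeholder for your own estimates:

```python
# All figures are hypothetical; plug in your own estimates.
rows_hot       = 5_000_000     # rows you touch frequently
bytes_per_row  = 300           # average row width incl. per-row overhead
index_overhead = 0.5           # indexes as a fraction of heap size
ram_for_cache  = 8 * 1024**3   # RAM you can realistically give the page cache

hot_bytes = rows_hot * bytes_per_row * (1 + index_overhead)
print(f"hot set ~ {hot_bytes / 1024**3:.1f} GiB, "
      f"fits in cache: {hot_bytes <= ram_for_cache}")
```

With these invented numbers the hot set comes out around 2 GiB and fits comfortably; the point is that a ten-minute estimate like this tells you far more than any answer a stranger could give without your workload details.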
See why it's not just an "Also?".