Let's say I have a simple stored procedure that looks like this (note: this is just an example, not a practical procedure):
CREATE PROCEDURE incrementCounter
...
The short answer to your question is yes: it can and will come up short. If you want to block concurrent execution of a stored procedure, start a transaction and update the same piece of data at the start of every execution, before doing any other work inside the procedure.
CREATE PROCEDURE ..
BEGIN TRANSACTION
UPDATE mylock SET ref = ref + 1
...
This forces other concurrent executions to wait their turn, since they cannot change the 'ref' value until the earlier transaction(s) complete and the associated update lock is released.
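Put together with your counter example, a minimal sketch of the whole pattern might look like the following. It assumes a one-row table named mylock that exists purely as a serialization point, and it guesses at the body of your procedure (reading the counter into a @current variable), since only the locking part matters here:

-- assumed one-time setup, outside the procedure:
-- CREATE TABLE mylock (ref int NOT NULL);
-- INSERT INTO mylock (ref) VALUES (0);

CREATE PROCEDURE incrementCounter
AS
BEGIN
    BEGIN TRANSACTION;

    -- every execution updates the same row, so concurrent callers
    -- block on this statement until the earlier transaction commits
    UPDATE mylock SET ref = ref + 1;

    -- from here on, the real work runs one execution at a time
    DECLARE @current int;
    SELECT @current = CounterColumn + 1 FROM MyTable;
    UPDATE MyTable SET CounterColumn = @current;

    COMMIT TRANSACTION;
END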
In general, it is a good idea to assume that the results of any and all SELECT queries are stale before they are even executed. Using "heavy" isolation levels to work around this unfortunate reality severely limits scalability. It is much better to structure changes so they make optimistic assumptions about the state the system should be in when the update runs; then, when the assumption fails, you can try again later and hope for a better outcome. For example (writing the new value as a @current variable your procedure has already computed):
UPDATE MyTable
SET CounterColumn = @current
WHERE CounterColumn = @current - 1
Using your example with the added WHERE clause, this update affects no rows if the assumption about the counter's current state turns out to be wrong. Check @@ROWCOUNT to test the number of rows affected, and roll back, retry, or take whatever other action is appropriate when it differs from the expected outcome.
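A rough sketch of that check (the retry/error handling here is just illustrative):

DECLARE @current int;

-- compute the value we expect to move the counter to
SELECT @current = CounterColumn + 1 FROM MyTable;

-- optimistic update: affects a row only if nothing changed the counter
-- between the SELECT above and this statement
UPDATE MyTable
SET CounterColumn = @current
WHERE CounterColumn = @current - 1;

IF @@ROWCOUNT = 0
BEGIN
    -- the optimistic assumption failed: another execution won the race.
    -- Loop back and retry, or surface the conflict to the caller.
    RAISERROR('CounterColumn changed concurrently; retry.', 16, 1);
END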