I have come to understand that bit banging is horrible practice when it comes to SPI/I2C over GPIO. Why so?
Bit banging is very hard to do reliably in a non-real-time system. And if you put it inside the kernel, where it cannot be preempted, you really have to make sure that you only bit-bang a certain number of bits before rescheduling user processes.
Consider this: you have a scheduling timer running at 1 ms intervals. When it fires, you check whether some process wants to send data over the bit-banged interface and you handle that request. Say the request requires you to bit-bang a byte at 9600 baud. Now you have a problem: it takes about 0.8 ms to bit-bang that byte. You can't really afford this, because when the scheduling interrupt runs it has to do its job, load the next process that needs to run, and then exit. That usually takes far less than 1 ms, and the rest of that millisecond is spent running the user process until the next interrupt. But if you start bit-banging, you spend most of that millisecond doing nothing else.
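To make the cost concrete, here is a minimal sketch of a blocking bit-banged transmit. The gpio_write() and delay_us() calls are hypothetical stand-ins for whatever your HAL or kernel actually provides:

```c
#define TX_PIN       17          /* hypothetical GPIO number          */
#define BIT_TIME_US  104         /* 1 s / 9600 baud ~= 104 us per bit */

extern void gpio_write(int pin, int level);   /* assumed platform call */
extern void delay_us(unsigned int us);        /* assumed busy-wait     */

static void bitbang_send_byte(unsigned char byte)
{
    for (int i = 0; i < 8; i++) {
        gpio_write(TX_PIN, (byte >> i) & 1);  /* drive the data line   */
        delay_us(BIT_TIME_US);                /* spin for one bit time */
    }
    /* 8 bits * ~104 us = ~0.8 ms of pure busy-waiting: if this runs
     * inside the scheduler tick, nothing else runs during that time. */
}
```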
One solution is to use a timer peripheral dedicated to the bit-banging. That gives you fairly autonomous, interrupt-driven bit-banging code that never has to sit idle, but only at the expense of tying up a dedicated timer. If you can afford a dedicated hardware timer, bit-banging will probably work fine. In general, though, it is very hard to do reliable bit-banging at high speeds in a multitasking environment.
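A rough sketch of that timer-driven approach, again with hypothetical timer_start()/timer_stop()/gpio_write() hooks in place of your platform's real API. Each timer interrupt shifts out one bit, so the CPU is free to run other work between interrupts:

```c
#include <stdint.h>

extern void gpio_write(int pin, int level);       /* assumed platform call  */
extern void timer_start(unsigned int period_us);  /* assumed: periodic IRQ  */
extern void timer_stop(void);

#define TX_PIN      17    /* hypothetical GPIO number     */
#define BIT_TIME_US 104   /* 9600 baud -> ~104 us per bit */

static volatile uint8_t tx_byte;
static volatile int     tx_bits_left;

/* Kick off a transmission; returns immediately instead of blocking. */
void bitbang_start_byte(uint8_t byte)
{
    tx_byte      = byte;
    tx_bits_left = 8;
    timer_start(BIT_TIME_US);
}

/* Called from the timer interrupt, once per bit time. */
void bitbang_timer_isr(void)
{
    if (tx_bits_left > 0) {
        gpio_write(TX_PIN, tx_byte & 1);  /* output LSB first             */
        tx_byte >>= 1;
        tx_bits_left--;
    } else {
        timer_stop();                     /* byte done, release the timer */
    }
}
```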