Some other websites use cURL and a fake HTTP referer to copy my website's content. Is there any way to detect cURL, or requests that don't come from a real web browser?
Remember: HTTP is not magic. There's a defined set of headers sent with each HTTP request; if these headers can be sent by a web browser, they can just as well be sent by any other program - including cURL (and libcurl).
Some consider this a curse, but it's also a blessing: it greatly simplifies functional testing of web applications.
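To illustrate the point, here's a minimal sketch (in Python, with a made-up URL and header values) of a grabber sending exactly the headers a browser would; curl does the same thing with its -A (user agent), -e (referer) and -H options:

```python
# Sketch only: any "browser" header can be forged by a plain script.
# The URL and header values below are hypothetical examples.
import urllib.request

req = urllib.request.Request(
    "https://example.com/protected-page",  # hypothetical target URL
    headers={
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",  # pretend to be a browser
        "Referer": "https://example.com/",                          # fake referer
        "Accept-Language": "en-US,en;q=0.9",
    },
)
with urllib.request.urlopen(req) as resp:
    html = resp.read().decode("utf-8", errors="replace")
print(html[:200])
```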
UPDATE: As unr3al011 rightly noticed, curl doesn't execute JavaScript, so in theory it's possible to create a page that behaves differently for grabbers (for example, by setting a specific cookie via JavaScript and checking for it on subsequent requests).
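For illustration, here's a rough sketch of that idea (the cookie name, port and markup are arbitrary, not taken from any real site): a client without the cookie gets a stub page whose JavaScript sets the cookie and reloads, while a client that sends the cookie back gets the real content.

```python
# Sketch of a JS-cookie check. The cookie name "js_ok" and the port are made up.
from http.server import BaseHTTPRequestHandler, HTTPServer

STUB = b"""<html><body><script>
  document.cookie = "js_ok=1; path=/";   // only a JS-capable client runs this
  location.reload();
</script></body></html>"""

REAL = b"<html><body>The actual content.</body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the real page only if the JS-set cookie came back with the request.
        has_cookie = "js_ok=1" in (self.headers.get("Cookie") or "")
        body = REAL if has_cookie else STUB
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```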
Still, it'd be a very fragile defense. The page's data still has to be served over HTTP - and that HTTP request (and it's always just an HTTP request) can be emulated by curl. Check this answer for an example of how to defeat such a defense.
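And here's how trivially a check like the sketch above falls: any client that replays the expected cookie gets the real content straight away - with curl that's just `curl -b "js_ok=1" http://127.0.0.1:8000/`, and in script form:

```python
# Defeating the JS-cookie sketch above: send the expected cookie yourself,
# as if the JavaScript had already run. Cookie name and URL match the sketch.
import urllib.request

req = urllib.request.Request(
    "http://127.0.0.1:8000/",         # the sketch server above
    headers={"Cookie": "js_ok=1"},    # pretend the JS already set the cookie
)
print(urllib.request.urlopen(req).read().decode())  # prints the "real" content
```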
... and I didn't even mention that some grabbers are able to execute JavaScript. :)