I'll give you a few situations, just for the fuck of it (since you seem less of a douche than dchuk).
First up, the typical SQL-based ORM will basically just do a SELECT *. The idea is to avoid going back to the database for individual columns later, but it means every query hauls back every column whether you need it or not. For any major high-traffic site (we're not talking a single server in your mom's basement here) that turns into a network bandwidth problem. Oh wait, I hear you say, a single MySQL box can't push 1gbit of traffic anyway - fair enough for one box, but when you have a rack of database servers, you can easily saturate a 1gbit or even 10gbit link between switches.
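For what it's worth, even Django's own ORM will let you dodge the SELECT * tax if you ask. A rough sketch - the Person model and its columns are made up for illustration, but values_list() and only() are standard Django:

    # Hypothetical model, purely for illustration.
    from django.db import models

    class Person(models.Model):
        surname = models.CharField(max_length=64)
        bio = models.TextField()  # the fat column you rarely need

    # Default ORM behaviour: SELECT * -- `bio` rides along on every query.
    everyone = Person.objects.all()

    # Fetch just the column you want: SELECT surname FROM ...
    surnames = Person.objects.values_list('surname', flat=True)

    # Or keep model instances but skip loading the heavy columns:
    slim = Person.objects.only('surname')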
Next up, we have the update efficiency of counters. The average ORM will do something like SELECT *, then UPDATE blah SET column = 50. The problem here is: what if another server was also working on that column? Is 50 still the correct value, or was the intended result column = column + 1? Are you going to use transactions to lock the database now, or are you going to lock with memcache? If you lock in MySQL, you're now introducing the possibility of deadlocks, made ever more likely by even 1ms of network latency. But I hear you say, why not just use memcache in the first place? What if the row is something that actually matters, such as impressions, clicks or conversions... you're going to leave those stats in volatile memory?
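The sane fix is to push the arithmetic into the database instead of doing the ORM's read-modify-write dance. A sketch, sticking with Django since it comes up later - the Stat model and stat_id are made up, but F() expressions are standard:

    from django.db import models
    from django.db.models import F

    class Stat(models.Model):  # hypothetical model for illustration
        impressions = models.IntegerField(default=0)

    stat_id = 1  # some existing row

    # Racy read-modify-write: two servers both read 49, both write 50.
    stat = Stat.objects.get(pk=stat_id)
    stat.impressions += 1
    stat.save()  # UPDATE ... SET impressions = 50

    # Atomic, in-database increment:
    # UPDATE ... SET impressions = impressions + 1 WHERE id = ...
    Stat.objects.filter(pk=stat_id).update(impressions=F('impressions') + 1)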
Next we have query efficiency. Let's say we have a database with 20 million rows. We're doing a select on something like surname and want to page through the results. What is your ORM going to do? For the first 100 results or so, sure, LIMIT 100 is fine. What about when it's LIMIT 10000, 100? Oh shit, your code now needs to switch to seeking with a WHERE clause, otherwise your queries start taking 10x longer to process - MySQL has to read and throw away every row before the offset. And what if the ordering behind that limit is complex?
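The workaround is seek (a.k.a. keyset) pagination: remember where the last page ended and range-scan from there, so the index does the skipping instead of the server. A rough sketch with MySQLdb - the table, columns and connection details are all placeholders:

    import MySQLdb  # connection details below are placeholders

    conn = MySQLdb.connect(host='db1', user='app', passwd='secret', db='app')
    cursor = conn.cursor()

    # Offset pagination: MySQL reads and discards 10,000 rows first.
    cursor.execute(
        "SELECT id, surname FROM people ORDER BY surname LIMIT 10000, 100")

    # Seek pagination: an index on surname skips straight to the spot.
    last_surname = 'Smith'  # wherever the previous page ended
    cursor.execute(
        "SELECT id, surname FROM people "
        "WHERE surname > %s ORDER BY surname LIMIT 100",
        (last_surname,))

(In practice you'd break ties on (surname, id), but you get the idea.)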
What about if I want to grab stats from the database in natural order, without fucking around with reordering the rows in memory? select * from (select date, sum(somedata) as total from stats group by 1 order by 1 desc limit 24) as data order by date asc; Is your ORM going to do that shit for you efficiently?
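No, it isn't - which is why every ORM worth using has an escape hatch to raw SQL. In Django that looks roughly like this (table and column names are placeholders again):

    from django.db import connection

    cursor = connection.cursor()
    cursor.execute(
        "SELECT * FROM ("
        "  SELECT date, SUM(somedata) AS total FROM stats"
        "  GROUP BY 1 ORDER BY 1 DESC LIMIT 24"
        ") AS data ORDER BY date ASC")
    rows = cursor.fetchall()  # already in natural order, nothing to re-sort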
What happens when you're a django fanboy using South to manage your database? It seriously doesn't take much for some dipshit to fuck up foreign keys and truncate a table while doing what should have been a single-row delete, or for his n00b ass to decide to "clean up his code" and start dropping columns on insanely huge tables in production, causing minutes or even hours of downtime while MySQL rebuilds the table to commit the change.
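Here's the kind of foreign key footgun I mean, roughly - hypothetical models, and note that in Django 1.x a ForeignKey cascades on delete by default:

    from django.db import models

    class Campaign(models.Model):
        name = models.CharField(max_length=64)

    class Click(models.Model):
        # Default behaviour is to cascade: delete one Campaign and every
        # related Click row silently goes with it.
        campaign = models.ForeignKey(Campaign)

        # Safer (Django 1.3+): refuse the delete while children exist.
        # campaign = models.ForeignKey(Campaign, on_delete=models.PROTECT)

Delete one row, truncate a table - exactly as advertised.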