Comment

I'm sure there are parameters to make it faster, but there are parameters to make MongoDB faster too.

Truth is, when speed matters, it's probably because your project matters. And if your project matters, your data matters, and then you'll need to take reliability and durability to a whole new level and start the benchmarks over.
Parent comment

I'm following your posts carefully :-)
It seems that "good old SQL" is often slower in those "create 1000 objects" benchmarks. MySQL has some benchmarks that make PostgreSQL look like a slow database. SQLite is also very fast compared to MySQL. BerkeleyDB may be even faster ;) unfortunately, some OSS projects I know stopped using it at some point. But then again, creating as many objects as possible, as quickly as possible, may really be your app's workload, so...
How about other benchmarks, like "update every record that matches 3-4 foreign keys AND an IN() range query"?
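For illustration, the kind of query I have in mind (table and column names are made up, just a sketch):

    -- hypothetical schema: an order_items table with several foreign-key columns
    UPDATE order_items
       SET status = 'archived'
     WHERE customer_id  = 42
       AND warehouse_id = 7
       AND supplier_id  = 3
       AND product_id IN (100, 101, 102, 115);

A query like that exercises the planner and the indexes in a way a plain "insert 1000 rows" loop never does.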
What about data reliability?
The default PostgreSQL installation is also far from perfect. If speed is your goal, you can tune PostgreSQL not to hit the disk that often: disable fsync, set a large commit delay, and so on. These settings make the database unreliable in case of power loss, but for me (I run Django tests on a local machine) they make testing way faster.
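Roughly the postgresql.conf settings I mean (a sketch: the first two are the ones mentioned above, the last two are further knobs in the same spirit; check the docs for your version, and never run production like this):

    # Trade durability for speed -- local test runs only.
    fsync = off                # don't force WAL writes to disk; power loss can corrupt the cluster
    commit_delay = 100000      # wait up to 100 ms (value is in microseconds) to group-commit
    synchronous_commit = off   # COMMIT returns before the WAL is flushed
    full_page_writes = off     # skip full-page images after checkpoints

With fsync off, a single power cut can leave the data directory unrecoverable, which is exactly the reliability trade-off above.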
Just my $0.05. For a while, I neglected SQL databases; that didn't turn out to be as good an idea as I thought.