Well, one of my teams just went to production with a rather big project. The whole thing is hosted on Microsoft Azure.
Like all projects, you always learn a couple of things: there are always good things, bad things, and things that could have been done better.
For this project, we chose Azure SQL Database to hold the data. An S0 instance gives us plenty of space, and since we actually spent time benchmarking the system, our SQL queries were optimised and we had caching where it counted. I thought we were pretty good and that it would take a massive amount of concurrent users on the system to kill it…
Turns out all you need is 60, because although an S0 can take a lot of parallel inserts at a time, it can only handle 60 concurrent connections. That is a real bummer.
The chances that we hit 60 connections at the same moment are still pretty slim, because connections are only open for the lifetime of a request. Still, I bumped our system up to S1, which gives us 90 connections, just in case…
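Keeping connections open only for the lifetime of a request is what makes that maths work: as long as each connection is disposed as soon as the work is done, ADO.NET returns it to the pool and it doesn't count against the limit for longer than it has to. A minimal sketch of the pattern (the connection string and query here are placeholders, not our actual code):

```csharp
using System.Data.SqlClient;

public static class Db
{
    public static int CountOrders(string connectionString)
    {
        // The using block disposes the connection as soon as the query
        // completes, returning it to the pool immediately instead of
        // holding one of the tier's limited connection slots.
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}
```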
Another fun thing: we implemented the required SQL error retry logic in EF, so if we get denied because no connections are available, the system will simply retry after a timeout. That should prevent the code from failing, at the expense of longer execution time.
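For reference, in EF6 that retry behaviour is wired up through an execution strategy; roughly what the setup looks like is something along these lines (the retry count and delay here are illustrative values, not necessarily the ones we shipped):

```csharp
using System;
using System.Data.Entity;
using System.Data.Entity.SqlServer;

// EF6 picks up a DbConfiguration subclass in the same assembly
// as the DbContext automatically.
public class AppDbConfiguration : DbConfiguration
{
    public AppDbConfiguration()
    {
        // SqlAzureExecutionStrategy retries on the transient error codes
        // Azure SQL returns (including connection-limit rejections):
        // here, up to 5 retries with delays capped at 30 seconds.
        SetExecutionStrategy("System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy(5, TimeSpan.FromSeconds(30)));
    }
}
```

One caveat worth knowing: with an execution strategy enabled, user-initiated transactions need a bit of extra care, since EF can't automatically retry an operation that spans a transaction it doesn't control.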
I thought I understood the whole DTU thing, but never took the connection limit into account… Oh well, live and learn.
Refer to this page for information on these limits.
