Today, we hit an issue where our App Service would intermittently hang. We checked
performance metrics such as memory and CPU, which were at ordinary levels, and
Application Insights likewise did not highlight any major exceptions apart from the
routine ones. What really helped us narrow down the problem was Azure App Service’s
Diagnose and Solve Problems feature. We ran a few queries on “Web App Down” and
“Web App Slow”, and they eventually pointed out that multiple sessions were each
opening a new connection every time they were initiated. Narrowing the data down
further, we saw that one particular function, which constantly updates real-time
revenue, was the culprit: it used a SqlDataReader that never closed its connection,
so multiple instances got created, leaving a large number of connections piling up
and eventually falling victim to time-outs. We also noticed that there were 114
active connections, which is far too many. Furthermore, we saw more unique sessions
and users than on ordinary days, which is another reason this error surfaced now
rather than earlier.
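For context, the problematic pattern looked roughly like the sketch below. This is a hypothetical reconstruction, not our actual code; the method name, connection string, and stored procedure name are placeholders. The point is that the connection is opened, the reader is handed back to the caller, and nothing ever closes or disposes either of them.

    using System.Data;
    using System.Data.SqlClient;

    // Hypothetical sketch of the leak: the SqlConnection and SqlDataReader are
    // never closed or disposed, so every call leaves another connection open.
    public static SqlDataReader GetRealtimeRevenue(string connectionString)
    {
        var connection = new SqlConnection(connectionString);
        connection.Open();

        var sqlCmd = new SqlCommand("usp_GetRealtimeRevenue", connection)
        {
            CommandType = CommandType.StoredProcedure
        };

        // No CommandBehavior.CloseConnection and no Close():
        // the reader and its connection stay alive after the caller is done.
        return sqlCmd.ExecuteReader();
    }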
After resolving the issue, we were able to bring the active connections down to 21,
and the web app performed well again.
I fixed the issue by introducing the following piece of code in our model
datareader = Sqlcmd.ExecuteReader(CommandBehavior.CloseConnection);
and by making sure that datareader.Close() was called after the operation finished.
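Roughly, the corrected method looks like the sketch below (again a reconstruction with placeholder names, not our exact model code). CommandBehavior.CloseConnection ties the connection’s lifetime to the reader, and the using blocks guarantee the reader, and therefore the connection, is closed even if the read throws.

    using System.Data;
    using System.Data.SqlClient;

    // Hypothetical sketch of the fix: CommandBehavior.CloseConnection closes the
    // underlying connection when the reader is closed, and the using blocks make
    // sure the reader (and therefore the connection) is always closed.
    public static decimal GetRealtimeRevenue(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var sqlCmd = new SqlCommand("usp_GetRealtimeRevenue", connection)
        {
            CommandType = CommandType.StoredProcedure
        })
        {
            connection.Open();
            using (var dataReader = sqlCmd.ExecuteReader(CommandBehavior.CloseConnection))
            {
                decimal revenue = 0;
                while (dataReader.Read())
                {
                    revenue = dataReader.GetDecimal(0);
                }
                return revenue;
                // Leaving the using block calls dataReader.Close(), which also
                // closes the connection because of CommandBehavior.CloseConnection.
            }
        }
    }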
What we used for this operation:
- Executing Azure SQL Server’s system stored procedures to get the active connections: exec sp_who and exec sp_who2 (a small code-based spot check is sketched after this list)
- Azure App Service’s Diagnose and Solve Problems feature: select Health Check under Availability and Performance
  a. Under this, select Web App Down
  b. Then select Slow Request Stack Tracking
  c. This leads us to the stack traces.
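As a complement to sp_who2, the active connection count can also be spot-checked from code against the sys.dm_exec_sessions DMV. The sketch below is one way to do it; the helper name and connection string are placeholders, and on Azure SQL Database the login needs VIEW DATABASE STATE permission.

    using System.Data.SqlClient;

    // Hypothetical helper: counts active user sessions on the database,
    // as an alternative to eyeballing the output of exec sp_who2.
    public static int CountActiveSessions(string connectionString)
    {
        const string query =
            "SELECT COUNT(*) FROM sys.dm_exec_sessions WHERE is_user_process = 1;";

        using (var connection = new SqlConnection(connectionString))
        using (var sqlCmd = new SqlCommand(query, connection))
        {
            connection.Open();
            return (int)sqlCmd.ExecuteScalar();
        }
    }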