Since about the middle of last week, the Oracle server in our dev environment had been reporting “ORA-12519: TNS:no appropriate service handler found” intermittently. Not having much time to look at it (and not being much of an Oracle DBA), I found restarting the Oracle server made it go away for a day or so. By Tuesday though, it had started to get really annoying for everyone, so I had no choice but to cancel my meetings and take a proper look at the problem.
A quick search on Google revealed that it was most likely due to Oracle reaching its maximum number of processes. Most posts simply suggested increasing the process limit, without covering why you might suddenly start running out of processes. As this had only just started happening, and the load in our dev environment is meant to be very light, I wanted to avoid simply upping the limit, as that would only hide the problem. I wanted to find and fix the root cause.
A bit more searching around turned up these useful queries.
First, you need to be able to connect to your database, either using the sys account or by logging into your Oracle box and using a direct SQL*Plus connection as the Oracle user:
sqlplus / as sysdba
If you’re still getting errors about no available processes, then you will have to manually kill one of the Oracle processes using the kill
command.
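If it does come to that, this is roughly the sort of thing you'd need to do. It's only a sketch, assuming a Linux box and an instance SID of ORCL (substitute your own), and you should pick one of the dedicated client connections rather than one of the core background processes:
# list the server processes for the ORCL instance (ORCL is a placeholder SID)
ps -ef | grep oracleORCL
# dedicated client connections show up as "oracleORCL (LOCAL=NO)";
# kill one of those, never a background process such as ora_pmon_ORCL
kill <pid>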
Once logged in, you can use these two queries to find out how many processes and sessions are currently in use:
select count(*) from v$process;
select count(*) from v$session;
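It's also worth checking what the processes limit is actually set to on your instance, rather than assuming the default. Querying v$parameter is a standard way to do that (show parameter processes from SQL*Plus does the same job):
select name, value from v$parameter where name = 'processes';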
For 11g, the default maximum number of processes is 150, so the count from v$process should come back as 149 (I don’t know why it’s out by one). Once you’ve confirmed that you have definitely reached the maximum number of processes, you can use this query to see what they all are and what they’re currently doing if they’re active:
SELECT sess.process, sess.status, sess.username, sess.schemaname, sql.sql_text
FROM v$session sess, v$sql sql
WHERE sql.sql_id(+) = sess.sql_id;
The above query is adapted from this Stack Overflow question and answer.
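A slight variation on the same idea, grouping the sessions by user and program, can make it more obvious who is holding all the connections. This one is just a sketch along the same lines; adjust the columns to taste:
select username, program, count(*)
from v$session
group by username, program
order by count(*) desc;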
In the case of my failing server, this revealed that one of our QA testers was checking his load test script against our dev server! Every time he ran the script, it consumed all the available processes and locked everyone else out! A quick word with the tester (and one final Oracle restart to flush out all his sessions) and normality was restored without needing to mess about with any Oracle system settings.
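For the record, if a full restart isn’t practical, individual sessions can usually be killed from the sys account instead. The username and the sid/serial# pair below are just placeholders; take the real values from v$session:
select sid, serial# from v$session where username = 'SOMEUSER';  -- SOMEUSER is a placeholder
alter system kill session '123,456';  -- use the sid,serial# pair returned above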