Sunday, 11. April 2010
SIP Brute Force Attack Originating From Amazon EC2 Hosts.
I woke up Saturday morning to find strangely high network activity on some of our inbound connections. A quick review showed that most of the traffic was going into several of our hosted PBX systems. After a little more digging, I discovered that several systems on the Amazon EC2 network were performing brute force attacks against our VoIP servers, attempting to guess user names and passwords for our SIP clients. I immediately blocked all traffic from the attacking IPs and examined the logs. Thankfully, I found that none of the attacks had succeeded in guessing a password.
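For anyone wanting to check their own logs for the same thing: a quick tally of failed registrations per source IP will surface a brute-forcer immediately. This is only a sketch; the log line below mimics the general shape of an Asterisk failed-registration notice, and the file path and sample addresses are placeholders, not the real attacking hosts.

```shell
# Build a tiny sample log in the assumed Asterisk format (placeholder data):
cat > /tmp/asterisk-messages <<'EOF'
NOTICE[1234] chan_sip.c: Registration from '"100" <sip:100@pbx>' failed for '10.0.0.5' - Wrong password
NOTICE[1234] chan_sip.c: Registration from '"101" <sip:101@pbx>' failed for '10.0.0.5' - Wrong password
NOTICE[1234] chan_sip.c: Registration from '"100" <sip:100@pbx>' failed for '10.0.0.9' - Wrong password
EOF

# Count failed registration attempts per source IP, busiest attacker first:
grep -oE "failed for '[0-9.]+'" /tmp/asterisk-messages \
  | grep -oE "[0-9.]+" \
  | sort | uniq -c | sort -rn
```

On a real system you would point this at your actual Asterisk log (often `/var/log/asterisk/messages`); any IP with hundreds or thousands of failures in a short window is a candidate for an immediate block.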
Confident that the immediate threat was dealt with, I shot off a complaint to firstname.lastname@example.org listing the IP addresses and some log snapshots for validation. I fully expected to see the attack traffic disappear from our edge as soon as Amazon got the report. Boy, was I wrong…
Out of the Attack, Into the DDOS…
Once I finished up with the logs, I found that my network usage hadn’t dropped with the blocking of the addresses, it had actually increased! The EC2 systems attacking me didn’t care that we were rejecting their packets; they just kept trying to connect to our equipment. I tried all the usual tricks (dropping the packets, sending them to a black hole, policing the connections), but none of the local measures helped, since the attack traffic was still arriving on, and consuming, our inbound links. Still believing that Amazon would shut these servers down, I decided to wait before contacting my upstreams. After waiting another hour, I decided to contact my main upstream. They also run hosted PBXs, and could have had attacks running against them as well. When I got a hold of their support team, they verified that they were being attacked from EC2 as well. They suggested blocking all of the EC2 subnets from connecting via SIP at their border, and I agreed that was a good response for now.
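The border filtering the upstream suggested amounts to dropping SIP traffic from the EC2 address space before it reaches the PBXs. A minimal sketch of that kind of rule with iptables follows; the subnet here is a documentation placeholder (TEST-NET-3), not a real EC2 range, so substitute the actual ranges seen in your logs or published by Amazon.

```shell
# Placeholder subnet standing in for an EC2 range (assumption, not real data):
EC2_NET="203.0.113.0/24"

# Drop SIP signaling (default port 5060, UDP and TCP) from that range:
iptables -A INPUT -p udp --dport 5060 -s "$EC2_NET" -j DROP
iptables -A INPUT -p tcp --dport 5060 -s "$EC2_NET" -j DROP
```

Note the distinction the story illustrates: rules like these on your own edge stop the PBX from answering, but the packets still traverse (and can saturate) your inbound circuits. Only filtering further upstream, at the carrier's border, actually relieves the links.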
I was also being attacked on a smaller scale on one of our backup T1s. I elected to let them hammer me on that carrier, so I could see if Amazon would do anything…
“I’m sorry, you have reached a company that doesn’t care that we are attacking you…”
Twenty-four hours after my initial complaint to email@example.com, I received a form email that basically stated that even though it was their IP address, their routing, and their hardware the attack was running on, it was the customer that was at fault, and that I could use a form to contact that customer directly.
Needless to say, I did send a message to that customer, and I also sent an email to firstname.lastname@example.org telling them that they are responsible for traffic originating from their network, and that they need to control it!
It’s been ten hours since then, and I still haven’t seen a change….
And it’s not just us…
If the Asterisk Users mailing list is any indication of the total impact, it looks like hundreds, maybe even thousands, of operators have been affected by this. I still can’t believe that Amazon can be so detached as to think that they carry no responsibility for their clients’ illegal activity once that activity has been reported to them.