Software agents are everywhere in the cloud. These little programs perform complex or repetitive tasks on our behalf so we don't have to. Some agents help us keep our systems updated and avoid configuration drift. Others roam our compute infrastructure to keep everything safe from threats. Software agents are designed to make our jobs easier.

However, in the large, complex systems where agents should deliver the most value, the opposite can occur. Getting approval to install agent software on machines can involve a lot of red tape. Deploying and managing hundreds of agents across multiple hosts can be a real hassle. They can sap compute resources and impede performance. And while agents help us monitor our systems, who's monitoring the agents?

Some organizations - typically government agencies and critical infrastructure industries - have policies that strictly forbid the use of agents in their systems. And for one good reason: security. Because agents reach out and write to your runtime environment, they represent a serious security risk. Agents require open ports on servers in order to do their jobs (although not necessarily with administrative access). Agents can make changes to code on servers. Agents are therefore an attack vector, and one that usually holds privileges that allow serious harm to be done.

In short, if a black hat compromises an agent, they may very well have the keys to the castle.

Agents often come with flaws of their own. There's been no shortage of security advisories regarding software agents, and such vulnerabilities typically give attackers the means to inject code into your servers and applications.

However, the most common way to exploit agents is via social engineering. Spear phishing and privilege escalation attacks can deliver agents into the hands of a black hat, essentially giving them free rein over your system. Modern agents with API access only add new vulnerabilities. And because black hats prefer to work slowly, uncovering agent exploits is a challenge.

We know of a large organization with a policy requiring one group to maintain patches for all servers on its data floor. This central group installs agents, running as root/Admin, on every server for every system in the organization. The AD user accounts for the operators in this group can then SSH or RDP into any server on the floor as root/Admin without providing additional credentials. Compromising one of those accounts is therefore the best option for any black hat, who can then infect every system in the organization through a single login. This example may be extreme, but it is very real, and we suspect there are many similar situations, all due to the fundamental problem of agents and servers communicating over shared, routable IP address space.

The only way to be confident that the agents in your system haven't been compromised is to eliminate agents altogether. If you are using a platform with a control backplane, use it instead of addressable server/agent architectures. Unfortunately, few solutions today leverage modern architectures with SDNs and backplanes to provide better security, but this is changing.

We're beginning to see ISVs market agentless monitoring solutions. We think this is a positive development and hope it turns into a trend. Note, however, that these solutions generally avoid agent-oriented architecture by SSHing directly into your servers. So while they can save us from the complexity of managing agents and close some critical vulnerability surfaces, they don't eliminate risk.
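To make the trade-off concrete, here is a minimal sketch of the agentless pattern, assuming only the standard OpenSSH client on the monitoring host; the function names, the `monitor` user, and the host name are hypothetical, not any particular vendor's API. Instead of a resident agent listening on an open port, the monitor opens a one-off SSH session and runs the check remotely:

```python
import subprocess

def build_check_command(host, check, user="monitor", timeout=10):
    """Build an ssh invocation that runs a one-off check on a remote host,
    so no resident agent process or open agent port is needed.
    BatchMode disables interactive prompts (key auth only);
    ConnectTimeout bounds how long we wait for an unreachable host."""
    return [
        "ssh",
        "-o", "BatchMode=yes",
        "-o", f"ConnectTimeout={timeout}",
        f"{user}@{host}",
        check,
    ]

def run_check(host, check):
    """Execute the check and return (exit_code, stdout). A hypothetical
    monitoring loop would call this on a schedule for each host,
    rather than polling an agent installed on the server."""
    result = subprocess.run(
        build_check_command(host, check),
        capture_output=True,
        text=True,
    )
    return result.returncode, result.stdout
```

Note that the SSH private key behind this is itself a high-value credential: as argued above, the agentless approach narrows the attack surface but does not eliminate it.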

The agent/server model is another example of an expanding vulnerability surface in computing that has already proven far too large and permeable. Solutions that truly improve security and automation will avoid the agent/server model and leave no routable address space between components.
