Cybercriminals are becoming increasingly sophisticated in their methods of attack, and the same is often true of the methods they use to extract sensitive data from their victims. Exfiltration, or exportation, of data is usually accomplished by copying the data from a compromised system via a network channel, although removable media or physical theft can also be utilised.
In 2009, Trustwave’s SpiderLabs investigated more than 200 data breaches in 24 different countries, including several in the Asia-Pacific region. While the methods used by cybercriminals to exfiltrate data from a compromised environment varied, the method of entry into an environment was often similar. Globally, the most common channel of entry was a remote access application used by the target organisation: 45% of compromises began with attackers gaining access to a system through such an application. These were not zero-day exploits or complex application flaws, and the attacks looked no different to the IT staff than, for example, the CEO connecting from London while on a business trip. The attackers also didn’t need to brute-force the accounts they used: the investigations found that 90% of these attacks succeeded because of vendor-default or easily guessed passwords, such as ‘temp:temp’ or ‘admin:nimda’.
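Auditing for such credentials is trivial to script. Below is a minimal sketch, assuming account names and passwords can be exported as ‘user:password’ pairs; the file name and the default list are illustrative, not drawn from any specific product.

    # Flag remote access accounts still using vendor-default or trivially
    # guessed passwords. The input file format is a hypothetical export.
    KNOWN_DEFAULTS = {
        ("temp", "temp"),      # pairs cited in the investigations
        ("admin", "nimda"),
        ("admin", "admin"),
    }

    def flag_default_accounts(path="remote_access_accounts.txt"):
        with open(path) as fh:
            for line in fh:
                user, _, password = line.strip().partition(":")
                if (user.lower(), password.lower()) in KNOWN_DEFAULTS:
                    print(f"WEAK: account {user!r} uses a known default password")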
This type of attack is less common in the Asia-Pacific region, where a combination of factors makes e-commerce sites the more common targets. For the majority of compromised sites in Asia-Pacific, the method of entry is an attack known as SQL injection, which exploits a weakness in an e-commerce site’s software to give the attacker a foothold in the database that powers the site.
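To make the weakness concrete, the sketch below builds the same query two ways, using Python’s built-in sqlite3 module; the table and the attacker-supplied value are illustrative.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO products VALUES (1, 'widget'), (2, 'gadget')")

    user_input = "1 OR 1=1"  # attacker-controlled value from a URL parameter

    # Vulnerable: the input is concatenated into the statement, so the
    # WHERE clause becomes 'id = 1 OR 1=1' and matches every row.
    print(conn.execute("SELECT * FROM products WHERE id = " + user_input).fetchall())

    # Safe: a parameterised query treats the input as data, not SQL,
    # so the same value matches nothing.
    print(conn.execute("SELECT * FROM products WHERE id = ?", (user_input,)).fetchall())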
Network Attacks
Attackers can use this foothold to download confidential data that the e-commerce site stores in the database they have accessed, or to launch further attacks across the victim’s network.
For example, attackers often launch network enumeration tools, which discover additional network targets along with information about those targets, such as usernames, user privileges and network shares. These tools are often ‘noisy’ and a clear sign to a diligent network administrator that an attack is under way. Unfortunately, we’ve found that most entities are not properly monitoring their systems and therefore fail to observe these indicators.
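The monitoring needed to surface this noise can be modest. A minimal sketch, assuming firewall or flow logs can be reduced to (window, source, destination) tuples; the threshold and log format are assumptions.

    from collections import defaultdict

    FANOUT_THRESHOLD = 50  # distinct destinations per window; tune per network

    def flag_scanners(events):
        """events: iterable of (window, src_ip, dst_ip) tuples."""
        fanout = defaultdict(set)
        for window, src, dst in events:
            fanout[(window, src)].add(dst)
        for (window, src), dsts in fanout.items():
            if len(dsts) >= FANOUT_THRESHOLD:
                print(f"{window}: {src} contacted {len(dsts)} hosts - possible enumeration")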
Our investigations showed that attackers harvested data using either manual or automated methods. Manually, attackers located potentially valuable databases and documents, and conducted keyword searches of the operating system to identify other data of interest.
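A defender can run the same kind of keyword sweep before an attacker does, to find unprotected copies of sensitive data. A minimal sketch; the keywords and the root directory are illustrative.

    import os

    KEYWORDS = ("password", "cardholder", "account number")

    def sweep(root="/data"):
        # Walk the filesystem and report files containing any keyword.
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, errors="ignore") as fh:
                        text = fh.read().lower()
                except OSError:
                    continue
                if any(k in text for k in KEYWORDS):
                    print(f"Review: {path}")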
Custom-written malware was one automated data harvesting method. Many applications that handle confidential information such as payment card data, even those that make heavy use of encryption, will at some stage hold this sensitive data in clear text, because the data must be decrypted in RAM for the application to use it. Cybercriminals in 2009 frequently employed RAM parsers to capture this data during the brief period it sat in memory in clear text; in fact, RAM parsers were used in 67% of SpiderLabs’ investigations involving malware.
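The pattern matching at the heart of a RAM parser is simple, which also makes it useful defensively: the same expression can be run over a process memory dump to confirm whether card data is exposed in clear text. A minimal sketch; the dump file name is hypothetical and the Track 2 pattern is simplified.

    import re

    # Track 2 data: a 13-19 digit PAN, an '=' separator, then expiry
    # date and service code. This pattern is a simplification.
    TRACK2 = re.compile(rb";?(\d{13,19})=\d{4}\d*\??")

    def find_cleartext_track_data(dump_path="pos_process.dmp"):
        with open(dump_path, "rb") as fh:
            data = fh.read()
        for match in TRACK2.finditer(data):
            pan = match.group(1).decode()
            print(f"Clear-text PAN at offset {match.start()}: {pan[:6]}...{pan[-4:]}")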
Cybercriminals were, on average, able to gather data for 156 days before being stopped. For that entire period, attackers entered the environment, set up their tools to remove data and harvested it before a single IT or security department reacted to their activities. Some of the 2009 investigations showed recurring activity from the same cybercriminals over the course of three years. Long detection times were typical in 2009 and, seemingly armed with this knowledge, cybercriminals were not practising stealth in their activities.
FTP, SMTP and IRC functionality were regularly utilised as the exfiltration channel in cases where the attacker had used custom malware. Reverse analysis of custom malware binaries would often disclose FTP functionality, including hardcoded IP addresses and credentials. With off-the-shelf malware, such as keystroke loggers, attackers most often used built-in FTP and e-mail capabilities to exfiltrate data. When e-mail was the extraction channel, the attackers often installed a malicious SMTP server directly on the compromised system to ensure the data was properly routed.
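The first step in that reverse analysis is often no more than extracting printable strings from the binary and searching them for FTP commands, IP addresses and credential-like fragments. A minimal sketch of that triage; the indicator patterns are illustrative.

    import re

    PRINTABLE = re.compile(rb"[ -~]{6,}")  # runs of printable ASCII
    INDICATORS = re.compile(rb"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})|(USER |PASS |ftp://)")

    def triage_binary(path):
        # Print any printable string containing an exfiltration indicator.
        with open(path, "rb") as fh:
            data = fh.read()
        for run in PRINTABLE.finditer(data):
            if INDICATORS.search(run.group()):
                print(run.group().decode(errors="replace"))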
In all of these cases, sensitive data was sent out of the target environment, and the IT security teams did not detect the loss while it was occurring.
Case Studies
In one recent case, a large retail chain in Southeast Asia suffered a significant compromise of its help desk environment. The help desk supported the point of sale at numerous retail locations across the region. The compromise was caused by a number of information security failures, chiefly in the configuration and security hardening of the desktop platform used by help desk analysts.
These security failures allowed a specially crafted piece of malware to enter the environment and propagate to many of the help desk analysts’ workstations. Once installed, the malware harvested sensitive cardholder data (such as the data from the magnetic stripe of credit cards processed at retail locations), then uploaded it to a website where it could presumably be collected by the parties that had developed the malware.
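Basic egress monitoring could have surfaced those uploads. A minimal sketch, assuming web proxy logs can be reduced to (source, method, destination, bytes) records; the approved-host list is hypothetical.

    APPROVED = {"updates.vendor.example", "helpdesk.corp.example"}

    def flag_uploads(proxy_events):
        """proxy_events: iterable of (src_host, method, dst_host, bytes_out)."""
        for src, method, dst, bytes_out in proxy_events:
            if method == "POST" and dst not in APPROVED:
                print(f"{src} sent {bytes_out} bytes to unapproved host {dst}")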
In another recent case, the e-commerce site of a smaller Australian merchant was compromised through a common flaw in the shopping cart software the merchant had purchased. The software was poorly designed and, as a result, stored cardholder data for all historic online orders. The attacker used the flaw to deploy a piece of malware known as a ‘web shell’, which gave them interactive, administrative access to the shopping cart software and to the database that stored the historic payment data. The attacker was then able to add a page to the shopping cart site that listed all historic payments, along with the associated sensitive cardholder data.
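Web shells of this kind typically combine an execution primitive with request input, which gives defenders a crude but useful fingerprint to sweep for. A minimal sketch; the webroot path and patterns are illustrative and will produce false positives.

    import os
    import re

    SUSPICIOUS = re.compile(
        r"(eval|system|exec|passthru|shell_exec)\s*\(.*\$_(GET|POST|REQUEST)"
    )

    def sweep_webroot(root="/var/www"):
        # Flag script files that pass request input to an execution primitive.
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if not name.endswith((".php", ".asp", ".aspx", ".jsp")):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, errors="ignore") as fh:
                    if SUSPICIOUS.search(fh.read()):
                        print(f"Possible web shell: {path}")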
When looking for signs of an attack, IT security teams seem to expect something complex. In practice, attacks are usually very simple, not ‘noisy’ and likely to appear benign in a routine log review. It is often not until the data has left the target environment and surfaces elsewhere that the breach is detected. Paying close attention to how activity on ‘standard’ systems deviates from ‘normal’ behaviour is the key to identifying a problem before it is too late. Every anomaly should be viewed with a degree of suspicion, and if an internal investigation leads to an unexplained occurrence, companies should bring in a trusted expert for assistance. A 10-minute conversation or log review by an expert could save an organisation millions of dollars should the anomaly be something more serious.
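As one concrete example of comparing activity against ‘normal’, the remote access logins described earlier can be screened for accounts connecting from source addresses they have never used before. A minimal sketch; the log format is an assumption.

    from collections import defaultdict

    def flag_new_sources(login_events):
        """login_events: iterable of (timestamp, account, src_ip), oldest first."""
        seen = defaultdict(set)
        for ts, account, src in login_events:
            if seen[account] and src not in seen[account]:
                print(f"{ts}: {account} logged in from new address {src} - review")
            seen[account].add(src)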