Attack Surface Management
An organisation’s attack surface is the sum of all the different points that an adversary can potentially leverage to get unauthorised access to its data. The complete attack surface has physical components such as laptops and USB ports, and also an important range of human vulnerabilities which are exploited by social-engineering attacks such as spearphishing.
‘Attack Surface Management’ software focuses on possible entry points associated with the organisation’s online attack surface – websites, subdomains, apps and other assets that can be found on the internet, outside the range of protection of the corporate firewall. The first thing that the software does is crawl the internet, building an inventory of all the organisation’s online assets. It is often the case that many more websites are found than expected – usually due to shadow-IT activity, M&A and historic marketing campaigns. The software is also able to provide information about high-level vulnerabilities including misconfigurations, software and patching issues and expired certificates. It then monitors the full inventory for changes on an ongoing basis.
Another angle of Attack Surface Management is regarding the online activity of malicious actors that target the organisation’s brand and assets. The software automates detection, monitoring and remediation of digital threats to the organisation arising from fraudulent websites, domains, social profiles and apps. Associated malicious activity might include domain typosquatting (whereby criminals create a fake URL which looks very similar to a corporate website), company or executive impersonation on social media and brand misuse in scams.
‘Cybersecurity Asset Management’ is another approach to attack surface management, one that covers devices inside rather than outside the corporate network(s). These include desktops, laptops, mobile devices and virtual machines, but also IoT devices (e.g. connected cameras, temperature monitors) and components on ICS networks such as PLCs (Programmable Logic Controllers) and HMIs (Human-Machine Interfaces). The software provides visibility of the assets, compiling an inventory with a range of metadata (operating system, associated user, agents running on the device). It also enables the controller to query the whole inventory (e.g. “Show me all Windows machines that don’t have endpoint protection installed”), make changes to devices (“Apply these new patches to all machines running this software”) and perform other functions.
Web Application Firewall
Websites and web applications are usually an important part of the organisation’s attack surface, and can be vulnerable to many different types of malicious activity, including those that result in the most serious data breaches and ransomware attacks. Threats include denial-of-service attacks, injections that exploit vulnerabilities in input fields, and automated threats – aka ‘bad bot’ activity – directed against the website, such as credential stuffing. Web Application Firewalls (WAF) are a powerful way to defend against these attacks, and generally considered to be a requirement for active and dynamic websites. This is particularly the case when serious money is involved, so you will always find that there is a WAF protecting banking websites, gambling/trading platforms and cryptocurrency exchanges.
A WAF works by monitoring HTTP/S traffic going to and from the website and, like a network firewall, applies rules such as ‘block’ or ‘allow’ based on whether it believes the traffic is malicious or safe.
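As a sketch of how this rule-based inspection might work, the toy example below tests parts of an incoming request against a few regex rules. The rule set, names and patterns are illustrative only – a real WAF applies far richer parsing, normalisation and scoring:

```python
import re

# Illustrative WAF-style rules: each is a regex tested against request fields.
# Patterns and names are made up for this sketch, not taken from any product.
RULES = [
    ("sql_injection", re.compile(r"(\bUNION\b.+\bSELECT\b|'\s*OR\s+'1'\s*=\s*'1)", re.I)),
    ("path_traversal", re.compile(r"\.\./")),
    ("xss", re.compile(r"<script\b", re.I)),
]

def inspect(path: str, query: str, body: str = "") -> str:
    """Return 'block (<rule>)' for the first matching rule, else 'allow'."""
    for name, pattern in RULES:
        for field in (path, query, body):
            if pattern.search(field):
                return f"block ({name})"
    return "allow"
```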
Denial-of-Service (DoS, DDoS) attacks can also be mitigated by a WAF: when it detects the start of such an attack, data flows are diverted via the provider’s network – which will normally have more than enough capacity to manage them. Scrubbing software separates good and bad data packets, and routes the good ones to their proper destination.
An important benefit of using a WAF is that website traffic is routed through the provider’s CDN (Content Delivery Network). A good CDN will deliver a substantial drop in latency and bandwidth utilisation, along with increased website reliability and performance, due to traffic optimisation and content caching.
Application Denial-of-Service Protection
DoS and DDoS attacks have grown in size and sophistication during the last couple of years, and some, such as the ‘Slowloris’ attack, are unspectacular but nonetheless highly effective and difficult to manage. They can be defended against by specialised protection in the WAF, but it’s important to select a provider with a high-capacity network that uses advanced analytics to determine when a Denial-of-Service attack is happening – and when it isn’t. Organisations that are genuinely concerned about DDoS – for example gambling or trading platforms – should aim for a vendor commitment to mitigate “any” attack within 3 seconds, and an uptime SLA of 99.999%.
Bad Bot Management
‘Bots’ are software agents that do automated tasks on the internet, and account for nearly 40% of internet traffic. Good bots are owned by legitimate companies and perform tasks that most people would agree are socially useful. For example, Google’s Googlebot and Baidu’s Baiduspider crawl websites looking for the information that they use in their search engines. However, nearly half of bot activity is by ‘bad bots’ which are used for malicious purposes, including:
Vulnerability scanning
Websites are probed in the same way that a hacker would probe them, in a search for vulnerabilities that might include out-of-date software versions (fingerprinting) or susceptibility to exploits such as SQL injection attacks. Found vulnerabilities may then be exploited by attackers.
Credential stuffing
A form of Account Takeover attack in which a massive volume of stolen username/password pairs is tested against the login page
Scraping
Capture of prices (for example of flights or hotel rooms) or other information from a website in order to be able to undercut the business or do arbitrage transactions against it
OWASP Top 10 Mitigation
The OWASP Top 10 is a community-led, open-source project from the OWASP Foundation – a global community of thousands of security analysts who share information about information security threats. It focuses on Web Application Security, listing what are currently considered the ten most critical risks (the current version is from 2017). Current members of the list include Injection, Cross-Site Scripting, Security Misconfiguration and Using Components with Known Vulnerabilities.
Because the OWASP Top 10 is such a comprehensive and rigorously reviewed list of threats and vulnerabilities to websites, it is widely used as a security objective – for example by the PCI DSS credit card security standard.
The main ways to protect websites and web applications against the OWASP Top 10 and other threats and vulnerabilities are Web Application Scanning, use of Web Application Firewalls (WAF) and multi-factor authentication (MFA).
Web Application Scanning
Web Application Scanners probe websites for vulnerabilities in much the same way as a hacker, but using automated techniques.
Step one is to chart the website and web application structures using a ‘crawler’ that simulates user/browser behaviour to crawl through links and inputs. Then the software scans pages and endpoints for potential vulnerabilities and misconfigurations, including – but not limited to – the current OWASP Top 10. It’s important to use a thorough scanner that can detect sophisticated risks including ‘out-of-band’ vulnerabilities such as delayed XSS and server-side request forgery, and that doesn’t return an excessive number of false positives.
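The crawl step can be illustrated with a toy link and input-field extractor built on Python’s standard html.parser. A real scanner’s crawler would also execute JavaScript, manage sessions and de-duplicate URLs; the page below is a made-up example:

```python
from html.parser import HTMLParser

# Toy crawler component: collect the links a scanner would follow and the
# input fields it would later probe for injection vulnerabilities.
class CrawlParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []   # href targets found in <a> tags
        self.inputs = []  # names of <input> fields found in forms

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "input" and "name" in attrs:
            self.inputs.append(attrs["name"])

page = '<a href="/admin">Admin</a><form><input name="q"><input name="user"></form>'
parser = CrawlParser()
parser.feed(page)
```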
The result of the scan is a report that can be tailored to fit relevant objectives and frameworks (e.g. GDPR, ISO27001, PCI DSS).
The software also helps with remediation: Temporary protection is achieved by exporting found vulnerabilities to a WAF which can specifically defend against them before they are fixed. It should then feed updated information based on follow-up scans to continuous integration and deployment systems, issue management and tracking platforms, to support the work of the development team.
API Security
APIs are used for interaction between different applications, services (and microservices) and platforms. They massively improve user experience, speed of access and scalability. APIs are ubiquitous nowadays – in part due to the rapid rise of microservices and IoT devices – and can help with security as they intermediate between whoever is requesting to interact with a server and the server itself.
However, APIs also present security risks, adding to an organisation’s attack surface and potentially offering exploitable vulnerabilities to attackers. Examples of these risks include:
Denial-of-Service attack on web API: An attempt to overwhelm the API’s capacity with excessive volume of data requests
Excessive Data Exposure: All object properties are exposed, relying on clients to perform data filtering before displaying to the user
Broken User Authentication: Attackers can compromise authentication tokens or find other ways to assume other users’ identities
Broken Object Level Authorisation: ID of object that is sent within a request is manipulated, giving unauthorised access to sensitive data
Authentication and authorisation are important aspects of API security. WAFs can also provide useful protection – by use of signatures and other methodologies to block vulnerability exploits, prevention of traffic interception for MITM (Man in the Middle) attacks, and diversion of malicious flows in the event of an attempted DDoS attack.
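As an illustration of the object-level authorisation risk listed above, a handler should verify ownership server-side rather than trusting the object ID supplied in the request. The data and function names below are hypothetical:

```python
# Hypothetical order store: object ID -> record with its owner.
ORDERS = {101: {"owner": "alice", "total": 40}, 102: {"owner": "bob", "total": 99}}

def get_order(authenticated_user: str, order_id: int) -> dict:
    """Return the order only if it belongs to the authenticated user."""
    order = ORDERS.get(order_id)
    if order is None:
        raise KeyError("not found")
    if order["owner"] != authenticated_user:
        # A manipulated ID in the request: deny rather than leak another
        # user's data (the Broken Object Level Authorisation scenario).
        raise PermissionError("forbidden")
    return order
```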
RASP (Runtime Application Self-Protection)
RASP is an innovative technology that can provide protection against some challenging types of threat, in particular zero-day attacks on web applications but also malicious activity from insiders or partners of the organisation (‘east-west traffic’). It is built into the application runtime environment via an autonomous plugin, providing active protection at the application layer by inspecting data in the context of how it will be used.
RASP solutions can block a wide variety of threats, including but not limited to the OWASP Top 10 – for example command injection, insecure cookies, path traversal and weak authentication. They can then deliver the following information: origin of threat (IP with cookie detail); payload content; location of attack (URL, stack trace); exact time of attack.
Use cases include pre-production security testing, visibility into real-time attacks with data sent to the SIEM, protection of legacy applications and combination with a WAF to provide solid defence-in-depth for web applications.
Secure Email Gateway (SEG)
The SEG is one of the first lines of defence, along with Security Awareness Training, against ransomware and other malware-based attacks that use email and social engineering as attack vectors. Unlike training-based approaches, SEGs provide digital protection and do not rely on employees giving their full attention to threat indicators that can sometimes be obscure and hard to notice – especially in the circumstances of sophisticated spearphishing attacks.
Protection is provided for outbound and inbound emails, with the former focusing on malicious or negligent transmission of sensitive data. Inbound protection covers a wide range of undesirable payloads including spam, malware and phishing attacks. Like most security software, SEGs can be deployed on-premises or in the cloud, with the latter generally being preferable given the wide usage of cloud-based email services such as Microsoft Office 365 and Google Workspace (formerly G Suite), and the fact that many companies are in the process of cloud migration.
SEG software provides a mix of the following functionality:
DMARC email validation
This uses SPF and DKIM protocols to authenticate email addresses against the domain that they claim to originate from – protection against ‘direct domain spoofing’. However, this type of validation is not effective against the common tricks of using look-alike domains, recently registered domains or displaying a different name than the actual address.
Protection against impersonation
Looks for features that indicate a high risk such as header anomalies, suspect email body content and external domain similarity
Malicious URL protection
Compares URLs in incoming emails to known malicious sites that are used for phishing attacks or malware delivery
Attachment scanning
Some combination of signature-based and next-gen malware detection applied to attachments.
Advanced spearphishing protection
Advanced SEG software may include machine learning and Natural Language Processing based techniques to defend against zero-day phishing attacks of the type used in Business Email Compromise. It may also be able to operate within individual mailboxes – potentially removing or neutralising malicious emails across the organisation.
Security Awareness Training
Security Awareness Training targets the human ‘social engineering’ vector, which is the way that many of the most destructive and expensive attacks begin, including Business Email Compromise, data breaches and ransomware. It is based around training employees to recognise and avoid signs of malicious activity such as phishing and other suspicious emails, but can also cover a broader range of subjects such as GDPR compliance and password security. Training around information security is a requirement of frameworks such as ISO27001 (Control 7.2.2).
The first step is normally a period of baseline testing of employees to test their current level of awareness. Training is done using modules, videos, simulated phishing mails and other methods, and then followed up with further testing, which normally shows steady improvement and implicit ROI.
Network Firewalls
Network firewalls monitor and manage inbound and outbound traffic on the perimeter of a private network such as a corporate intranet. The firewall is configured with policies that determine what traffic is allowed to pass and what is blocked. Early firewalls had very simple policies, largely based on the port being used, the direction of flow and the related IP addresses. However, they are now much more sophisticated (‘Next-gen firewalls’) and can examine the contents of packets to see what applications they are associated with, and also attempt to find and block malware and intrusion attempts.
There is currently a question mark over the relevance of network firewalls, given the recent phenomenon of ‘network inversion’, whereby many or most users, devices, applications and data are now outside the corporate network. This situation has of course been exacerbated by the shift towards working from home, and the trend among organisations to shift computing resources and data to the cloud (‘cloud migration’). One outcome of this has been that organisations are starting to adopt a user-to-cloud approach that delivers better performance, security and scalability, operates in the cloud along with other services, and is called Secure Access Service Edge (SASE). SASE implementation is consistent with the view that network firewalls will be replaced by CASBs and SWGs.
Secure Access Service Edge (SASE)
Secure Access Service Edge (SASE – pronounced ‘Sassy’) is a term that was coined by Gartner in August 2019, in a paper called ‘The future of Network Security is in the Cloud’. In fact, it doesn’t represent a new technology, but rather a form of network architecture that brings together a range of different existing network and security technologies.
There are two main recent developments behind the recent interest in the SASE concept: Firstly, organisations have seen a huge shift towards cloud usage and increased working from home among employees, which has pushed more and more users, devices, applications and even data outside the perimeter of the corporate network (see Network Firewalls). This has created a number of new risks around data security, and also led to inefficient data journeys (‘tromboning’) as information travels from dispersed users, to cloud services and back again via the corporate network over VPNs. Secondly, SD-WAN technology combined with the extensive global private network infrastructures of SASE providers now offers a reliability that is competitive with MPLS connections and is also much cheaper: SASE solutions are quick (lower latency) and reliable – and they also offer much better scalability as they are built from cloud services.
There are also a number of security benefits that SASE can deliver. VPNs can be replaced by Zero Trust Network Access solutions which have lower latency due to traffic optimisation over private networks, and are not routed via a central location. They are also more secure, as they do not provide the VPN’s corporate desktop-like access to the entire corporate network – user/device access is authenticated directly to applications according to the zero-trust model.
As data flows directly over the internet/cloud services, security is also managed at this level, and SASE providers offer full-stack security functionality that can be managed and configured across all users and devices from ‘one pane of glass’ as SaaS. This functionality includes but is not limited to: Cloud Access Security, Secure Web Gateways, Next-Gen Firewalls that are available wherever the organisation does business, Data Loss Prevention and anti-threat software.
Endpoint Protection (EPP)
Endpoints are end-user devices such as desktop PCs, laptops and mobile devices. The recent attention to the concept of ‘endpoint security’ – as with the shift of attention from network firewalls towards cloud solutions and particularly SASE – is due to the increasing amount of time that users and devices spend outside the corporate perimeter, and more frequent use of mobile devices for work. These devices are often unconnected to the corporate network, and very autonomous, so a big chunk of security needs to be local to them (i.e. installed on the device itself).
In terms of anti-malware software, EPP solutions normally combine signature-based protection with next-gen anti-malware against zero-day threats. They may also include specific protection against fileless attacks (‘living off the land’), ransomware and whatever else is a high-profile concern at the time.
Other functionality that EPP solutions sometimes provide includes DLP/Insider Threat Protection and local encryption.
Benefits of the centralised management of EPP solutions include the fact that endpoints can be configured and have their software updated remotely by the administrator. Also, information that feeds back from all of the endpoints that have the EPP agent installed can be used to help monitor and manage security over the whole network.
Traditional anti-malware software spots viruses and other malicious software by inspecting data traffic, and comparing files that pass by with known signatures of malware. These signatures are hashes of entire or partial files (normally generated using SHA-1 or MD5 algorithms) that have been found to be malicious by antivirus firms, and which are added to software owners’ databases during updates.
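A minimal sketch of the signature lookup is below, using SHA-256 rather than the older SHA-1/MD5 mentioned above. The ‘sample’ bytes are a harmless stand-in, not real malware:

```python
import hashlib

# The vendor's signature database: a set of hashes of known-bad files.
KNOWN_BAD = set()

def add_signature(sample: bytes) -> None:
    # An AV vendor would push this hash to customers during an update
    KNOWN_BAD.add(hashlib.sha256(sample).hexdigest())

def is_known_malware(data: bytes) -> bool:
    # Signature match: exact hash lookup, so any change to the file evades it
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD

add_signature(b"malicious payload bytes")  # stand-in for a real sample
```

Note that changing even a single byte of the file produces a different hash, which is exactly why polymorphic malware defeats this approach.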
The problem with the signature-based approach is that it only works with known types of virus, so organisations that are exposed to malware that has not already been observed will not be protected. This might happen if the organisation has been targeted by a very sophisticated attacker (e.g. a nation-state-supported hacking group or APT) that has the time and resources to build its own advanced malware and send it directly into the victim’s devices and network.
It has also become much easier for criminals and amateur hackers to obtain or build signature-less malware themselves from internet and darknet sources, which can then be used in ransomware and data-breach attacks. At the current edge of this technology – which may be of APT or criminal origin – are families of polymorphic and metamorphic viruses that modify and disguise themselves automatically and autonomously in order to evade signature-based detection. All these forms of malware, which have not yet been assigned a signature by security companies, are known as ‘zero-day threats’.
‘Next-Gen’ in this case basically means ‘not signature-based’. Current alternatives include:
Sandboxing: Suspicious files and programs are intentionally executed (‘detonated’) in a virtual environment in order to observe their behaviour and see if they behave in a malicious way.
Heuristics: Files are inspected for more general features and behavioural attributes of known malware types. These might include generic digital signatures that are shared by different members of a virus family, but perhaps padded out with meaningless code to make them resistant to signature-based detection.
Behavioural detection: Software that monitors process execution paths for activity that is likely to signal the existence of malware.
Machine-learning: Statistical analysis is used to train computers to recognise signs that indicate the presence of malware in files, based on a wide range of features.
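As a toy example of a heuristic signal, high byte entropy is a weak indicator of packed or encrypted content, which malware often uses to evade signature matching. The threshold below is illustrative, not a rule from any real product:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 (uniform) up to 8.0 (random)."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_packed(data: bytes, threshold: float = 7.0) -> bool:
    # Packed/encrypted payloads approach 8 bits of entropy per byte;
    # plain text and ordinary code sit much lower. Weak signal, not proof.
    return shannon_entropy(data) > threshold
```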
Endpoint Detection and Response (EDR)
EDR – sometimes referred to as TDR (Threat Detection and Response) – is a version of Endpoint Protection (EPP) that gathers and monitors traffic data from all endpoints across the network, looking for suspicious activity. It deploys Next-Gen Anti-Malware software that can detect polymorphic and fileless types of malware, and can provide specialised protection against ransomware attacks and data breaches by pointing out signs that this type of activity is going on (for example unusual file encryption). EDR requires active monitoring, and so is normally deployed by organisations that are large enough to have a SOC (Security Operations Centre).
Industrial (ICS) Security
Modern industries including power generation, manufacturing, pharmaceuticals and mass transport make extensive use of ICS (Industrial Control System) networks in their processes. ICS was a huge driver of the ‘third industrial revolution’, that itself was arguably kicked off by the invention of the Programmable Logic Controller (PLC) in 1968. Devices on these networks including PLCs and Human-Machine Interfaces (HMIs) communicate using ICS protocols such as Modbus that were not created with security in mind (for example, they are not encrypted and do not require authentication). In addition, the devices themselves can be hard to manage due to their age and fragility. This means that, if an attacker can infiltrate an ICS network, it is normally relatively easy for them to steal data or disrupt the industrial process that it is controlling – an activity that can cause serious damage, expense and even loss of life.
The main way to secure an ICS network is to ensure that the related corporate IT network itself is well protected, in the same way that all organisations manage information security – i.e. by defending against phishing attacks, using multi-factor authentication and so on. That’s because – up to now – most successful ICS attackers appear to have accessed the ICS network via successful infiltration of the main corporate network. The ICS network itself can be protected by a two-stage approach: Firstly, software is used that creates a full inventory of all devices on the industrial network, and investigates security-relevant features of the devices such as their configuration and software. Secondly, logging and monitoring software (SIEM – ‘Security Information and Event Management’) inspects traffic that is going to and from the devices on the network, looking for anomalous, suspicious activity that might indicate an attack.
It’s important to recognise that whereas in the past hackers had no alternative ICS infiltration route apart from via the ‘enterprise domain’, nowadays industrial devices and networks are often directly exposed to the internet via IoT. Therefore device hardening with secure configuration and up-to-date firmware is also a vital aspect of ICS security.
You can find more about Industrial Cybersecurity here.
Security Information and Event Management (SIEM)
‘Security Information and Event Management’ (SIEM) software aggregates, monitors and inspects traffic from multiple sources on the organisation’s network. Firstly it normalises data coming from different devices and types of software, so that streams from different sources can be used together in a single application – for example in a machine learning algorithm. Once it has done this, it uses different techniques to look for potentially malicious activity: These might include inspection for signatures of malicious activity (e.g. a series of many failed login attempts), machine-learning techniques that learn about the likelihood of an attack by analysis of the relationships between features of network traffic, and correlation analysis.
The single main objective of a SIEM is to filter a huge amount of disparate data into a small amount of actionable security alerts, with a low false positive rate. As it is optimally used in situations where there are many, high volume data sources to be managed, it’s a type of software that tends to be used by large organisations with a SOC (Security Operations Centre).
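The failed-login signature mentioned above can be sketched as follows. The event format, window and threshold are illustrative – a real SIEM normalises and correlates many more fields from many more sources:

```python
from collections import defaultdict

def detect_bruteforce(events, window=60, threshold=5):
    """Alert on users with >= threshold login failures inside a sliding window.

    events: iterable of (timestamp_seconds, user, outcome) tuples that have
    already been normalised from different log sources into one schema.
    """
    failures = defaultdict(list)
    alerts = set()
    for ts, user, outcome in sorted(events):
        if outcome != "failure":
            continue
        failures[user].append(ts)
        # Keep only the failures that fall inside the sliding window
        failures[user] = [t for t in failures[user] if ts - t < window]
        if len(failures[user]) >= threshold:
            alerts.add(user)
    return alerts
```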
Data Discovery and Identification
If an organisation wants to protect its sensitive data, then the first thing that it needs to do is find out what data it has and where it is. Data discovery software (often part of a broader DLP package) can identify regulated data and intellectual property on the network, on endpoints and when it’s stored on cloud services.
Monitoring is continuous, automatically checking new or modified files of many different types, with some products offering Optical Character Recognition that can convert images into text for analysis. Functionality known as ‘fingerprinting’ enables the software to recognise sensitive documents and files based on algorithmically generated text strings. Compliance breaches can be remediated in real time with responses including quarantining, deleting or encrypting the exposed data.
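A simple sketch of pattern-based discovery is below. Real products add checksums (e.g. the Luhn test for card numbers), document fingerprints and OCR; the patterns here are deliberately basic and illustrative:

```python
import re

# Illustrative patterns for regulated-data shapes. A production scanner
# validates matches further to keep the false positive rate down.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),  # 16 digits, optional separators
}

def discover(text: str) -> dict:
    """Return {label: [matches]} for every pattern that fires on the text."""
    return {label: p.findall(text) for label, p in PATTERNS.items() if p.findall(text)}
```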
Data Classification and Labelling
Another important aspect of DLP is data classification or labelling. Software that does this adds metadata tags to files to indicate their level of sensitivity, be it public, confidential or some other category. Classification labels of this type enable other software easily to control transmission or to monitor usage.
Data Loss Prevention (DLP)
DLP software monitors and controls transmission of sensitive data. This might be within the organisation (for example from an authorised employee to unauthorised employee), or out of it – perhaps to a private account or a cloud service. The main types of DLP are:
Endpoint DLP
Monitors data being sent from the user’s endpoint by transmission methods including email, upload to a website or cloud service, instant messenger services including Slack, printers and USBs. Applies to endpoints within or outside the corporate network perimeter.
Cloud DLP
Data discovery and enforcement of data transmission policies in the cloud and cloud-delivered applications. Normally able to scan the full range of cloud storage services and to monitor and control uploads, downloads and sharing among these and other cloud applications.
Network DLP
Becoming less commonly used due to the increased prevalence of cloud services, this type of DLP monitors data that is transmitted from the corporate network via email and the web.
Cloud Access Security Broker (CASB)
CASB software is placed between users and cloud applications, monitoring and controlling activity (‘data-in-motion’) in both directions. There is also an out-of-band form of CASB (using APIs) that is able to inspect data in cloud storage and also monitor ‘east-west’ traffic between cloud services. Examples of cloud services and applications that CASB software can monitor and control access to include Microsoft 365, Salesforce, Google Drive, and also LinkedIn and Facebook.
The type of CASB that monitors user <-> cloud traffic (also known as ‘inline deployment’) does so by using proxies that sit in front of the cloud services themselves (‘reverse proxy’) or forward proxies that are installed on user devices. The benefit of the reverse proxy approach is that it can be applied immediately to any device (e.g. BYOD or third-party laptops used by contractors or auditors). However, it can only be used with managed services such as Office 365 and Salesforce. Forward proxy CASBs can handle managed, unmanaged and shadow-IT cloud services.
CASBs typically provide the following functions:
Cloud discovery
Shows all cloud services that are used by the organisation, at the level of devices, users and data volumes. Discovery software can sometimes provide information about how secure different applications are, so that organisations can avoid risky ones.
Visibility and control
Traffic to and from cloud services is monitored, and can be controlled with a range of different policies that limit certain user groups to specified activities (e.g. download only) on specific instances of specific applications. Policy granularity can be very fine.
DLP and Compliance
Data Loss Prevention applied to the cloud, including functionality that enables discovery of data-at-rest in managed services such as AWS, Google Drive and Dropbox. This can be used for compliance with regulations such as GDPR and PCI DSS.
Threat protection
CASBs offer a range of anti-threat solutions including anomalous user behaviour detection, malware protection and the ability to block access to known malicious websites (an overlap with SWG functionality).
Secure Web Gateway (SWG)
SWG software sits between users and the internet, monitoring and controlling internet HTTP/S traffic like a firewall. The main aims of the SWG are to protect the organisation from websites that are malicious or to which access is unauthorised by company policy, and to apply security policies to web-based apps (e.g. Skype). This is done by leveraging SWG features that include but are not restricted to the following (SWGs can also apply DLP and anti-malware services):
URL filtering: URLs that employees attempt to access are compared against a library of malicious or unauthorised websites, and blocked if they are on this blacklist.
Web Application Control: Security policies are applied to manage access to web-based applications such as VOIP and instant messaging apps.
SSL/TLS Inspection: HTTPS traffic is proxied through the SWG and decrypted as it passes through. The SWG inspects the decrypted traffic and enforces usage and security policies including malware scanning, before re-encrypting the traffic and routing it to its originally planned destination.
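The URL-filtering step can be sketched as a hostname check against a blocklist, including parent domains so that subdomains of a blocked site are also caught. The domains below are placeholders, not real malicious sites:

```python
from urllib.parse import urlsplit

# Placeholder blocklist of known-bad domains (illustrative names only).
BLOCKLIST = {"malware-delivery.example", "phish.example"}

def filter_url(url: str) -> str:
    """Return 'block' if the URL's host or any parent domain is blocklisted."""
    host = urlsplit(url).hostname or ""
    parts = host.split(".")
    # Check login.phish.example, then phish.example, etc.
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BLOCKLIST:
            return "block"
    return "allow"
```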
Multi-Factor Authentication (MFA)
Authentication is an action that proves something to be true or valid. In information security, authentication normally refers to the login process, during which one or more pieces of evidence (known as ‘factors’) are presented in order to show that whoever is attempting to log in to a computer resource – which might be a device, website or VPN connection – is who they say they are. Factors are normally, but not necessarily, used in conjunction with a username (sometimes only one factor might be required – for example a fingerprint). The main types of factor are:
Knowledge based: A piece of information known by the user such as a password or PIN
Possession based: A physical object in the hands of the user such as a bank card or mobile device.
Biometric: A physical characteristic of the user such as a fingerprint or walking pattern (gait)
Location: Evidence of where the user is, such as GPS location or IP address
The single main problem with authentication is that it is most commonly done by using a (username, password) pair, and this method has become vulnerable to automated attacks, including the following:
Brute force and dictionary attacks
In a brute force attack, an adversary uses a computer application to attempt unauthorised login by combining a known username with all possible password combinations. Although there may be trillions of possible combinations for an eight-character alphanumeric password, attackers can harness GPU clusters or rented cloud computing power to work through them in minutes or hours.
Dictionary attacks are a refined form of brute force attack that restricts the guessed passwords to valid words in one or more languages, or variants of those words (for example monkey, MonKeY, M0nk5y), plus other common non-random strings such as 123456. This massively reduces the search space and the time taken to guess the password.
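The scale of the brute-force search space, and how sharply a dictionary shrinks it, can be checked with simple arithmetic (the guess rate and dictionary size below are illustrative assumptions, not benchmarks):

```python
# Eight-character alphanumeric password: 26 lower + 26 upper + 10 digits = 62 symbols
search_space = 62 ** 8
print(f"{search_space:,} combinations")  # ~218 trillion

# Assumed rate for a GPU rig attacking a fast hash (illustrative figure only)
guesses_per_second = 10 ** 11
print(f"~{search_space / guesses_per_second / 60:.0f} minutes to exhaust")

# Assume a dictionary of ~1 million words with ~1,000 variants each
dictionary_space = 1_000_000 * 1_000
print(f"Dictionary covers ~{dictionary_space / search_space:.6%} of the full space")
```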
Password spraying
Similar to a dictionary attack, in this type of attack a small group of commonly used passwords is tested against a large number of accounts (usernames).
Credential stuffing
In this form of attack, (username, password) pairs that have been stolen elsewhere, or purchased on the darknet, are tested on commercial websites and applications such as shopping, social media and gambling sites.
In each of these cases the intention is to access and abuse accounts (‘Account Takeover’) – often with the result that businesses and individuals are inconvenienced and lose money. When these attacks are directed at resources that allow access to corporate networks, such as RDP and VPN, the attacker may be able to infiltrate the network, insert malware (for example ransomware) and/or steal data.
Multi-Factor Authentication is an effective way to mitigate all these types of login attack: at login the user has to enter their username and password along with one or more of the factors mentioned above (e.g. a PIN sent to their mobile phone, or a fingerprint). Attackers that do not have access to this additional factor are unable to access the resource. MFA is such an effective information security control that it is required by virtually all security frameworks, and strongly recommended wherever there is a risk of account takeover attacks on unregulated venues (for example a cryptocurrency exchange or gambling website). Apart from phishing, login attacks (particularly on RDP) are the most common attack vector for ransomware.
Single Sign-On (SSO)
Single Sign-On (SSO) is a type of authentication and authorisation system that allows users to log in to multiple applications using one set of credentials. The credentials might be (username, password), or access may be via MFA or even a single identification factor such as a fingerprint. An important benefit of SSO is that, once the user has logged in once, they have direct access to the other applications that they use without having to repeat the login process. A good example of this is the automatic login to YouTube available to users who have already logged in to Google.
Benefits of SSO include the fact that users only have to remember one password, so it can be a strong one that is hard to guess – or ideally MFA can be enforced on the login. Secondly, user experience is significantly improved, as there is no need to go through the arduous process of multiple logins every morning.
The basic explanation of how SSO works is as follows: a user that wants to access one or several web applications is routed (or has already logged in) to an Identity Provider (IDP) such as Google, which authenticates the user. The IDP subsequently issues keys (tokens) to the app(s), authorising the authenticated user to access them. The tokens (which are encrypted and signed) can also specify the exact authorisation scope of the user within the app (i.e. which files they can view). An important security feature of SSO schemes is that passwords are shared only between the user and the IDP – the Service Provider (SP) applications never see them.
There are several different SSO configurations. One popular variant uses SAML (Security Assertion Markup Language):
As with all SSO configurations, there are three types of entity involved: the User, an Identity Provider (IDP) and a Service Provider (SP), which is normally a web app that the User wants to access.
Step 1: The User attempts to log in to the SP from a browser.
Step 2: The SP replies with a SAML request that contains the SP’s identification, and causes the browser to redirect the User to the IDP.
Step 3: The IDP authenticates the User. This may happen automatically if the User is already logged in to the IDP. Otherwise, the IDP will request an authentication factor (e.g. password) or factors in the case of MFA (e.g. password plus fingerprint).
Step 4: The IDP generates a SAML response (a ‘token’) that is routed via the browser to the SP. Note that this response, as with other communications in this process, is encrypted and signed.
Step 5: The SP receives the SAML response and verifies it. On successful verification, the SP will grant access to the User, subject to access rights that have been authorised by the IDP.
Clearly a major security concern regarding SSO is that if the User/IDP relationship is compromised (for example if passwords are revealed), then all the related applications are exposed. Password security therefore becomes even more important, and it clearly makes sense to enforce MFA strictly.
Risk-Based Authentication (RBA)
Risk-Based Authentication (RBA) is a form of authentication that varies the stringency of the login process depending on features exhibited by whoever or whatever is presenting themselves as the user. These features might include the device from which access is being requested (is it a known computer or mobile device?), the location of the user or the time of the login attempt, and could also include a biometric factor. Based on this information, which can be described as the user’s ‘context’, the RBA software assigns a quantitative risk measure or score. A low score will cause the authentication system to be less stringent than a high score – perhaps just asking for a password. A high risk score will normally trigger the requirement to provide an additional factor or factors (e.g. a PIN code delivered by email).
The main benefit of RBA is an improvement in user experience: MFA, which can be frustrating for some people if they have to deal with it very frequently, is reserved for login attempts that look suspicious – for example from an unrecognised device at a strange time.
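A risk score of this kind can be as simple as summing weights over contextual signals and mapping the total to a login challenge. The weights and thresholds below are purely illustrative:

```python
def risk_score(known_device: bool, usual_location: bool, usual_hours: bool) -> int:
    """Sum illustrative weights for each suspicious contextual signal."""
    score = 0
    if not known_device:
        score += 50  # unrecognised device is the strongest signal here
    if not usual_location:
        score += 30
    if not usual_hours:
        score += 20
    return score

def required_factors(score: int) -> list[str]:
    """Map the risk score to the stringency of the login challenge."""
    if score < 30:
        return ["password"]                           # low risk: password only
    if score < 70:
        return ["password", "email_pin"]              # medium risk: add a PIN
    return ["password", "email_pin", "fingerprint"]   # high risk: full MFA

print(required_factors(risk_score(True, True, True)))    # ['password']
print(required_factors(risk_score(False, False, True)))  # score 80: full challenge
```

Production RBA engines replace the hand-picked weights with statistical or machine-learned models over many more signals, but the shape of the decision is the same.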
Lifecycle Management
Lifecycle management is a feature of Identity and Access Management software that enables more efficient treatment of workforce access rights. It covers the lifecycle of each employee, from when they join the organisation until they leave – and during temporary breaks such as maternity leave (“joiner, mover, leaver”).
Access to applications is managed from a single point of admin control, across groups. This means that when someone starts at the company, they can be immediately assigned user credentials that authorise access to all the apps that their group uses. Equally, when they leave, all access rights can be revoked in real time.
From a security perspective, lifecycle management is helpful because it facilitates appropriate allocation of access rights, and eliminates the risk that people who leave the company – or are temporarily away – retain logins that would enable them to disrupt applications or steal sensitive data. Users and the HR department also benefit from no longer having to manage application access piecemeal, setting up and terminating accounts one by one.
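The joiner/mover/leaver pattern reduces to keeping an app-entitlement mapping per group and reconciling a user’s accounts on each lifecycle event. A minimal sketch, with invented group names and apps:

```python
# Hypothetical mapping of groups to the apps their members may access
GROUP_APPS = {
    "engineering": {"email", "git", "ci"},
    "finance": {"email", "ledger"},
}

# Live record of each user's provisioned accounts
access: dict[str, set[str]] = {}

def joiner(user: str, group: str) -> None:
    """Provision every app the user's group is entitled to."""
    access[user] = set(GROUP_APPS[group])

def mover(user: str, new_group: str) -> None:
    """Re-provision to exactly the new group's entitlements."""
    access[user] = set(GROUP_APPS[new_group])

def leaver(user: str) -> None:
    """Revoke all access rights in one step."""
    access.pop(user, None)

joiner("alice", "engineering")
mover("alice", "finance")   # git and ci revoked, ledger granted
print(access["alice"])      # {'email', 'ledger'}
leaver("alice")
print(access)               # {} - no orphaned accounts left behind
```

Real deployments drive the same reconciliation from the HR system of record and push the changes to each application over provisioning APIs such as SCIM.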
Customer Identity and Access Management
Customer Identity and Access Management (CIAM) software bundles together various client identity and access functions such as MFA, SSO and directory services. The aim is to help the organisation manage client information in a secure way, while providing a seamless and secure user experience. There are also a couple of specific differences between customer identity and employee identity that can be managed by this type of solution:
Connected organisations use many different digital channels to enable customers to interact with their brands, including mobile, social media platforms and various different applications and websites. CIAM helps to provide a consistent and unified user experience across all the channels.
Customer numbers – and volumes of data associated with the customers – can change rapidly depending on marketing activity, seasonality and other reasons. CIAM software facilitates onboarding and offboarding of large numbers of customers, and maintains consistent service to them despite this volatility.