The changing face of cyber protection
Published May 2015, Australian Defence Magazine
Author: Richard Brown, Cogito Group
In the cyber security world, the ‘castle defence’ strategy has long been the traditional means by which organisations protect their data. Emphasis has focused on strong protection of the border: thick walls and a metaphorical drawbridge that could be raised to stop invaders completely.
Until now. These days that drawbridge needs to stay down for business to be done and unfortunately not everyone comes through the front door.
In a rapidly changing landscape, the trusted insider threat (Snowden) and Bring Your Own Device are but two examples that highlight that, although border protection is still important, a comprehensive, layered approach to security and authentication is essential.
Coupled with these factors, the post-September 11 environment has led to the need for increased interoperability, sharing and collaboration between government agencies and allies. As we actively invite ‘guests’ into our ‘castle’, it’s more important than ever that we maintain control over who they are (identity) and their access rights to information.
It’s a complicated scenario. We’ve witnessed the growth of the internet and the ever increasing connectivity of people and devices; dynamic intelligence over static intelligence and the borderless over the perimeter.
These days we need to adapt to internet scale rather than enterprise scale. Our systems are no longer just on our physical premises, but in the cloud and accessible via the internet – and they are accessed any time, day or night, from anywhere in the world. They are accessed not only by employees but by contractors, customers and partners. In the not so distant past, you needed to physically carry hardware out of a building to steal information. These days, with virtualisation, a server can be stolen remotely, simply as a file, or accessed and altered in a malicious way.
Today’s threats are becoming more and more sophisticated. Each day we read news of cyber security breaches that highlights their growing scale and sophistication.
This all adds up to requiring security solutions that are adaptable, scalable and integrated. We need to be able to provide flexible services to meet the operational tempo. They need to be managed in a way that combines encryption, access policies, key management, content security and of course, authentication and authorisation.
The new look castle
Boundary protection (the firewall) will always play a significant part in cyber security. The firewall sits at the most critical point in the network and needs to provide a centralised point of visibility and control over everything entering and leaving the network.
However, the port-based firewalls many organisations use are quite simply outdated. They provide limited value in a world where network boundaries are becoming harder to define and internet applications are exploding.
The next generation firewalls work on the premise that identity is key: application identification, user identification and content identification.
New and improved guards remain at checkpoints at the front of the castle, inspecting everyone and everything entering, irrespective of port and protocol. They use heuristic (behavioural) techniques to determine if there are any patterns in what the traffic is doing. This allows the identification of any troublesome applications. They no longer rely only on ‘known’ malicious content but can defend against zero-day vulnerabilities.
The old firewall just blocked everything, whereas next generation firewalls incorporate user and content identification. This means social networking applications, like Facebook, can be enabled for the whole organisation with different permissions set. For example, all users may have viewing access, but specific individuals or groups within the organisation (that have legitimate need to use it) may also be assigned edit access. Content identification functionality means IT departments can stop threats and prevent data leaks without having to invest in a pile of additional products and risk appliance sprawl.
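The kind of application- and user-aware policy described above can be sketched in a few lines. This is a toy illustration only: the application names, group names and rule format are hypothetical, not the configuration syntax of any real firewall product.

```python
# Toy sketch of a next-generation firewall policy: decisions are keyed on
# application identity and user/group identity rather than on port numbers.
# Rules are evaluated top-down; the first rule matching both the application
# and one of the user's groups decides which actions are permitted.

RULES = [
    # (application, group, allowed_actions) -- all names hypothetical
    ("facebook",   "marketing", {"view", "post"}),  # marketing may also post
    ("facebook",   "*",         {"view"}),          # everyone else: view only
    ("bittorrent", "*",         set()),             # blocked for all users
]

def allowed(application: str, groups: set, action: str) -> bool:
    """Return True if the first matching rule permits `action`."""
    for app, group, actions in RULES:
        if app == application and (group == "*" or group in groups):
            return action in actions
    return False  # default-deny for unlisted applications
```

A real device would also fold in content identification (inspecting what the traffic carries), but the first-match, identity-keyed structure is the essential difference from a port-based rule set.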
Identity is key
An ‘identity’ is the set of attributes that uniquely identify an entity. These days an ‘entity’ may be a person (an employee, a contractor), a device or even a third party (such as a partner, an agency or a service provider). Entities include users from outside the organisation and may represent a group or role.
The growth of the internet and the ever increasing connectivity of people and devices has meant the scope of identity management has grown. It’s no longer simply about managing the identity of people accessing services. Organisations now need to understand the relationships they have with identities.
A good identity solution is designed to handle complexity. It provides Adaptive Access Management. It knows the relationship between identities and can use this and other information to make dynamic decisions based on set rules. For example, it may detect when an identity is logging in from a different device, area or region and challenge for additional authentication.
If Corporal Peters logs into the DRN from a new device or from a different country, a modern, adaptable identity system will adjust to these uncertain circumstances and ask him for additional authentication beyond a simple password. If he had logged on from Townsville and 20 minutes later he looks like he is logging in from China, broader access management decisions might come into play, such as requiring further authentication factors or flagging the incident for further investigation.
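The ‘Townsville to China in 20 minutes’ check above is often called impossible-travel detection, and a minimal version fits in a few lines. The threshold and the record format here are illustrative assumptions, not any particular product’s behaviour.

```python
import math

EARTH_RADIUS_KM = 6371.0
MAX_PLAUSIBLE_SPEED_KMH = 1000.0  # assumed threshold: roughly airliner speed

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def requires_step_up(prev, curr, known_devices):
    """prev/curr: (timestamp_s, lat, lon, device_id) for consecutive logins.
    Demand extra authentication when the device is unrecognised, or when the
    implied travel speed between the two logins is physically implausible."""
    if curr[3] not in known_devices:
        return True
    hours = max((curr[0] - prev[0]) / 3600.0, 1e-6)
    speed_kmh = distance_km(prev[1], prev[2], curr[1], curr[2]) / hours
    return speed_kmh > MAX_PLAUSIBLE_SPEED_KMH
```

A login from Townsville followed 20 minutes later by one from Beijing implies a speed of many thousands of km/h, so the second login would be challenged; the same trip taken over several days would not.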
To this end, an effective identity solution provides security, auditing and metrics. It will not just protect information through segregation and workflow approval but provide valuable auditing, rectification and analytics.
It provides fraud prevention by enforcing separation and segregation of duties. Attestation is achieved through the seamless recertification of access requirements. This process ensures that access was not only required when first granted but continues to be required.
The benefits of an effective Identity Management solution are significant. Cost savings are achieved through the rationalisation of effort in the management of disparate processes and systems. Improved efficiencies are realised through automation, reduced errors, streamlined authorisation and reduced helpdesk calls. Enhanced audit and reporting capabilities improve compliance with security policies and regulations. An improved client experience is achieved through a streamlined user interface and simplified provisioning and de-provisioning of services, with self-service capability where appropriate.
The solution must be scalable in terms of the number of actors, relationships, and attributes. We must also keep in mind scalability of administration.
The ASD website states that Multi-Factor Authentication (MFA) remains one of the most effective methods an agency can use to prevent a cyber-intruder from gaining access to a network, propagating that access throughout it and reaching sensitive information.
When implemented correctly, the use of MFA can reduce risk of information loss and damage caused by unauthorised access.
Put simply, MFA is the provision of multiple pieces of information in order to perform tasks such as system authentication.
It is an approach to authentication that requires two or more of the three authentication factors:
- Something you know e.g. password, pin or pattern;
- Something you have e.g. smartcard/token; and/or
- Something you are e.g. biometric.

In this complex and evolving climate of advanced threats, virtualisation, regulatory mandates and mobility, it’s important to take a data-centric approach to protecting and controlling information.
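Combining two of the factors above is mechanical to demonstrate. The sketch below pairs ‘something you know’ (a password check, assumed done elsewhere) with ‘something you have’: a time-based one-time password as standardised in RFC 6238, the scheme behind most hardware and phone-app tokens. The `authenticate` wrapper is an illustrative assumption, not any vendor’s API.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password using HMAC-SHA1.
    `at` is a Unix timestamp; defaults to the current time."""
    counter = int(at if at is not None else time.time()) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def authenticate(password_ok: bool, submitted_code: str, secret: bytes) -> bool:
    """Both factors must pass: something you know AND something you have."""
    return password_ok and hmac.compare_digest(submitted_code, totp(secret))
```

Because the token and the server each derive the code independently from a shared secret and the clock, an intercepted code is useless within a minute or so, which is precisely why MFA blunts credential theft.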
The targeting of privileged accounts by a cyber-intruder can give the adversary access to the entire system or network, including administration or ‘trophy’ systems. Such privileged accounts may include those able to approve financial transactions or access sensitive information and databases. As such, they should be protected with more secure authentication methods such as MFA.
There is an inevitability about getting through the boundary. As internal and external risks grow, an additional mechanism to boundary defences to protect data is encryption.
At the root of trust of the crypto foundation are the cryptographic keys, and the security of these keys is imperative. Protecting the keys themselves against ever-evolving threats is a challenge in itself, as is deciding what happens should the storage of these keys fail.
Often encryption is implemented within the same system used to store the data. The problem with this is that many attacks target the system protecting the data: once an attacker has compromised the system that controls and protects the encryption keys, they also have the data. Another issue is management where there are many keys. There are a range of options for central, secure, redundant and automated key management for information stored in physical, virtualised and public cloud environments.
Once data is encrypted it can continue to pass through systems transparently, and be persistently available for decryption by authorised users. Administrators no longer need full access to everything; encryption technologies allow them to back up and restore the data without access to the unencrypted data. Data is secure throughout its lifecycle and it’s seamless: users may not even know the data is encrypted. It is secured by keys held in hardware, and the encryption can cover files, databases and virtual machines.
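The key-separation idea above — keep the keys out of the system that stores the data — is often implemented as envelope encryption. The toy sketch below shows only the structure: each record gets its own data-encryption key (DEK), and only a *wrapped* copy of that DEK, protected by a key-encryption key (KEK) held elsewhere (ideally in hardware), is stored with the ciphertext. The hash-based keystream here is an illustrative stand-in, not real cryptography; a production system would use AES and an HSM-backed KEK.

```python
import hashlib
import secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with a SHA-256 counter keystream.
    Illustrative only -- NOT a substitute for AES."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def encrypt_record(kek: bytes, plaintext: bytes):
    """Envelope encryption: fresh DEK per record, DEK stored only wrapped."""
    dek = secrets.token_bytes(32)            # per-record data-encryption key
    ciphertext = _keystream_xor(dek, plaintext)
    wrapped_dek = _keystream_xor(kek, dek)   # DEK never persisted in the clear
    return wrapped_dek, ciphertext

def decrypt_record(kek: bytes, wrapped_dek: bytes, ciphertext: bytes) -> bytes:
    """Only a holder of the KEK can unwrap the DEK and recover the data."""
    dek = _keystream_xor(kek, wrapped_dek)
    return _keystream_xor(dek, ciphertext)
```

The storage system sees only `wrapped_dek` and `ciphertext`, so a backup administrator can copy and restore both without ever being able to read the data — which is exactly the separation the paragraph above describes.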
Make sure the keys stay in the kingdom
These are all ‘enabling’ technologies that will create a future-ready security foundation that is modular and forward-looking, and that ensures security needs are met as threats, devices and user needs evolve.
Your information can even sit in the cloud: you remain in control of your data while outsourcing its storage.
However, in all these scenarios it must be stressed that the keys to the kingdom must remain under the control of the keepers of the castle.