Saturday, 28 February 2015
Some of the security threats faced by businesses are shown in the diagram below, which I drew a number of years ago. Although it is still relevant, I'm aiming to update it.
How many do you face on a daily basis?
Thursday, 26 February 2015
What is SSL
SSL has been in the news over the last year for having a number of high-profile vulnerabilities, but outside of the world of the encryption specialist, understanding of what it does is limited. This is fine, as security tools such as SSL and TLS are supposed to be transparent to the end user.
The Secure Sockets Layer (SSL) was a proprietary security protocol developed by Netscape; only versions 2 and 3 have been widely used. Version 2.0 was released in February 1995 and version 3.0 in 1996. The 1996 draft of SSL 3.0 was published by the IETF as a historical document in RFC 6101.
The protocol is based on a socket, which is the address of a service and consists of the internet protocol (IP) address, the port number and the protocol being used.
Hence the following are two distinct sockets
- 127.0.0.1:53 UDP
- 127.0.0.1:53 TCP
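The distinction can be demonstrated in a few lines of Python: the same loopback address and port can be bound over UDP and TCP at the same time, precisely because the protocol is part of the socket's identity. (The OS picks a free port here rather than using the privileged port 53.)

```python
import socket

# Illustration: the same IP address and port can be bound simultaneously
# over UDP and TCP, because the protocol makes them two distinct sockets.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("127.0.0.1", 0))            # let the OS pick a free port
port = udp.getsockname()[1]

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.bind(("127.0.0.1", port))         # same address and port, different protocol

same_endpoint = udp.getsockname() == tcp.getsockname()
print(same_endpoint)                  # one endpoint, yet two distinct sockets
udp.close()
tcp.close()
```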
As the name suggests, SSL is a form of security based around the socket on a server. Typically we expect to see SSL on a secure website, indicated by HTTPS in the URL (address) bar and the padlock icon.
Secure HTTP typically uses TCP port 443, whilst plaintext HTTP uses TCP port 80.
A key component from a user's perspective is the digital certificate associated with the SSL connection.
Creating an SSL certificate for a web server requires a number of steps to be completed; a simplified version of the process is:
- Generation of identity information including the private/public keys
- Creation of a Certificate Signing Request (CSR) which includes only the public key
- The CSR is sent to a CA, which validates the identity
- The CA generates an SSL certificate that is then installed on the server
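As a toy sketch of these four steps (illustrative only: the textbook-sized RSA numbers and the `www.example.com` subject are hypothetical, and real CAs sign with far larger keys and a defined certificate format):

```python
import hashlib

# Toy walk-through of the issuance steps above. The server key uses the
# classic textbook primes (61, 53) and the CA key (53, 59) -- nothing like
# real key sizes, purely to show the shape of the process.
server_n, server_e = 3233, 17            # server public key (private d stays private)
ca_n, ca_e, ca_d = 3127, 17, 2129        # CA public key and private exponent

# Steps 1-2: the CSR carries identity information and ONLY the public key.
csr = {"subject": "www.example.com", "public_key": (server_n, server_e)}

# Steps 3-4: the CA validates the identity, then signs a digest of the
# certificate contents with its private key to produce the certificate.
tbs = f"{csr['subject']}|{csr['public_key']}"
digest = int.from_bytes(hashlib.sha256(tbs.encode()).digest(), "big") % ca_n
certificate = {"tbs": tbs, "signature": pow(digest, ca_d, ca_n)}

# Anyone holding the CA's public key can now check the signature.
assert pow(certificate["signature"], ca_e, ca_n) == digest
```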
The process of proving the identity of a server is covered by my blog on PKI.
A server needs a number of cipher suites installed, allowing it to negotiate a common cipher that is available on both the client and server. A cipher suite consists of a number of parts:
- a key exchange algorithm
- a bulk encryption algorithm
- a message authentication code (MAC) algorithm
- a pseudorandom function (PRF)
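These parts can often be read straight from a suite's name; the modern suite used below is one common example (not one from the server list that follows):

```python
# Hypothetical illustration: splitting a cipher suite name into its parts.
name = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
kex_auth, _, bulk_and_hash = name.partition("_WITH_")
print(kex_auth)        # protocol, key exchange (ECDHE) and authentication (RSA)
print(bulk_and_hash)   # bulk cipher with key size and mode, then hash for MAC/PRF
```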
Examples of some cipher suites that might be found on a server are:
- SSL_RC4_128_WITH_MD5
- SSL_DES_192_EDE3_CBC_WITH_MD5
- SSL_RC2_CBC_128_CBC_WITH_MD5
- SSL_DES_64_CBC_WITH_MD5
- SSL_RC4_128_EXPORT40_WITH_MD5
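A simple sketch of how such names might be screened for legacy primitives follows; the token list is an illustrative assumption, not an authoritative weak-cipher registry. Every example suite above would be flagged.

```python
# Sketch: flag legacy primitives by token in the suite name (assumed list).
WEAK_TOKENS = {"RC4", "RC2", "DES", "MD5", "EXPORT40"}

def weak_parts(suite):
    """Return the weak primitives named in a cipher suite, sorted."""
    return sorted(WEAK_TOKENS.intersection(suite.split("_")))

print(weak_parts("SSL_RC4_128_WITH_MD5"))                   # ['MD5', 'RC4']
print(weak_parts("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"))  # []
```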
For a security service to be considered secure, it should support only strong cryptographic algorithms, such as those defined in the PCI DSS v3 glossary:
- AES (128 bits and higher)
- TDES (minimum triple-length keys)
- RSA (2048 bits and higher)
- ECC (160 bits and higher)
- ElGamal (2048 bits and higher)
For additional information on cryptographic key strengths and algorithms, see NIST Special Publication 800-57 Part 1 (http://csrc.nist.gov/publications/).
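As an illustration, a configuration check against the minimums above might look like this sketch (TDES triple-length keys are taken as 168 effective bits, an assumption made for the example):

```python
# Sketch: check an algorithm/key-length pair against the PCI DSS v3 minimums
# listed above. Algorithms not on the list are rejected outright.
PCI_MINIMUM_BITS = {"AES": 128, "TDES": 168, "RSA": 2048, "ECC": 160, "ElGamal": 2048}

def strong_enough(algorithm, key_bits):
    minimum = PCI_MINIMUM_BITS.get(algorithm)
    return minimum is not None and key_bits >= minimum

print(strong_enough("RSA", 2048))   # acceptable
print(strong_enough("RSA", 1024))   # too short
print(strong_enough("RC4", 128))    # not an approved algorithm
```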
SSL Handshake
The steps involved in the SSL handshake are as follows:
- The client sends the server the client's SSL version number, cipher settings, session-specific data, and other information that the server needs to communicate with the client using SSL.
- The server sends the client the server's SSL version number, cipher settings, session-specific data, and other information that the client needs to communicate with the server over SSL. The server also sends its own certificate, and if the client is requesting a server resource that requires client authentication, the server requests the client's certificate.
- The client uses the information sent by the server to authenticate the server. If the server cannot be authenticated, the user is warned of the problem and informed that an encrypted and authenticated connection cannot be established.
- Using all data generated in the handshake thus far, the client creates the pre-master secret for the session, encrypts it with the server's public key (obtained from the server's certificate, sent in step 2), and then sends the encrypted pre-master secret to the server.
- The server uses its private key to decrypt the pre-master secret; both the client and the server then perform a series of steps to generate the master secret from the pre-master secret.
- Both the client and the server use the master secret to generate the session keys, which are symmetric keys used to encrypt and decrypt information exchanged during the SSL session and to verify its integrity.
- The client sends a message to the server informing it that future messages from the client will be encrypted with the session key. It then sends a separate (encrypted) message indicating that the client portion of the handshake is finished.
- The server sends a message to the client informing it that future messages from the server will be encrypted with the session key. It then sends a separate (encrypted) message indicating that the server portion of the handshake is finished.
- The SSL handshake is now complete and the session begins. The client and the server use the session keys to encrypt and decrypt the data they send to each other and to validate its integrity.
- This is the normal operation condition of the secure channel. At any time, due to internal or external stimulus (either automation or user intervention), either side may renegotiate the connection, in which case, the process repeats itself.
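The essence of the pre-master-secret and key-generation steps above can be sketched as follows. This is a toy illustration and deliberately NOT the real TLS pseudorandom function; it only demonstrates that two parties holding the same pre-master secret and nonces independently derive identical key material.

```python
import hashlib
import hmac
import os

# Toy sketch (NOT the real TLS PRF): both ends feed the same pre-master
# secret and handshake nonces into a keyed hash, so each side derives the
# same symmetric session key material without it ever crossing the wire.
pre_master = os.urandom(48)
client_random, server_random = os.urandom(32), os.urandom(32)

def derive(secret, seed, length=32):
    return hmac.new(secret, b"key expansion" + seed, hashlib.sha256).digest()[:length]

client_side = derive(pre_master, client_random + server_random)
server_side = derive(pre_master, client_random + server_random)
assert client_side == server_side    # matching symmetric session keys
```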
Attacks on SSL
The SSL 3.0 cipher suites have a weaker key derivation process: half of the master secret that is established is fully dependent on the MD5 hash function, which is not resistant to collisions and is therefore not considered secure. In October 2014, a vulnerability in the design of SSL 3.0 was reported, which makes the CBC mode of operation with SSL 3.0 vulnerable to a padding attack.
Renegotiation attack
A vulnerability in the renegotiation procedure was discovered in August 2009 that can lead to plaintext injection attacks against SSL 3.0 and all then-current versions of TLS. It allows an attacker who can hijack an HTTPS connection to splice their own requests into the beginning of the conversation the client has with the web server.
Version rollback attacks
An attacker may succeed in influencing the cipher suite selection in an attempt to downgrade the cipher suite strength, to use either a weaker symmetric encryption algorithm or a weaker key exchange. In 2012 it was demonstrated that some extensions to the original protocols are at risk: in certain circumstances they could allow an attacker to recover the encryption keys offline and to access the encrypted data.
POODLE attack
On October 14, 2014, Google researchers published a vulnerability in the design of SSL 3.0, which makes cipher block chaining (CBC) mode of operation with SSL 3.0 vulnerable to the padding attack (CVE-2014-3566). They named this attack POODLE (Padding Oracle On Downgraded Legacy Encryption).
BERserk attack
On September 29, 2014 a variant of Daniel Bleichenbacher’s PKCS#1 v1.5 RSA Signature Forgery vulnerability was announced by Intel Security: Advanced Threat Research. This attack, dubbed BERserk, is a result of incomplete ASN.1 length decoding of public key signatures in some SSL implementations, and allows a MITM attack by forging a public key signature.
Heartbleed Bug
The Heartbleed bug is a serious vulnerability in the popular OpenSSL cryptographic software library, affecting versions 1.0.1 to 1.0.1f. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the data payloads. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).
Friday, 20 February 2015
Trust within the PKI
The Public Key Infrastructure is the backbone of eCommerce on the Internet; however, it is something that most users don't understand.
Starting from a general user perspective: when doing anything that requires confidentiality of information, they are told to look for the padlock in the browser and check for https at the start of the URL. They are told to trust the padlock.
What the padlock means is two fold:
- The web server has had its identity authenticated
- The communication with the server is protected
Here I want to discuss what it means when we are told the web server has been authenticated.
- Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be.
The owner (admin) of the web server has gone to a certification authority (CA) and proved the identity of the server to the satisfaction of the CA. Once the CA is happy with the identity, it will issue a digital certificate containing the name of the server, the public key of the server and a signature of the CA. The signature is a block of text that has been encrypted with the private key of the issuing CA.
In English: we trust the identity of the server based on a 3rd party verifying the identity of the server to an indeterminable standard. The 3rd party (the issuing CA) is an organisation we likely know nothing about, which has used a process to determine the identity that we also know nothing about. The issuing CA could have done a simple check, such as the requester responding to an email sent to an address on the application, or a detailed check of identity, such as papers of incorporation of an organisation, financial background etc.
Basically we become very trusting because someone we don't know has said the web server is trustworthy. This seems very bad on the face of it; however, with the PKI we actually have a method of getting some assurances, although it does rely on trusting someone at some point.
With the PKI, there are root CA organisations that, through checks based on the IETF and ITU recommendations on PKI infrastructure, have proved their identity to operating system and browser vendors and consequently have had their public keys included in the operating system and browser. Each root CA then performs checks on its child CAs against the recommendations to verify their identity. This continues with each CA in the chain verifying the next link, until the CA that issued a certificate to the web server is verified by its parent CA, creating a tree hierarchy. Thus the chain of trust is built up.
Back to the certificates: a public key on a certificate allows a message encrypted with the corresponding private key to be read. This way the public keys of the root CA certificates in the browser can be used to read the signatures on the certificates issued to their child CAs. In turn, each certificate issued along the chain can be verified and authenticated, meaning the certificate issued to the web server can be verified and the identity of the server authenticated.
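A toy sketch of this chain verification follows, using tiny textbook RSA numbers; everything here, including the `www.example.com` subject, is hypothetical and nothing like real key sizes or certificate formats.

```python
import hashlib

# Toy chain of trust: root signs the intermediate CA, which signs the server.
def make_key(p, q, e=17):
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e), pow(e, -1, phi)           # (public key, private exponent)

def sign(message, pub, priv):
    n, _ = pub
    h = int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n
    return pow(h, priv, n)

def verify(message, sig, pub):
    n, e = pub
    h = int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n
    return pow(sig, e, n) == h

root_pub, root_priv = make_key(61, 53)       # shipped with the browser
ca_pub, ca_priv = make_key(67, 71)           # intermediate (child) CA
server_pub, _server_priv = make_key(53, 59)  # the web server's key pair

ca_cert = {"body": f"CA|{ca_pub}",
           "sig": sign(f"CA|{ca_pub}", root_pub, root_priv)}
server_cert = {"body": f"www.example.com|{server_pub}",
               "sig": sign(f"www.example.com|{server_pub}", ca_pub, ca_priv)}

# Walk the chain: the root key verifies the CA, the CA key verifies the server.
assert verify(ca_cert["body"], ca_cert["sig"], root_pub)
assert verify(server_cert["body"], server_cert["sig"], ca_pub)
```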
PKI Trust chain
So, providing the trust model is upheld, with no certificates issued without the identity of the receiving entity being checked thoroughly, we can rely on the authentication of the entity. Therein lies the problem: we need robustness in the PKI to prevent fraudulent certificates being issued, or CAs being hacked and certificates issued without proper authorisation.
Thursday, 19 February 2015
Internet connected toys: what is the risk?
It has been announced that Mattel is partnering with US start-up ToyTalk to develop Hello Barbie (https://fortune.com/2015/02/17/barbies-comeback-plan/), which will have two-way conversations with children using a speech-recognition platform connected across the Internet.
Initial thoughts were: is this hackable, and could it be used to abuse or groom children? Embedded platforms are often limited in their ability to deploy strong countermeasures such as encryption due to limited processing power (despite modern processors becoming more powerful), and they are difficult to update if a vulnerability is discovered.
A similar toy, Vivid Toy's 'Cayla' Talking Doll, has been found to be vulnerable to hacking by security researcher Ken Munro from Pen Test Partners (http://www.techworm.net/2015/01/vivid-toys-cayla-talking-doll-vulnerable-hacking-says-security-researcher.html).
The risk of abuse of children through insecure internet connected devices was demonstrated by a baby monitor being hacked and abuse being shouted at a baby (https://www.yahoo.com/parenting/nanny-freaks-as-baby-monitor-is-hacked-109405425022.html). The device from Foscam has since been patched, but a search of Shodan shows devices still susceptible after the patch was made available, as deployed devices were not updated.
The toy will listen to the child's conversation and adapt to it over time, according to the manufacturer - so, for instance, if a child mentions that they like to dance, the doll may refer to this in a future chat. The device relies on a WiFi connection and the speech is likely to be processed by a remote facility. Listening devices have already proved to be controversial for privacy, as demonstrated by Samsung (http://www.theregister.co.uk/2015/02/09/samsung_listens_in_to_everything_you_say_to_your_smart_tellie/). The doll is not likely to have a sophisticated control panel, so will the connection to WiFi be through WiFi Protected Setup (WPS)? This has already been the subject of attacks (http://null-byte.wonderhowto.com/how-to/hack-wi-fi-breaking-wps-pin-get-password-with-bully-0158819/).
Could the toy be used for abuse? I think the answer is that someone will learn how to attack the toy. Whether it will be used in real life is another matter, but the impact if it were is such that I hope security has been designed in from the start.
What could be done?
Mattel has not given many details on the security measures deployed to protect the toy and the information recorded. The information could be considered PII and therefore subject to data protection laws.
The question is: should there be a mandatory level of protection that manufacturers have to meet for internet connected devices that are to be used by children or could potentially pick up sensitive information? Should a kitemark scheme be introduced to show the manufacturer has met minimum levels of due care and due diligence?
Wednesday, 18 February 2015
Penetration testing as a tool
When protecting your organisation's assets, penetration testing by an ethical hacker is a useful tool in the information security team's arsenal. Penetration testing is used to test the security countermeasures that the organisation has deployed to protect its infrastructure (both physical and digital), its employees and its intellectual property. Organisations need to understand the limitations of penetration testing and how to interpret the results in order to benefit from the testing.
It may be used both proactively, to determine attack surfaces and the susceptibility of the organisation to attack, and reactively, to determine how widespread a vulnerability is within the organisation or whether remediation has been implemented correctly.
Penetration testing is a moment-in-time test; it indicates the potential known vulnerabilities in the system at the time of testing. A test that returns no vulnerabilities in the target system does not necessarily mean the system is secure. An unknown vulnerability could exist in the system that tools are not aware of, or that the tool itself is not capable of detecting. This can lead to the organisation having a false sense of security. I refer you to the case of Heartbleed, which existed in OpenSSL for two years before being discovered and may have been a zero day prior to the public announcement.
A penetration test carried out by a good ethical tester will include the use of a variety of tools in both automatic and manual testing modes, driven by a methodology that ensures attack vectors are not overlooked, mixed with the knowledge and expertise of the tester. A tool is only as good as the craftsman wielding it.
The term ethical hacker or tester means that they will conduct authorised tests within the agreed scope to the highest levels of ethical behaviour; they will not use information obtained for their own purposes or financial gain, or exceed the agreed limits.
An organisation needs to plan its use of penetration testing carefully in order to maximise the benefits.
It can be part of an Information Security Management System and will typically be used in the following areas:-
- Risk management: Determining vulnerabilities within the organisation and the attack surface area
- Vulnerability management: Detecting the presence of vulnerabilities within the organisation, determining the effectiveness of remediation
- Assurance audit: Testing implemented countermeasures
- Regulatory compliance: Part of auditing to determine controls are implemented
The testing strategy has to be developed to meet the organisation's requirements, which should be driven by its mission objectives and risk appetite. It needs to be cost effective, in that the testing delivers results by giving some assurance on the attack surface area, that the vulnerability management programme is effective and that controls are working. The frequency of testing will depend on factors such as how dynamic the organisation is: for example, is the footprint of the organisation evolving the whole time, are there frequent changes to infrastructure and applications, or is it more or less constant with no changes? It will also depend on regulatory and standards compliance activities; the PCI DSS specifies at least quarterly internal and external vulnerability scanning combined with annual penetration testing. The frequency could also be driven by an organisation being a high-profile, controversial target (consider how many attacks the NSA must contend with). These days there is no such thing as security through obscurity, and attackers are not just targeting well-known URLs; if you were a small unknown company you could have avoided being attacked a decade ago, but now, with automated tools scanning large swathes of IP addresses, your digital footprint will be scanned and probably attacked.
A recommendation for many organisations could be a monthly internal compliance scan, with quarterly internal and external vulnerability scans conducted by a qualified tester and not just automated scans, plus an annual penetration test on the external infrastructure and applications conducted by a skilled tester. There will be a need to conduct scans when significant changes are implemented on the infrastructure or within applications. If your internal network is segmented into security zones then regular testing of the configuration is required. Social engineering testing, such as physical entry, should be conducted annually, along with an annual phishing test of employees. When new vulnerabilities are reported, a scan of the infrastructure and applications may be required to determine the extent of the vulnerability within the organisation; typically this would be for high and critical rated vulnerabilities. As new controls and remediation activities are completed, testing should be conducted to ensure the work has been completed and the vulnerability has been remediated sufficiently. The level of testing could be determined by risk and business impact, with lower-rated systems being vulnerability scanned and high or critical systems undergoing full penetration testing. Organisations may consider using a number of test companies to ensure the widest possible breadth of knowledge is brought against the attack surface, giving the best chance of identifying vulnerabilities.
However, management should be aware that there is always a residual risk that a vulnerability has remained undetected, and should not assume a clean test report is a true indication of the security posture of the organisation. Being vigilant and monitoring for signs of intrusion are part of the security profile organisations should be deploying.
Tuesday, 17 February 2015
Firmware attacks
I read an article on the use of hard drive firmware to hide malware (http://www.reuters.com/article/2015/02/17/us-usa-cyberspying-idUSKBN0LK1QV20150217); to me it is not that surprising that it is being used as an attack vector. Firmware in general is a good target; look at the USB controller hacks (https://srlabs.de/badusb/) to see what can be done if a device can be attacked.
Firmware is software, but often stored in rewritable memory; as software, it is open to the same bugs and problems as all software and hence has vulnerabilities. Like other software, it can be rewritten to add additional functionality and written back to the storage media. Although vulnerabilities are being discovered in firmware, the firmware in devices is not updated as often as it should be. The slowness in updating firmware in network devices contributed to the number of devices still being detected as susceptible to Heartbleed in OpenSSL after most software was patched (http://www.technologyreview.com/news/526451/many-devices-will-never-be-patched-to-fix-heartbleed-bug/).
What surprised me is that the article emphasised the need to have access to the source code. Firmware, like software, can be reverse-engineered; I recall in my early days with a BBC Micro Model B stepping through code in the EPROMs that were used to extend the functionality of the computer.
A good programmer could step through the firmware in a hard drive or any device and determine the functionality of the routines. It should be possible to find a suitable call that could be overwritten to point to malware and then return execution back to the original routine. This is helped by the fact that the firmware is unlikely to completely fill the storage chips, leaving room for malicious code to be appended and stored.
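Conceptually, such a hook can be sketched in a few lines. This is a plain Python analogy of redirecting a routine so injected code runs first before chaining back to the original, not real firmware code; the routine names are hypothetical.

```python
# Conceptual sketch (not real firmware): a firmware implant overwrites one
# entry point so injected code runs first, then control returns to the
# original routine and behaviour appears unchanged.
def firmware_read(block):              # stand-in for an original firmware routine
    return block * 2

injected_log = []
_original_read = firmware_read         # keep a pointer to the original code

def hooked_read(block):
    injected_log.append(block)         # malicious behaviour executes first...
    return _original_read(block)       # ...then chains back to the original

firmware_read = hooked_read            # the overwritten entry point

result = firmware_read(21)
print(result, injected_log)            # the caller sees the expected result
```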
The concept of attacking through the firmware in hard drives, which is executed as a computer boots via BIOS or EFI, can be expanded to any device that has code executed during initialisation, including graphics cards.
When firmware with an autorun capability is combined with the universal plug and play capability that operating systems have, it is surprising that more attacks are not executed through this attack vector.
Think about all the devices that have firmware within your network, or that your users connect to the network. Can you be sure that they are not introducing malware into the organisation? Consider a firewall or a multifunction device containing a worm that infects the network when it is plugged in or restarted; the possibilities and permutations are limited only by ingenuity.
Firmware is software but often stored in rewritable memory, as software it is open to the same bugs and problems as all software and hence have vulnerabilities. It also like software can be written to add additional functionality and rewritten to the storage media. All though vulnerabilities are being discovered in firmware, the firmware in devices is not updated as often as it should. The slowness in updating firmware in network devices contributed to the number of devices being detected as being susceptible to heartbleed in OpenSSL after most software was patched (http://www.technologyreview.com/news/526451/many-devices-will-never-be-patched-to-fix-heartbleed-bug/).
What surprised me is the article emphasised the need to have access to the source code. Firmware like software can be re-engineered, I do recall in my early days with a BBC Micro model B, stepping through code in the EPROMS that were used to extend the functionality of the computer.
Monday, 16 February 2015
Browser Cryptography
Introduction
Back in October I wrote this for my employer's blog. This February the Payment Card Industry Security Standards Council released a bulletin regarding the use of SSL for data protection on the Internet. In the bulletin, the Council states that SSL – a protocol for providing secure communications – is no longer acceptable for secure transactions. This requires payment card providers to move to TLS-only encryption.
Browser Cryptography
In October 2014 we saw a modern attack (POODLE) on a 1990s security protocol (SSLv3), which highlights the fact that, although we consider computing to be a fast-moving field, the need to maintain compatibility with legacy applications and devices gives rise to security problems. E-commerce is conducted using secure HTTP (HTTPS), which uses secure sockets layer (SSL) or transport layer security (TLS). These are cryptographic protocols that provide protection. So, what is cryptography and what does it provide in terms of security?
Cryptography and Encryption
Cryptography (the science of ciphers) and encryption (the process of transforming plaintext into ciphertext) have been in the news for the last few years following revelations that the US's NSA, through NIST, weakened random number generator algorithms. Spy and intelligence agencies like the NSA have broken or bypassed encryption, and there have been various attacks on cryptographic protocols and implementations, such as Heartbleed and POODLE.
Security is often considered the product of the security triad of confidentiality, integrity and availability. However, it also encompasses a number of other principles that include authorisation, identification and non-repudiation. Cryptography and encryption are some of the tools we can use to provide security.
Although encryption is the main control on confidentiality, and is used by corporate and government bodies to protect our personal data – including credit cards, sensitive data and intellectual property – cryptography is not widely understood.
Cryptography is all around us: we all use it every day while we browse the internet; we rely on it to protect the data we send and receive from secure sites. We know that we should look for the padlock symbol, but very few of us know what it means when it is displayed in a browser. The padlock means the server has been identified by a certificate trusted by one of the 80+ root certificates on your machine, and that encryption is being used. Although the strength of the encryption is not taken into account, it indicates that some form of encryption is deployed between the browser and the server.
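That root store can be inspected directly. As a sketch, Python's standard `ssl` module will load the operating system's trusted CA certificates and report how many there are; the exact counts vary by machine and OS.

```python
# Sketch: inspect the trusted root store the browser padlock relies on.
# create_default_context() loads the system's trusted CA certificates;
# cert_store_stats() reports how many certificates are present.
import ssl

ctx = ssl.create_default_context()
print(ctx.cert_store_stats())  # e.g. counts for 'x509', 'x509_ca', 'crl'
```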
SSL and TLS are the protocols used by HTTPS to provide the means of authenticating a server via digital certificates and the public key infrastructure. They are also used to encrypt the communication using a hybrid of symmetric algorithms that encode the data and asymmetric algorithms that ensure the key for the symmetric algorithm is on both the client and the server.
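The hybrid model can be sketched with a toy example: an asymmetric key agreement (here a deliberately tiny Diffie-Hellman, nothing like real TLS parameters) establishes a shared secret, which then keys a symmetric cipher (here a hash-derived XOR keystream standing in for RC4 or AES). All values are illustrative only.

```python
# Toy sketch of the hybrid model used by SSL/TLS: asymmetric key
# agreement produces a shared secret, which keys a symmetric cipher.
import hashlib

p, g = 0xFFFFFFFB, 5      # toy Diffie-Hellman parameters (far too small for real use)
a, b = 123456, 654321     # private values held by client and server

A = pow(g, a, p)          # client's public value, sent to the server
B = pow(g, b, p)          # server's public value, sent to the client

shared_client = pow(B, a, p)   # both sides derive the same secret
shared_server = pow(A, b, p)
assert shared_client == shared_server

def keystream_xor(key: int, data: bytes) -> bytes:
    """Symmetric 'cipher': XOR with a hash-derived keystream."""
    stream = hashlib.sha256(str(key).encode()).digest()
    return bytes(c ^ stream[i % len(stream)] for i, c in enumerate(data))

ct = keystream_xor(shared_client, b"card number 4111 1111 1111 1111")
pt = keystream_xor(shared_server, ct)   # decrypting with the same secret
print(pt)
```

The real protocols use the same shape – public-key mathematics to agree a key, fast symmetric encryption for the bulk data – just with vastly stronger algorithms and parameters.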
Identification issues
Identification is provided by digital certificates that demonstrate that the URL and the server delivering it are correctly matched… but that does not mean you are talking to the server you want! Think about www.it-governance.co.uk compared to www.itgovernance.co.uk: both can be identified by digital certificates, but only one is the genuine article. Scammers will set up secured sites with common misspellings in an attempt to mislead users. It has been known for a fake site to be rated more highly on search engines than the genuine site itself, enabling scammers to commit fraud against the unaware.
Ciphers and cipher suites
Ciphers (the algorithms used in the encryption process) on servers are graded by strength, which is based on the work factor required to break them. The lower the effort required to break the cipher, the weaker the cipher is rated. Generally, ciphers are classified as weak, medium or strong. In actuality, it is the cipher suites that are examined – these consist of the different algorithms used to create a secure tunnel. A cipher suite is a named combination of authentication, encryption, message authentication code (MAC) and key exchange algorithms used to negotiate the security settings for a network connection using the transport layer security (TLS) or secure sockets layer (SSL) network protocols.
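The suites a modern client is prepared to offer can be listed with Python's standard `ssl` module, as a rough illustration; each entry names the combination of algorithms described above.

```python
# Sketch: list the cipher suites a default Python client would offer.
# Each suite bundles key-exchange, bulk-cipher and MAC/AEAD algorithms.
import ssl

ctx = ssl.create_default_context()
for suite in ctx.get_ciphers()[:5]:      # first few, for brevity
    print(suite["name"], "-", suite["protocol"])
```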
Samples of cipher suites:
- SSL_CK_RC4_128_WITH_MD5
- SSL_CK_RC4_128_EXPORT40_WITH_MD5
- SSL_CK_RC2_128_CBC_WITH_MD5
- SSL_CK_RC2_128_CBC_EXPORT40_WITH_MD5
- SSL_CK_IDEA_128_CBC_WITH_MD5
- SSL_CK_DES_64_CBC_WITH_MD5
- SSL_CK_DES_192_EDE3_CBC_WITH_MD5
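These SSLv2-era names can be pulled apart mechanically, as a sketch: the token after WITH is the hash/MAC algorithm, and everything between the SSL_CK_ prefix and WITH describes the bulk cipher, its key size and mode.

```python
# Sketch: split SSLv2-era cipher suite names into cipher and MAC parts.
def split_suite(name: str):
    body = name[len("SSL_CK_"):]          # drop the SSLv2 prefix
    cipher, mac = body.split("_WITH_")    # cipher/keysize/mode vs hash
    return cipher, mac

for s in ["SSL_CK_RC4_128_WITH_MD5", "SSL_CK_DES_192_EDE3_CBC_WITH_MD5"]:
    print(split_suite(s))
```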
Cipher suites can include a NULL cipher, meaning the data being transferred is not encrypted at all. Although in the dark ages of the early noughties a browser might have displayed a padlock despite the data being transmitted as plaintext, modern browsers will display a warning about plaintext, though a padlock may still be displayed in some browsers.
Servers and clients have a range of cipher suites installed by default; the actual ones installed depend on the type of machine, the vendor and other criteria. As clients and servers alike do not know in advance which other machines they may connect to, they need a range of possible suites installed so that there is a chance of a common denominator between the two machines.
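The negotiation can be sketched as the server walking its preference-ordered list and picking the first suite the client also offers; the suite names below are illustrative, not real registry names.

```python
# Sketch of cipher suite negotiation: the server picks the first suite
# from its preference-ordered list that the client also offered.
def negotiate(server_prefs, client_offer):
    return next((s for s in server_prefs if s in client_offer), None)

server_prefs = ["AES256-GCM", "AES128-CBC", "RC4-MD5"]   # hypothetical names
old_client = {"RC4-MD5", "DES-CBC"}                      # only weak suites in common

print(negotiate(server_prefs, old_client))   # RC4-MD5: the weak common denominator
```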
Encryption issues
Ciphers cause an overhead in transmitting data (such as latency) due to the need to encrypt and decrypt data as it is transmitted and received. Additional headers and footers are often appended to units of data, thus decreasing the throughput of useful data over a constant-speed connection. As strong ciphers often mean introducing an increased latency, the weakest common denominator is often selected during the negotiation of common cipher suites to reduce the overhead effect. This, however, also reduces the work factor needed to break the encryption, opening the communication up to eavesdropping by attackers.
It is also possible to force servers to downgrade to weaker protocols and ciphers through version rollback and downgrade attacks. These attacks force the server to use more vulnerable protocols and ciphers, which helps an attacker compromise the communication using attacks such as POODLE.
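A rollback can be sketched as follows: a man-in-the-middle blocks the handshakes for strong versions, so the client's fallback logic lands on the oldest version both ends still accept. The version strings are illustrative.

```python
# Sketch of a version-rollback attack: the attacker blocks handshakes
# for strong protocol versions until the client falls back to a weak one.
def fallback(client_order, server_supports, attacker_blocks):
    for v in client_order:
        if v in server_supports and v not in attacker_blocks:
            return v
    return None

client_order = ["TLS1.2", "TLS1.1", "TLS1.0", "SSLv3"]   # client fallback order
server_supports = {"TLS1.2", "TLS1.0", "SSLv3"}

print(fallback(client_order, server_supports, set()))                  # TLS1.2 normally
print(fallback(client_order, server_supports, {"TLS1.2", "TLS1.0"}))   # SSLv3 under attack
```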
This means that, in order to guarantee the use of strong ciphers, one end of the negotiation must offer only strong ciphers. In practice this requires the server to be configured with only strong, non-vulnerable cipher suites installed.
Recommendations
The following actions are recommended to ensure a strongly protected server when using SSL/TLS encryption.
- Weak- or medium-strength ciphers must not be used:
- No NULL cipher suites, as no encryption is used.
- No anonymous Diffie-Hellman, as it provides no authentication.
- Weak- or medium-strength protocols must be disabled:
- SSLv2 must be disabled, due to known weaknesses in protocol design.
- SSLv3 must be disabled, due to known weaknesses in protocol design.
- Renegotiation must be properly configured:
- Insecure renegotiation must be disabled, as it enables man-in-the-middle (MiTM) attacks.
- Client-initiated renegotiation must be disabled, due to denial-of-service vulnerability.
- X.509 certificate key length must be strong:
- If RSA or DSA is used, the key must be at least 2048 bits.
- X.509 certificates must be signed only with secure hashing algorithms:
- Not signed using MD5 hash, due to known collision attacks on this hash.
- Keys must be generated with proper entropy.
- Secure renegotiation should be enabled.
- MD5 should not be used, due to known collision attacks.
- RC4 should not be used, due to crypto-analytical attacks.
- Server should be protected from BEAST attack.
- Server should be protected from CRIME attack; TLS compression must be disabled.
- Server should support forward secrecy.
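Several of these recommendations can be applied in one place when the server's TLS context is built. As a sketch using Python's standard `ssl` module (the cipher string is illustrative, not a definitive hardening profile):

```python
# Sketch: a server-side SSLContext configured along the lines of the
# recommendations above - TLS only, no SSLv2/v3, no TLS compression
# (CRIME), and a restricted strong-cipher string.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2     # rules out SSLv2/v3 and early TLS
ctx.options |= ssl.OP_NO_COMPRESSION             # mitigates CRIME
ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!eNULL:!MD5:!RC4")  # forward secrecy; no NULL/MD5/RC4
print(ctx.minimum_version)
```

A real deployment would also load the server certificate and key, but the version, compression and cipher restrictions above are where the weak-protocol recommendations take effect.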
It is important to note that some old browsers, such as IE 6, do not support TLS by default, and if SSL is removed from the server these clients will no longer be able to connect.