Thursday, 20 December 2012

Plastic credit – whose fault? (a potted history of the payment card)


When you are recovering from the expenditure of the holiday period and the credit card bill arrives, you may wonder who is responsible for the plastic we use to make purchases.

Credit in various forms and mechanisms has been around since the early days of people conducting transactions. However, the inventor of the first bank-issued credit card is widely regarded to be John Biggins of the Flatbush National Bank of Brooklyn, New York. In 1946, Biggins launched the "Charge-It" program between bank customers and local merchants.

In 1950, Diners Club issued its credit card in the United States. The card was the idea of Diners Club founder Frank McNamara and was intended to pay restaurant bills: a customer could eat without cash at any restaurant that accepted the card, Diners Club would pay the restaurant, and the cardholder would repay Diners Club. The Diners Club card was at first technically a charge card rather than a credit card, since the customer had to repay the entire amount when billed.

American Express issued its first credit card in 1958, and Bank of America followed with the BankAmericard bank credit card later that year.

By 1959 many financial institutions had begun credit programs. At the same time, card issuers were offering the added service of "revolving credit", which gave the cardholder the choice of paying off the balance in full or carrying a balance and paying a finance charge.

During the 1960s, many banks joined together to form "card associations", a new concept that allowed the exchange of credit card transaction information, otherwise known as interchange. The associations established rules for authorisation, clearing and settlement, as well as the rates banks were entitled to charge for each transaction, and they also handled the marketing, security and legal aspects of running the organisation.

The two best-known card associations were National BankAmericard and Master Charge, which eventually became Visa and MasterCard.

By 1979, electronic processing was progressing. Dial-up terminals and magnetic stripes on the back of credit cards were introduced, enabling retailers to swipe a customer's card through an electronic terminal. These terminals could access the issuing bank's cardholder information, and the new technology returned authorisations and processed settlements in a matter of 1-2 minutes.

Track 1 of the magnetic stripe was designed to hold 79 characters, one less than the 80 columns on punch cards; Track 2 used 40 characters containing just the essential data required for a transaction, providing faster communication over dial-up modems by reducing the amount of data sent.
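As a rough illustration of how compact the Track 2 layout is, here is a minimal sketch parsing the standard ISO/IEC 7813 Track 2 fields (PAN, expiry, service code, discretionary data); the sample string is fabricated and the code is illustrative only.

```python
# Illustrative sketch only: splitting an ISO/IEC 7813 Track 2 string into fields.
# The sample data below is made up for demonstration purposes.

def parse_track2(track: str) -> dict:
    """Split a Track 2 string into its standard fields."""
    # Strip the start sentinel ';' and end sentinel '?' if present.
    body = track.strip().lstrip(';').rstrip('?')
    pan, _, rest = body.partition('=')           # '=' separates the PAN from the rest
    return {
        "pan": pan,                              # primary account number (up to 19 digits)
        "expiry": rest[0:4],                     # YYMM
        "service_code": rest[4:7],               # 3-digit service code
        "discretionary": rest[7:],               # issuer discretionary data
    }

if __name__ == "__main__":
    sample = ";4929123456789012=25121011234567890?"   # fabricated example
    print(parse_track2(sample))
```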

Chip and PIN (the name given to the initiative in the UK) grew out of smartcard technology: an embedded microchip on the card, accessed through a four-digit Personal Identification Number (PIN), is used to authorise payments. Users no longer had to hand over their cards to a clerk; instead they inserted the card into a chip and PIN terminal and entered their unique four-digit code, a message was sent via a modem to an acquirer, which checked whether the PIN was correct and, if so, authorised the transaction.

In 1990 France introduced chip and PIN cards based on the French-only B0' standard (for domestic use only), and the move cut card fraud there by more than 80%.

The technology was trialled in the UK from May 2003 in Northampton; its success there, and the high frequency of debit card use at the time, allowed HBOS to introduce the first cashback scheme. Chip and PIN was rolled out across the UK in 2004 as the dominant way for the public to pay for goods, with the advertising slogan 'Safety in Numbers' highlighting the personalised number at the heart of the system. More than 100 countries now use the technology, online and offline, establishing it as a dominant form of payment used by people all over the world.

References

http://www.fidelitypayment.com/resources/what_are_merchant_services
http://wiki.answers.com/Q/Who_is_John_Biggins_of_the_Flatbush_National_bank_in_Brooklyn_NY
http://www.cardsave.net/blog/the-history-of-chip-and-pin/
http://en.wikipedia.org/wiki/Chip_and_PIN
http://www.theukcardsassociation.org.uk/Advice_and_links/index.asp

Wednesday, 19 December 2012

November ADSL Router Analysis

Analysis of the log files from my ADSL router for November: the level of events was similar to October and the USA was again the main source of events. China has not been a detected source IP since August, while Turkey has been consistent throughout the year so far.


The detected events broke down by country as follows:

Country        Source IPs   No. of attacks from country
USA            2            20
Turkey         16           16
Germany        1            6
Azerbaijan     1            1
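For anyone curious how such a breakdown can be produced, below is a minimal sketch of the kind of tally involved, not my exact process: it assumes the router log has been exported to a text file and that a local GeoLite2 country database is available, and the log format, regex and file names are hypothetical.

```python
# Minimal sketch: tally router-log source IPs by country using the geoip2 library.
# Log format, file names and the SRC= field are hypothetical examples.
import re
from collections import Counter, defaultdict

import geoip2.database   # pip install geoip2
import geoip2.errors

IP_RE = re.compile(r"SRC=(\d{1,3}(?:\.\d{1,3}){3})")   # hypothetical syslog field

def tally(log_path: str, mmdb_path: str) -> dict:
    events = Counter()        # events per country
    ips = defaultdict(set)    # distinct source IPs per country
    with geoip2.database.Reader(mmdb_path) as reader, open(log_path) as log:
        for line in log:
            match = IP_RE.search(line)
            if not match:
                continue
            ip = match.group(1)
            try:
                country = reader.country(ip).country.name or "Unknown"
            except geoip2.errors.AddressNotFoundError:
                country = "Unknown"
            events[country] += 1
            ips[country].add(ip)
    return {c: (len(ips[c]), n) for c, n in events.items()}

if __name__ == "__main__":
    for country, (src_ips, attacks) in tally("router-nov.log", "GeoLite2-Country.mmdb").items():
        print(f"{country}: {src_ips} source IPs, {attacks} events")
```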

October ADSL Router Analysis

Analysis of the log files from my ADSL router for October: there was an increase compared to September, although detected events did not reach the levels seen in July.


The detected events broke down by country as follows:

Country        Source IPs   No. of attacks from country
USA            6            29
Turkey         16           16
Saudi Arabia   1            12
Germany        1            6

Thursday, 13 December 2012

Vulnerability scans & false positives: The importance of sanitising input

One of the contentious issues with vulnerability scanning, particularly of web applications, is false positives: the scan indicates that a vulnerability exists when in reality there is none, and a response to the scanning has simply been incorrectly classified. These are type I errors - a result that indicates a given condition is present when it actually is not.

A type II error is where a condition is present but is not reported. This type of error is a false negative and is often the worse condition, as it creates a false sense of security because vulnerabilities were not identified.

Web applications are frequently vulnerable to injection exploits, covering vulnerabilities such as SQL injection, cookie manipulation, command injection, HTML injection and cross-site scripting.

To prevent injection it is recommended that the application uses a combination of parameterised statements, escaping, pattern checking, database permissions and input validation. The application should also check that input values are within the expected range and handle unexpected values in a consistent manner. Generated error messages should not give away information useful to an attacker, but should help a user enter the required input, improving the user experience of the web site.
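As a minimal illustration of two of those controls, the sketch below combines whitelist input validation with a parameterised SQL statement, using Python's built-in sqlite3 module; the table and column names are invented for the example.

```python
# Sketch: whitelist validation plus a parameterised query (Python's sqlite3).
# The 'users' table and its columns are invented for illustration.
import re
import sqlite3

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")   # whitelist of allowed characters

def find_user(conn: sqlite3.Connection, username: str):
    # Reject anything outside the expected range instead of passing it on.
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("username must be 3-32 letters, digits or underscores")
    # The '?' placeholder keeps the value out of the SQL text entirely,
    # so it can never be interpreted as SQL.
    cur = conn.execute("SELECT id, username FROM users WHERE username = ?", (username,))
    return cur.fetchone()
```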

Testing for injection vulnerabilities is done by modifying GET and POST requests, as well as cookies and other persistent data stores: the scanner changes the content to inject a piece of additional code before sending the modified request to the web application, thereby attempting an injection attack. The additional code can represent a SQL command, HTML code or OS commands, depending on the attack being simulated.

The scanner then examines the responses from the web application to determine whether the injection attempt succeeded, looking for evidence of successful execution of the code. That evidence can be a delay in the return of the response, the injected input appearing in the response where the browser would interpret it as HTML code, a detectable error message, or data retrieved by the simulated attack.
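A crude version of that check can be sketched as follows: inject a marker payload into a parameter, then look in the response for the unencoded marker, an error signature, or an unusually slow response. The target URL, parameter name and error signatures below are hypothetical, and real scanners are considerably more thorough.

```python
# Very rough sketch of a reflected-injection probe using the requests library.
# URL, parameter name and error signatures are hypothetical examples.
import time
import requests

PAYLOAD = "<script>alert('xss-probe-1337')</script>"
SQL_ERRORS = ("you have an error in your sql syntax", "unclosed quotation mark")

def probe(url: str, param: str) -> dict:
    start = time.monotonic()
    resp = requests.get(url, params={param: PAYLOAD}, timeout=15)
    elapsed = time.monotonic() - start
    body = resp.text.lower()
    return {
        "reflected_unencoded": PAYLOAD.lower() in body,   # payload echoed back verbatim
        "sql_error_message": any(sig in body for sig in SQL_ERRORS),
        "slow_response": elapsed > 10,                    # possible time-based injection
    }

if __name__ == "__main__":
    print(probe("http://test.example/search", "q"))
```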

ASV (Approved Scanning Vendor) scans and other vulnerability scans generate a large number of findings when testing for the various injection techniques. Each finding can indicate a real vulnerability or be a false positive.

Often the scan detects that the response has the injected code embedded within it even though the injected code failed to execute in the intended manner, and the automated software includes these as results in its report. In reality these results are false positives, in that the attempt did fail; however, they also indicate that the inputs to the application have not been sanitised to ensure only the expected range of inputs is processed. What has happened is that the modified input has passed through the application and been included in the web server's response back to the scan engine without being filtered out.

Although these false positives can be ignored, they show that the application is not sanitising variables and values and is leaving a potential vector open for future, as yet unknown, attacks. Eliminating an attack vector by properly sanitising variables, cookies and other forms of persistent data within a web application will help protect against attacks in the future.
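Where a value does have to be echoed back to the user, encoding it for the output context removes that vector; for HTML output in Python this can be as simple as the sketch below (the page fragment is invented for illustration).

```python
# Sketch: HTML-encode user-supplied values before writing them into a page,
# so reflected input can no longer be interpreted as markup.
from html import escape

def render_search_results(query: str) -> str:
    safe_query = escape(query, quote=True)   # converts <, >, &, and quotes to entities
    return f"<p>Results for <strong>{safe_query}</strong></p>"

print(render_search_results("<script>alert(1)</script>"))
# -> <p>Results for <strong>&lt;script&gt;alert(1)&lt;/script&gt;</strong></p>
```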

The advantage of an application that correctly sanitises its input is that the number of false positives detected during vulnerability scanning is reduced: noise that may be masking a true vulnerability is removed, and the need to argue over which results are false positives is reduced, especially where ASV scans are being conducted.

A disadvantage of not sanitising the input is that whole blocks of results are often classed as false positives rather than hundreds of results being examined individually, and occasionally this means a true result is incorrectly classed as a false positive, creating a type II error. Incorrectly identifying a positive response as a false positive is worse than including all the responses to a test, as it gives a false sense of assurance when vulnerabilities appear not to have been identified.

When attempting to manually examine the results of automated testing to identify false positives, additional problems are encountered. Vulnerability scanners use their own engines to generate HTTP requests and analyse the responses, whereas the current generation of browsers is equipped with technology designed to reduce attack vectors by filtering the requests sent and the responses received. Browsers such as IE will detect and block attempts at cross-site scripting; IE has improved since version 6, and the latest versions prevent traversal attacks and the like from the URL box in the browser.

This browser behaviour means additional tools are needed to test manually for vulnerabilities. Web proxies such as WebScarab or Burp Suite are used to intercept the request objects from the browser, allowing modification before they are sent on to the server. The proxy also allows interception of the response objects before they reach the browser, so the response can be examined at the HTML source level rather than letting the browser interpret the response, display it in its pane and filter out malicious scripts.
Even with a website of only a reasonable size, there can be hundreds of results from the testing.

Eliminating the generation of responses, especially false ones, by correctly sanitising the input to the application will make scanning and reporting more efficient and reduce the time spent on false positives in the future. An organisation that looks at what is causing a false positive response to a test scenario and eliminates the cause, rather than simply ignoring the false response, will be improving its security posture and making scanning more efficient. It will be reducing the chance of a vulnerability being ignored because it was lost in the noise or wrongly classified as false.

In summary, it is important to ensure a web application correctly sanitises its input, to reduce the production of false positives and improve the effectiveness of vulnerability scanning by reducing the noise that masks important results.

Wednesday, 12 December 2012

Catching Insiders

I have discussed the insider threat a number of times, and recently came across the article Five Habits Of Companies That Catch Insiders on the Dark Reading website, which discusses the controls, or habits, that will aid in catching insiders.

The report the article was based on, Insider Threat Study: Illicit Cyber Activity Involving Fraud in the U.S. Financial Services Sector, made a number of recommendations, which I have listed here:


Behavioral and/or Business Process

  • Clearly document and consistently enforce policies and controls.
  • Institute periodic security awareness training for all employees.

Monitoring and Technical

  • Include unexplained financial gain in any periodic reinvestigations of employees.
  • Log, monitor, and audit employee online actions.
  • Pay special attention to those in special positions of trust and authority with relatively easy ability to perpetrate high value crimes (e.g., accountants and managers).
  • Restrict access to PII.
  • Develop an insider incident response plan to control the damage from malicious insider activity, assist in the investigative process, and incorporate lessons learned to continually improve the plan.
I do recommend reading the article and the report to gain a better understanding of the controls that reduce the insider threat.

A New Year BYOD hangover for employers

Researchers have released news today of a zero-day attack on the Samsung Smart 3D LED TV. While wondering how many of these will be unwrapped and installed over the Christmas holiday period, and may be susceptible to this form of attack, my thoughts turned to the information security professionals who must surely be wondering what brand new gadgets employees will be bringing into the organisation when they return to work after the holidays. Research has shown that employees, especially younger ones, will use their own devices (BYOD) and cloud services such as Dropbox even if the organisations they work for have policies banning such activities.

Every organisation should be considering policies for the use of BYOD within its environment, and needs to bear in mind that restrictive policies often fail: if employees, from senior level downwards, feel the policies interfere with doing their job and cannot see the implications of their actions for the security and governance of their employer's business, they will continue with unsanctioned behaviour as they try to meet deadlines.

Organisations need well thought out policies and procedures in place for implementing them, and employees need to be informed of, and regularly refreshed on, the policies and the implications to the organisation of breaches of information security, as part of a continual information security education programme.

Policies on the use of BYOD should outline the privacy issues affecting both the owner of the equipment and the employer, including the privacy the employee should expect when connecting their device to corporate systems. Another important section of the policy should cover what happens when a device is lost or upgraded: requirements for notifying the IT department of such circumstances need to be included, and the policy needs to consider that, if it is not possible to wipe only the corporate data, the whole device may have to be wiped, losing all of the employee's data.

The employee would need to agree to the policy before being able to use their own device. Allowing employees to use their own devices often brings advantages in terms of improved productivity and reduced expenditure; however, there are costs and negative implications for both the employee and the employer.
Topics to be covered by a policy include:
  • Device Selection
  • Encryption
  • Authentication
  • Remote Wipe Capabilities
  • Incident Management
  • Control Third-Party Apps
  • Network Access Controls
  • Intrusion Prevention / Detection Software (IPS/IDS)
  • Anti Virus - AV
  • Connectivity (Bluetooth/Wifi mobile hotspot)
It is not possible for an organisation to support every device on the market; therefore it may be necessary to limit allowed devices to a subset of those available. Selection of those devices will be a contested decision, with various camps complaining that their favourite manufacturer or OS is not included. Ensuring the list is circulated to employees and reviewing the supported devices on a regular basis will help alleviate device selection problems.

There are a large number of technical solutions available; however, the selected solution should support the organisation's aims and mission, and within the selection process, as with policy generation, it may be necessary to seek expert opinion.

There is no reason why the use of BYOD within the organisation cannot be allowed, giving greater flexibility to employees and improved productivity in a controlled environment that protects the organisation. This is far better than having employees use their own devices in an uncontrolled, and possibly unknown, manner, leaving the organisation vulnerable to a problem it is not aware of. Having a policy that supports employees makes it easier to apply sanctions to those who do not comply; no policy leaves a situation with no control, and an overly restrictive policy will often force employees to use their devices on the quiet.

Thursday, 6 December 2012

Data Protection & the EU

As part of the legal domain on the CISSP course, I discussed with the class yesterday the Data Protection requirements and how EU data protection maps closely onto the OECD data privacy principles. We also discussed the situation around transferring data from the EU to the US: the need for organisations in the US to sign up for and remain signatories to the Department of Commerce Safe Harbour agreement, whether the US PATRIOT Act trumps the Safe Harbour agreement, and whether EU companies should consider if it is prudent to transfer PII to the US under Safe Harbour if the government can read the data.

Today I found two interesting articles about this.
Neither gives a rosy feeling that there is a solution to the problem, or that there will be one in the near future.

Wednesday, 5 December 2012

Vulnerability Disclosure

I am delivering CISSP training this week, and today in class we were discussing software development security, including the role vulnerability researchers have and the effect that vulnerability disclosure with proof-of-concept code has on the development of malware. As is often the case when I am delivering the training, I found in the InfoSec news feeds a story relevant to the topics in the CISSP domains: today it was "Exploit kit authors thrive due to PoC code released by whitehats", published by Help Net Security, which discusses exactly the same point and gave weight to the discussions we had in class.

Insider Threat hits Swiss Spy Agency

In the news today, "Swiss spy agency warns CIA, MI6 over 'massive' secret data theft": a disgruntled employee steals terabytes of data. The employee became disaffected after his warnings to his employers about the operation of systems were ignored. His admin rights gave him access to a great deal of data, which he downloaded onto portable hard drives before walking out of the building.

One needs to question whether he had the "need to know" for all the data and systems, why it was possible to download vast quantities of data to portable drives, whether security checked employees leaving the building, and whether there was adequate supervision of those with elevated privileges.

There are a number of controls that should have been in place; I suspect some will now be put in place.