Tuesday, 20 August 2013

Tools Update (20th Aug)

My very irregular update on new and updated Information Security tools that I have come across or use. The tools are mainly those for PenTesting, although other tools are sometimes included. As a bit of background into how I find these tools: I keep a close watch on Twitter and other websites for updates and new releases, and I also search for pen testing and security projects on SourceForge. Some of the best sites I have found for details of new tools and releases are http://www.toolswatch.org/ & http://tools.hackerjournals.com

This update is a scanning-tool special: just two tools to mention.

Nmap 6.40

Nmap is a free and open source utility for network exploration or security auditing. The new release version 6.40 includes the following:
  • 14 new NSE scripts
  • hundreds of new OS and service detection signatures
  • a new --lua-exec feature for scripting Ncat
  • initial support for NSE and version scanning through a chain of proxies
  • improved target specification
  • many performance enhancements and bug fixes
Change log



ZMap

ZMap is a new tool created by a team of researchers from the Department of Computer Science and Engineering at the University of Michigan. It is an open-source network scanner that enables researchers to easily perform Internet-wide network studies. With a single machine and a well-provisioned network uplink, ZMap is capable of performing a complete scan of the IPv4 address space in under 45 minutes, approaching the theoretical limit of gigabit Ethernet.


Importantly, the University of Michigan researchers offer the following suggestions for those conducting Internet-wide scans, as guidelines for good Internet citizenship:
  • Coordinate closely with local network administrators to reduce risks and handle inquiries
  • Verify that scans will not overwhelm the local network or upstream provider
  • Signal the benign nature of the scans in web pages and DNS entries of the source addresses
  • Clearly explain the purpose and scope of the scans in all communications
  • Provide a simple means of opting out and honor requests promptly
  • Conduct scans no larger or more frequent than is necessary for research objectives
  • Spread scan traffic over time or source addresses when feasible

It should go without saying that scan researchers should refrain from exploiting vulnerabilities or accessing protected resources, and should comply with any special legal requirements in their jurisdictions.

Saturday, 15 June 2013

PCI DSS, Prism and backdoors

Within days of PRISM being exposed, questions were being asked about possible backdoors into operating systems and the effect these would have on PCI compliance. Some very good points have been made, and some excellent answers given, in a number of the forums around PCI compliance.

Having spent effectively the last week with minimal time to follow the news, as I have been delivering a week-long in-house information security training course in a hotel, I am now trying to catch up with the developments.

It does seem that little is known about how PRISM actually works, other than that it is a surveillance programme for monitoring communications, which in the USA is covered by the Foreign Intelligence Surveillance Court, making the collection of the data lawful within the USA. The PRISM programme appears to revolve around access to data at nine internet service providers (ISPs), who have all said that access covered by legal requests is given; however, the ISPs also say that no direct access to their data is given. Other than the articles in the press where the term 'backdoor' is being bandied about, there is no definitive proof that an actual backdoor into operating systems was being used.

A concern of those who have been discussing PRISM and its potential backdoors into operating systems is the 'backdoor' itself being compromised and used by people other than those it was intended for.

Within the PCI DSS requirements, the controls on Account data (Cardholder Data and Sensitive Authentication Data) are such that if there is a legitimate legal request for access to account data it can be granted. Any law enforcement agency can request access to account data using due legal process within a judicial area that has jurisdiction over the account data.

The organisations that have to comply with the PCI DSS need to be concerned about unauthorised access to account data, hence the concern over a built-in backdoor in an operating system to allow law enforcement access. This concern over backdoors is not new and is not limited to the recently publicised use of PRISM by the USA. There have been recorded attempts over the years at getting backdoors into the Linux kernel by compromising the kernel repositories, along with rumours of backdoors in Microsoft operating systems.

The requirements of the PCI DSS are grouped into six areas:

  1. Build and maintain a secure network
  2. Protect Cardholder Data
  3. Maintain a vulnerability management program
  4. Implement strong access control measures
  5. Regularly monitor and test networks
  6. Maintain an information security policy

PCI DSS compliant organisations have to implement controls that would prevent a breach. An unknown backdoor would be considered a zero-day vulnerability. Requirement 6.2 of the PCI DSS requires a vulnerability management process to be in place to identify and assign a risk rating to vulnerabilities. If you are responsible for PCI DSS compliance, you will need to consider the risk of an unknown backdoor being present in the operating system of your servers.

If we consider what a backdoor is in terms of an operating system, it provides remote access that bypasses the access controls implemented to ensure only properly authenticated and authorised subjects have access to the resources within that system.

That leaves us with two points to consider:

  • Remote access requires a connection through our perimeter around the cardholder data environment (CDE)
  • Cardholder data should be protected whilst stored

The two main requirements of the PCI DSS that cover these points are:

  • Requirement 1: Install and maintain a firewall configuration to protect cardholder data
  • Requirement 3: Protect stored cardholder data

Control of remote access

The requirements for controls on remote access are under sub-section 1.2, "Build firewall and router configurations that restrict connections between untrusted networks and any system components in the cardholder data environment", and sub-section 1.3, "Prohibit direct public access between the Internet and any system component in the cardholder data environment". For a backdoor to be used, an attacker would have to penetrate the network protection or use a tunnel that can pass through proxies and use allowed protocols.

In order to protect the CDE, the PCI DSS requires vulnerability testing and penetration testing, as covered by requirement 11.2 (run internal and external network vulnerability scans at least quarterly and after any significant change in the network) and requirement 11.3 (perform external and internal penetration testing, covering both network and application layers, at least once a year and after any significant infrastructure or application upgrade or modification). This testing is only as good as the vulnerabilities and exploits that are known. The level of testing required by the PCI DSS should considerably reduce the possibility of an attacker gaining access to a server in order to reach a built-in backdoor.

A network built to the requirements of PCI DSS requirement 1 should also reduce the possibility of a tunnel existing that could be used with the backdoor to give access to account data.

Protecting stored account data

The PCI DSS has a requirement (req. 3) that stored cardholder data is protected through the use of strong cryptography and cryptographic key management. A backdoor into the operating system may not be sufficient to gain access to applications that use strong authentication and authorisation through additional third-party applications. The PCI DSS requires that encryption accounts are not tied to the native operating system's access control mechanism. The architecture of any application that processes, stores or transmits account data should be designed to limit access to the cryptographic keys used to encrypt account data and to be independent of the native operating system; this would reduce the effectiveness of any vulnerability within the operating system.


The layered approach built into the PCI DSS does give strong protection to account data, providing the requirements have been implemented fully and correctly.

If you are PCI DSS compliant and are following the requirements of the standard fully, you will be reducing the risk considerably. Compliance with the PCI DSS does not guarantee security; however, it is a minimal level of security that any prudent or diligent organisation should be implementing to protect account data.

In conclusion, there is a potential risk to account data from a possible backdoor, built into operating systems for the PRISM programme, being discovered and used for nefarious purposes. However, the PCI DSS requires organisations to consider the possibility of unknown vulnerabilities, such as a zero-day backdoor, and to have controls in place to minimise the risk, although there will always be a residual risk. Your risk assessment should establish the likelihood of this type of vulnerability occurring and whether your countermeasures have reduced the risk to an acceptable level: acceptable to the organisation if you are doing self-assessment, or acceptable to a QSA if you have to complete a RoC. By implementing the controls in the PCI DSS and being compliant, you are showing due diligence in trying to protect account data.

Monday, 6 May 2013

Tools update (6th May)

My slightly irregular update on new and updated Information Security tools that I have come across or use. The tools are mainly those for PenTesting, although other tools are sometimes included. As a bit of background into how I find these tools: I keep a close watch on Twitter and other websites for updates and new releases, and I also search for pen testing and security projects on SourceForge. Some of the best sites I have found for details of new tools and releases are http://www.toolswatch.org/ & http://tools.hackerjournals.com

Cain & Abel 4.9.44

Cain & Abel is a password recovery tool for Microsoft operating systems.
It allows easy recovery of various kinds of passwords by sniffing the network, cracking encrypted passwords using dictionary and brute-force attacks, decoding scrambled passwords, revealing password boxes, uncovering cached passwords and analysing routing protocols.

Arachni v0.4.2 has been released
Arachni is a modular and high-performance (Open Source) Web Application Security Scanner Framework written in Ruby.

Kali 1.0.3
Kali Linux is an advanced Penetration Testing and Security Auditing Linux distribution.
Kali is a complete re-build of BackTrack Linux, adhering completely to Debian development standards. All-new infrastructure has been put in place, and all tools have been reviewed and packaged.

Thursday, 25 April 2013

Tools update (25th Apr 13)

My slightly irregular update on new and updated Information Security tools that I have come across or use. The tools are mainly those for PenTesting, although other tools are sometimes included. As a bit of background into how I find these tools: I keep a close watch on Twitter and other websites for updates and new releases, and I also search for pen testing and security projects on SourceForge. Some of the best sites I have found for details of new tools and releases are http://www.toolswatch.org/ & http://tools.hackerjournals.com

SSH Risk Assessor (SRA)

SSH Communications Security, the inventor of the Secure Shell and SFTP protocols, today announced the launch of SSH Risk Assessor (SRA), a free tool that provides users with a clear report on risk and compliance exposures in SSH environments. SRA will be available in May 2013; you can request it at http://www.ssh.com/index.php/products/ssh-risk-assessor.html (registration required).

Nessus 5.2
Nessus 5.2 offers the ability to store attachments in the scan reports. Scan results now contain, among other things, remote screenshots via Remote Desktop Protocol (RDP) and VNC, as well as “pictures” of scanned websites.
The new attachments feature provides easy access to supporting information for vulnerability investigation and documentation, as well as offers other interesting information.

Friday, 12 April 2013

Identifying SSL/TLS Ciphers

Increasingly, communication across networks and the Internet uses SSL/TLS to protect transactions. This has been driven by a raft of legislation mandating the use of strong encryption; examples of such regulations, laws and standards from around the world are listed below.

  • Financial Instruments and Exchange Law of 2006
  • FDA TITLE 21 CFR PART 11 (1997)

The SSL/TLS protocols are used by HTTPS to encrypt web pages and the data entered into them. There are a number of versions of SSL/TLS in use; SSL was developed by Netscape for transmitting private documents via the Internet, while TLS was developed by the Internet Engineering Task Force (IETF) to provide similar functionality to SSL.

SSLv1 - Never Published
SSLv2 - released in February 1995
SSLv3 - released in 1996 (RFC 6101, Historical document)
TLSv1.0 - released in January 1999 (RFC 2246)
TLSv1.1 - released in April 2006 (RFC 4346)
TLSv1.2 - released in August 2008  (RFC 5246)

Both SSL and TLS use cryptographic systems to encrypt data; the actual cryptographic system used is negotiated during the SSL/TLS handshake, where the cipher suite is selected and encryption keys are generated and exchanged. Both use asymmetric encryption with the website certificate to exchange the session keys used for symmetric encryption.

In order for SSL/TLS to be acceptable for the encryption of cardholder data, so as to comply with requirements for strong encryption such as section 4 of the PCI DSS, the negotiation phase should result in the use of a strong cipher. This requires the server to support versions of SSL and TLS that do not have well-known vulnerabilities and to use cipher suites based on strong cryptography. The capabilities of a server using HTTPS are advertised by the certificate and the initial phase of negotiating the key exchange.
TLS/SSL supports a large number of cipher suites, where the cipher suite is a combination of symmetric and asymmetric encryption algorithms used to establish secure communication.
Supported cipher suites can be classified based on encryption algorithm strength, key length, key exchange and authentication mechanisms. Some cipher suites offer a stronger level of security than others (e.g. weak cipher suites were developed for export to comply with US export law).
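The set of cipher suites a client is willing to negotiate can be inspected locally. As a minimal sketch (assuming Python 3.6+ and its standard ssl module, which wraps the local OpenSSL build), the following lists the enabled suites, a useful starting point when checking a configuration against a policy:

```python
import ssl

# A context with the interpreter's default (reasonably secure) settings.
ctx = ssl.create_default_context()

# get_ciphers() returns one dict per enabled cipher suite, including its
# OpenSSL name and the protocol version it applies to.
for cipher in ctx.get_ciphers():
    print(cipher["name"], cipher["protocol"])

# Suites can be restricted with an OpenSSL cipher string,
# e.g. forward-secret AES-GCM suites only:
ctx.set_ciphers("ECDHE+AESGCM")
```

The same cipher strings are accepted by the openssl command-line tool, so a policy tested this way can be applied directly to a server configuration.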

Understanding cipher suites

There are a number of different naming conventions.

The OpenSSL naming convention, which is probably the most common, uses at least four nodes in the naming of the cipher suite:
  • key exchange,
  • server certificate authentication,
  • stream/block cipher
  • message authentication
For example, DHE-RSA-AES256-SHA breaks down as: DHE for key exchange, RSA for server certificate authentication, 256-bit key AES for the stream/block cipher, and SHA for the message authentication.

Often the cipher name is prefixed with the protocol, such as SSL or TLS, with an additional node to indicate the mode used in the stream/block cipher, such as

TLS_DHE_RSA_WITH_AES_256_CBC_SHA

which indicates it is for TLS and uses cipher block chaining in the implementation of 256-bit key AES. Some additional terms that may be found are:
  • Anon - Anonymous cipher suites with no key authentication; highly vulnerable to man-in-the-middle attacks.
  • Export - Intentionally crippled cipher suites to conform to US export laws; the symmetric cipher used in export cipher suites typically does not exceed 56 bits.
  • NULL - Null cipher suites do not provide any data encryption and/or data integrity.
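As an illustration of the four-node convention described above, a short sketch that splits an OpenSSL-style suite name into its components (a simplification: real OpenSSL names have several irregular forms, such as extra nodes for GCM mode, which this deliberately rejects):

```python
def parse_cipher_suite(name):
    """Split an OpenSSL-style cipher suite name into its nodes.

    Simplified: assumes the common four-node layout of key exchange,
    authentication, bulk (stream/block) cipher, and MAC.
    """
    parts = name.split("-")
    if len(parts) != 4:
        raise ValueError("unexpected layout: %s" % name)
    kx, auth, cipher, mac = parts
    return {"key_exchange": kx, "authentication": auth,
            "cipher": cipher, "mac": mac}

print(parse_cipher_suite("DHE-RSA-AES256-SHA"))
```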

Auditing and compliance

One of the problems for those tasked with auditing or ensuring compliance with the regulations/standards is knowing what strength of cryptography has been deployed on the servers.

Vulnerability assessment tools that assess the security profile of servers using SSL/TLS will interrogate the server to assess its capabilities, attempting to connect using all versions of SSL and TLS and a range of ciphers from the weakest to the strongest. If the tool supports a PCI DSS test, it will report whether the server meets the requirements of the standard. However, even without a specific PCI DSS test, the tools will report the capabilities of the server, which can be examined to see if the requirements of the standard are being met. The Nessus scanner has the capability of checking SSL services on arbitrary ports, will report weak ciphers, and includes PCI DSS audit checks.

The nmap scanner, when used with the service/version scan option (-sV), will identify SSL services. Additionally, tools such as OpenSSL can be used to audit manually:

openssl s_client -no_tls1 -no_ssl3 -connect <server_Name>:443

Other tools include sslscan, which is included in BackTrack, and the ssl_tests script.



Sunday, 7 April 2013

New Talk being developed

I have been asked to give a talk on Internet security to a group of Occupational Health and Safety professionals at one of their safety association meetings, so the talk will be low on technicality; the synopsis of the talk is given below.

Title: How the web hacks you

The internet has become a feature of all our lives whether at work or at home. Recent developments such as cloud services and the government's push to move its activities online mean that more and more in our personal and work life we are conducting transactions over the web. The web has made a wide range of interactions from finding information to purchasing and banking activities so much more convenient for us. However it has also made it easier for us as individuals and organisations to be attacked via the web with phishing, scams, malware and hacking occurring. Not a day goes past when some form of attack via the web is reported in the news. This talk will outline the reasons why the web is vulnerable, explain some of the more frequent attacks and suggests countermeasures that make it less likely you will be hacked via the web.

Saturday, 6 April 2013

WiFi talk Feb 2013

Received some pictures of the talk I did in Feb this year to the Bedford branch of the BCS at Bedford College.

"WiFi Networks: The Practicalities of Implementing A WiFi Network" is the topic of a talk by Geraint Williams, Information Risk Consultant and Trainer, IT Governance Ltd., and Honorary Visiting Fellow at the University of Bedfordshire.

Secure configuration is becoming ever more important as an increasing number of devices are incorporating wireless technology - from laptops, smartphones, tablets, projectors and cameras, to multimedia entertainment systems and games consoles. The growing demand for allowing BYOD ("Bring Your Own Device") within the corporate network means that larger numbers of organisations are implementing wireless networks.

The wireless network standard 802.11 was originally released in 1997 by the IEEE and, by computing timescales, is a mature technology with a large base of manufacturers and both commercial and domestic users. Despite initiatives like Wireless Protected Setup (WPS) to make installation easier, there are still issues in implementing a network using wireless technology in both the corporate and home environments.

The courts have already convicted paedophiles for piggybacking on neighbours' wireless networks to download material, and hackers for using wireless networks for pirating software, music and films, and for spying on occupants using their own security cameras.

Wireless networks have a history of security problems with flaws in the implementation of WEP and recently with WPS. This talk will look at these issues, the (open source) tools that can be used, and how these apply to the wireless environment. The talk will include practical demonstrations of the tools and techniques discussed in the presentation and will unravel the alphabetic soup of the available standards.

Mar 2013 ADSL Router Analysis

The latest analysis of my ADSL logfiles brings a new twist for March: China has dropped completely from the results, and the new bad boys are the United States, or rather Akamai Technologies, Inc. The underlying scans from Turkey continue.

Country          Source IPs  Attacks     Country          Source IPs  Attacks
China            2           40          United States    13          134
United Kingdom   2           29          Germany          3           25
Russia           1           1           United Kingdom   2           4



There is no clear correlation between the dates of the attacks, although in 2013 the scans were concentrated on two days; the most prolific scanning IP addresses belonged to Akamai Technologies, Inc., whom I have discussed previously on my blog.

My previous discussions on Akamai Technologies, Inc

Tuesday, 2 April 2013

Tools Update (2nd April 2013)

My slightly irregular update on new and updated Information Security tools that I have come across or use. The tools are mainly those for PenTesting although other tools are sometimes included. As a bit of background into how I find these tools, I keep a close watch on twitter and other websites to find updates or new releases, I also search for pen testing and security projects on Source Forge. Some of the best sites I have found for details of new tools and releases are http://www.toolswatch.org/ & http://tools.hackerjournals.com

Wireshark 1.8.6

The current stable release of Wireshark is 1.8.6. It supersedes all previous releases, including all releases of Ethereal. You can also download the latest development release (1.9.2) and documentation.

Updates: Autoruns v11.5, Du (Disk Usage) v1.5, Procdump v5.14, Procmon v3.04, Ru (Registry Usage) v1.0

Scylla is another tool that you can use for penetration testing protocols used by different applications.

Sunday, 31 March 2013

Feb 2013 ADSL Router Analysis

I have now completed 12 months of collecting the log files from my ADSL router and am moving into the second year of data collection. I will be looking at how the 2013 data matches up against the 2012 data on a month-by-month basis.

Source IP addresses are the source addresses from the packet(s) detected; this is not necessarily the true source of the attack.

Year  Countries  Source IPs  Attacks

Attacks coming from Turkish-owned IPs are consistent; however, in Feb 2013 the rest of the attacks show no pattern.

Country          Source IPs  Attacks     Country          Source IPs  Attacks
Netherlands      1           16          United States    2           7
South Africa     1           1
United Kingdom   1           1

Friday, 29 March 2013

Retrieving passwords from /etc/shadow

Using Python to retrieve passwords from the /etc/shadow file on BackTrack 5 R3, as an exercise in improving scripting skills.

Note: This is an educational exercise for those wishing to learn Python as part of becoming a security professional, in order to improve their skills and enable them to write or modify tools, a key part of any pen tester's repertoire. A solution is not given; however, how to get to a working solution is laid out in the notes. By understanding how the shadow password system works, it is possible to write a script to solve the problem.

In the Violent Python book, one of the first examples is retrieving passwords from the /etc/passwd file; after describing the example, it asks readers to modify the script to retrieve passwords from the /etc/shadow file, giving the hint that the shadow file uses SHA512 hashing, the functions for which are in the hashlib library. This is a red herring, as hashlib only outputs either hexadecimal or a string containing non-printable ASCII characters, whereas the shadow file contains only printable ASCII characters.

The first thing is to understand the problem. On BackTrack we know the default password is toor for the user root, which enables us to test our script quite easily. Let us examine a line from the shadow file.


We can see it consists of data separated by colons; the meaning of each segment can be found in the shadow man page.

  • login name
  • encrypted password
  • date of last password change
  • minimum password age
  • maximum password age
  • password warning period
  • password inactivity period
  • account expiration date
  • reserved field

We are only interested in the first two fields.

  • The login name must be a valid account name which exists on the system.
  • For details on how the encrypted password string is interpreted, refer to the man page for crypt.

The encrypted password field consists of data segmented by the "$" symbol; these fields are:
  • Hash method
  • Salt Value
  • Encrypted Password
The hash methods are represented by the following keys
  • $1$ - MD5
  • $5$ - SHA256
  • $6$ - SHA512
In the case of the example above, the fields are
  • username = root
  • hash method = $6$ (SHA512)
  • Salt = 1hjjWhtS
  • Encrypted password = Or2xL2Eedes/ajatnSc0g ..... 6h7eVs8jlkHVptD0
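The field layout above translates directly into code. A minimal sketch of parsing the first two fields of a shadow entry (the entry used here is made up for illustration, reusing the salt from the example; the hash and the trailing ageing fields are placeholders):

```python
def parse_shadow_line(line):
    """Extract the username, hash method, salt and hash from a shadow entry."""
    username, encrypted = line.split(":")[:2]
    # The encrypted field looks like $<method>$<salt>$<hash>
    _, method, salt, pw_hash = encrypted.split("$")
    methods = {"1": "MD5", "5": "SHA256", "6": "SHA512"}
    return username, methods[method], salt, pw_hash

# Hypothetical entry in the same format as the example above:
line = "root:$6$1hjjWhtS$abcDEF123:15000:0:99999:7:::"
print(parse_shadow_line(line))
```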
We still don't have enough information to retrieve the password, as the hashing algorithm, if it is SHA256 or SHA512, is repeated a number of times (rounds). We need to know the number of rounds that have been used, as this can be changed: the more rounds, the longer it takes to hash the password, which is inconvenient for the user but makes it harder for an attacker brute-forcing the password.

If we examine the /etc/login.defs file, we will find a section giving the number of rounds used.

# Only used if ENCRYPT_METHOD is set to SHA256 or SHA512.
# Define the number of SHA rounds.
# With a lot of rounds, it is more difficult to brute forcing the password.
# But note also that it more CPU resources will be needed to authenticate
# users.
# If not specified, the libc will choose the default number of rounds (5000).
# The values must be inside the 1000-999999999 range.
# If only one of the MIN or MAX values is set, then this value will be used.
# If MIN > MAX, the highest value will be used.

We now have enough information to attempt to write a script to retrieve the password. We can copy the shadow file to a text file, "shadow.txt", and we need a dictionary file, "dictionary.txt", containing one word per line.

We can read each line of shadow.txt and parse the line to extract the username, salt and encrypted password. We can then combine the salt with each word from our dictionary.txt file, hash the word and compare it to the encrypted password; if it matches, we have guessed the password. In order to do this we need the correct hashing library: hashlib is not suitable, the correct one is Passlib, which is not installed by default on BackTrack 5 R3 but can easily be added using the following command

easy_install passlib

To use passlib we can send it the guessed word, the salt value and number of rounds to be used, as shown in the following commands to import the hashing routine and call it.

from passlib.hash import sha512_crypt
sha512_crypt.encrypt(word,salt=salt, rounds=5000)

When passlib produces a hash digest, the output consists of a number of fields and, by default, uses 60,000 rounds.

  • Hash method
  • Number of rounds
  • Salt Value
  • Encrypted Password

An oddity is that when the number of rounds is set to 5000, the rounds field is omitted from the output, making it compatible with the shadow file format.

All we need to do is parse the returned line and compare the encrypted value of the guessed word to the value retrieved from the shadow file.
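Tying the steps together, a sketch of the dictionary loop. To keep it self-contained and testable, the hash function is passed in as a parameter; in practice you would pass a wrapper around passlib's sha512_crypt.encrypt as shown earlier, and read the lines from the shadow.txt and dictionary.txt files suggested above:

```python
def crack_shadow(shadow_lines, words, hash_func):
    """Try each dictionary word against each shadow entry.

    hash_func(word, salt) must return the full $6$<salt>$<hash> string,
    e.g. lambda w, s: sha512_crypt.encrypt(w, salt=s, rounds=5000)
    when using passlib. Assumes the default 5000 rounds, i.e. no
    rounds= field in the encrypted password.
    """
    found = {}
    for line in shadow_lines:
        username, encrypted = line.split(":")[:2]
        if not encrypted.startswith("$"):
            continue  # locked or passwordless accounts, e.g. '*' or '!'
        salt = encrypted.split("$")[2]
        for word in words:
            if hash_func(word, salt) == encrypted:
                found[username] = word
                break
    return found
```

Injecting the hash function also makes it trivial to swap in MD5 or SHA256 handling for entries whose method field is $1$ or $5$.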

Security point

Changing the default number of rounds to a higher value can considerably delay an attacker, and often makes tools that use the default value unusable. Assuming 250 ms to hash a word using 5,000 rounds, changing to 60,000 rounds will increase the time to 3 seconds; over a dictionary attack using several thousand words, this will dramatically increase the time to try every word.
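The cost scales linearly with the round count, which makes the impact on a whole dictionary run easy to estimate (the 250 ms and dictionary-size figures are illustrative assumptions):

```python
# Hashing cost scales linearly with the number of rounds.
base_rounds, base_seconds = 5000, 0.25   # assumed cost per word at 5,000 rounds
rounds = 60000
per_word = base_seconds * rounds / base_rounds
print(per_word)                          # seconds per guess at 60,000 rounds

# Total cost of a hypothetical 10,000-word dictionary at each setting, in hours:
for r in (base_rounds, rounds):
    total_hours = base_seconds * r / base_rounds * 10000 / 3600
    print(r, round(total_hours, 2))
```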

Learning outcome

Understanding an operating system and how it is configured will help the security professional develop techniques and tools for testing its security posture. The exercise in the book was impossible to complete without understanding how the shadow password system was configured.

Remote procedure call (RPC)

Remote procedure call (RPC) is an inter-process communication mechanism that allows a computer program to execute a subroutine or procedure in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the connection for this remote interaction.

The idea of treating network operations as remote procedure calls can be traced back to the ARPANET in the 1980s. Xerox implemented one of the first business uses of RPC in 1981, under the name "Courier". The first popular implementation of RPC on Unix was Sun's RPC (now called ONC RPC), which was used as the basis for the Network File System (NFS).

The RPC (Remote Procedure Call) mechanism allows an application to seamlessly invoke remote procedures, as if these procedures were executed locally. There are two main implementations of the RPC mechanism:
  • ONC RPC (Open Network Computing RPC)
  • MSRPC (Microsoft RPC, derived from DCE/RPC)
RPC allows one program to request a service from a program located on another computer in a network without having to understand network details. RPC uses the client/server model: the requesting program is the client and the service-providing program is the server. A number of interesting services run as Remote Procedure Call (RPC) services using dynamically assigned high ports.


To keep track of registered endpoints and present clients with accurate details of listening RPC services, a portmapper service listens on known TCP and UDP ports and maps RPC program numbers and versions to Internet port numbers.

  • The ONC RPC portmapper (also known as rpcbind within Solaris) can be queried using the rpcinfo command found on most Unix-based platforms, and listens on TCP and UDP port 111
  • The Microsoft RPC endpoint mapper (also known as the DCE locator service) listens on both TCP and UDP port 135


Open Network Computing (ONC) Remote Procedure Call (RPC) was originally developed by Sun Microsystems as part of their Network File System project. It was originally described in RFC 1831, published in 1995; RFC 5531, published in 2009, is the current version. Authentication mechanisms used by ONC RPC are described in RFC 2695, RFC 2203, and RFC 2623. In 2009, Sun relicensed the ONC RPC code under the standard 3-clause BSD license, a re-licensing reconfirmed by Oracle Corporation in 2010 following confusion about its scope.

The port mapper (rpc.portmap or just portmap, or rpcbind) is an Open Network Computing Remote Procedure Call (ONC RPC) service that runs on network nodes that provide other ONC RPC services.

The port mapper service always uses TCP or UDP port 111; a fixed port is required for it, as a client would not be able to get the port number for the port mapper service from the port mapper itself. The port mapper must be started before any other RPC servers are started.


Microsoft RPC (Microsoft Remote Procedure Call) is a modified version of DCE/RPC. Additions include support for Unicode strings, implicit handles, and inheritance of interfaces (which are extensively used in DCOM). Examples of Microsoft applications and services that use port 135 for endpoint mapping include Outlook, Exchange, and the Messenger Service.

Depending on the host configuration, the RPC endpoint mapper can be accessed through TCP and UDP port 135, via SMB with a null or authenticated session (TCP 139 and 445), and as a web service listening on TCP port 593


Both the ONC RPC and MSRPC portmappers can be interrogated to provide information on the services running through them.

The rpcinfo tool can be used on Unix systems to enumerate the services running on port 111 (rpcbind) or 32771 (Sun's alternative portmapper). For Windows systems, tools such as epdump can be used. Nmap has a number of useful scripts and libraries:

  • msrpc-enum
  • rpc-grind
  • rpcap-brute
  • rpcap-info
  • rpcinfo
  • msrpc
  • msrpctypes
  • nrpc

In addition to those listed above, a number of the smb scripts use RPC to enumerate services. When enumerating the services we are looking for interesting services such as nfs, rusers and mountd, along with information on SMB.

In networks protected by firewalls and other mechanisms, access to the RPC portmapper service running on port 111 is often filtered. In that case, determined attackers can scan high port ranges (UDP and TCP ports 32771 through 34000 on Solaris hosts) to identify RPC services that are open to direct attack.
You can run Nmap with the -sR option to identify RPC services listening on high ports when the portmapper is inaccessible.
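To make this concrete, here is a small sketch of the two approaches; the target address is a placeholder of my choosing, and the live rpcinfo invocation is shown as a comment since it needs a reachable host.

```shell
# Placeholder target for illustration only
target=192.0.2.10

# Direct portmapper query (requires network access):
#   rpcinfo -p "$target"

# Fallback when port 111 is filtered: version-scan the Solaris
# high-port range with the rpc-grind NSE script.
cmd="nmap -sV --script rpc-grind -p 32771-34000 $target"
echo "$cmd"
```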

Wednesday, 27 March 2013

Server Message Block

The Server Message Block (SMB) operates as an application-layer network protocol mainly used for providing shared access to files, printers, serial ports, and miscellaneous communications between nodes on a network. Using this protocol, an application (or the user of an application) can access files at a remote server as well as other resources, including printers, mailslots, and named pipes. A client application can read, create, and update files on the remote server.

Most usage of SMB involves computers running Microsoft Windows, where it was known as "Microsoft Windows Network" before the subsequent introduction of Active Directory. Corresponding Windows services are the "Server Service" (for the server component) and "Workstation Service" (for the client component).

The Server Message Block protocol can run atop the Session (and lower) network layers in several ways:
  • directly over TCP, port 445
  • via the NetBIOS API, which in turn can run on several transports:
    • on UDP ports 137, 138 & TCP ports 137, 139
    • on several legacy protocols such as NBF


There have been a number of versions of SMB, starting with its forerunner CIFS and running up to version 3 in Windows Server 2012. The versions and the corresponding operating systems are shown below.
  • CIFS – The ancient version of SMB that was part of Microsoft Windows NT 4.0 in 1996.
  • SMB 1.0 (or SMB1) – The version used in Windows 2000, Windows XP, Windows Server 2003 and Windows Server 2003 R2
  • SMB 2.0 (or SMB2) – The version used in Windows Vista (SP1 or later) and Windows Server 2008
  • SMB 2.1 (or SMB2.1) – The version used in Windows 7 and Windows Server 2008 R2
  • SMB 3.0 (or SMB3) – The version used in Windows 8 and Windows Server 2012
Although the protocol is proprietary, its specification has been published to allow other systems to interoperate with Microsoft operating systems that use the new protocol.

The SMB protocol can provide a lot of information for the enumeration of targets, as shown below.

SMB & NMap

Nmap can discover a lot of information about a target using SMB; typical output against a Windows target is shown below.

| smb-os-discovery:
|   OS: Windows XP (Windows 2000 LAN Manager)
|   OS CPE: cpe:/o:microsoft:windows_xp::-
|   Computer name: insecure-62400a
|   NetBIOS computer name: INSECURE-62400A
|   Workgroup: WORKGROUP
|_  System time: 2013-03-26T17:10:49+00:00
| smb-security-mode:
|   Account that was used for smb scripts: <blank>
|   User-level authentication
|   SMB Security: Challenge/response passwords supported
|_  Message signing disabled (dangerous, but default)
|_smbv2-enabled: Server doesn't support SMBv2 protocol
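Output like this is easy to post-process with standard text tools. As a rough sketch, the snippet below pulls the OS and NetBIOS name out of a saved copy of two lines from the sample above; in practice you would feed a file written with nmap -oN.

```shell
# Two lines from the smb-os-discovery sample, held in a variable for
# this sketch; in practice read from a file created with nmap -oN.
sample='|   OS: Windows XP (Windows 2000 LAN Manager)
|   NetBIOS computer name: INSECURE-62400A'

os=$(printf '%s\n' "$sample" | awk -F': ' '/OS:/ {print $2}')
name=$(printf '%s\n' "$sample" | awk -F': ' '/NetBIOS computer name:/ {print $2}')
echo "$os"
echo "$name"
```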

Nmap supports the following scripts with their designated categories.

NMAP scripts (category)

  • smb-brute (brute) (intrusive)
  • smb-check-vulns (dos) (exploit) (intrusive) (vuln)
  • smb-enum-domains (discovery) (intrusive)
  • smb-enum-groups (discovery) (intrusive)
  • smb-enum-processes (discovery) (intrusive)
  • smb-enum-sessions (discovery) (intrusive)
  • smb-enum-shares (discovery) (intrusive)
  • smb-enum-users (auth) (intrusive)
  • smb-flood (dos) (intrusive)
  • smb-ls (safe) (discovery)
  • smb-mbenum (safe) (discovery)
  • smb-os-discovery (safe) (default) (discovery)
  • smb-print-text (intrusive)
  • smb-psexec (intrusive)
  • smb-security-mode (safe) (default) (discovery)
  • smb-server-stats (discovery) (intrusive)
  • smb-system-info (discovery) (intrusive)
  • smb-vuln-ms10-054 (intrusive) (vuln)
  • smb-vuln-ms10-061 (intrusive) (vuln)
  • smbv2-enabled (safe) (default)

Some of these scripts require you to specify the unsafe script argument "--script-args=unsafe=1" in order for them to run.

smb-flood is not recommended as a general purpose script, because a) it is designed to harm the server and has no useful output, and b) it never ends (until timeout).

The smb-psexec script is not included by default and needs to be downloaded from http://nmap.org/psexec/
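Putting that together, a typical invocation might look like the echoed command below; the target address is a placeholder and the script selection is just one example of a sensible discovery set.

```shell
target=192.0.2.10   # placeholder target
scripts="smb-os-discovery,smb-enum-shares,smb-enum-users"
cmd="nmap -p 445 --script $scripts --script-args=unsafe=1 $target"
echo "$cmd"
```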

Null Sessions

A key feature an attacker will be looking for is null sessions, where a connection can be made as an anonymous user using a command such as the one shown below.

net use \\<target>\IPC$ "" /u:""

Once a connection has been formed it is possible to enumerate shares on the remote system, a lot of this activity can be done using the scripts in Nmap.
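On the Unix side, a rough equivalent of the net use command is an anonymous smbclient or rpcclient connection. The target address below is a placeholder, and the live commands are shown as comments since they need a reachable host.

```shell
target=192.0.2.10   # placeholder target

# Live checks (need a reachable host):
#   smbclient -L "//$target" -N     # list shares with no password
#   rpcclient -U "" -N "$target"    # anonymous MSRPC session

cmd="smbclient -L //$target -N"
echo "$cmd"
```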

Friday, 22 March 2013

nbtstat usage

Putting together a resource on nbtstat.

Nbtstat is a diagnostic tool for NetBIOS provided by Microsoft in several of its Windows versions. It can be a useful source of information for a PenTester.

Understanding NetBIOS and the output of the nbtstat tool can help identify machines and servers within a network.

From Wikipedia

NetBIOS is an acronym for Network Basic Input/Output System. It provides services related to the session layer of the OSI model allowing applications on separate computers to communicate over a local area network.  NetBIOS normally runs over TCP/IP via the NetBIOS over TCP/IP (NBT) protocol. This results in each computer in the network having both an IP address and a NetBIOS name corresponding to a (possibly different) host name.

NetBIOS runs over TCP/IP and is the network component that performs computer name to IP address mapping (name resolution). It provides three distinct services:

  • Name service for name registration and resolution.
  • Datagram distribution service for connectionless communication.
  • Session service for connection-oriented communication.

These operate over the following network ports

  • the name service operates on UDP port 137 (TCP port 137 can also be used, but rarely is).
  • the datagram service runs on UDP port 138
  • the session service runs on TCP port 139.

Computer names

Microsoft Windows computers are identified by names: there is the DNS host name, which is out of scope for this article, and the NetBIOS computer name, which has limitations.

Minimum name length: 1 character.
Maximum name length: 15 characters.

You may have expected the length to be 16 characters; however, the last character is reserved to identify the functionality that is installed on the registered network device.

From Wikipedia

The NetBIOS name is 16 ASCII characters, however Microsoft limits the host name to 15 characters and reserves the 16th character as a NetBIOS Suffix. This suffix describes the service or name record type such as host record, master browser record, or domain controller record. The host name (or short host name) is specified when Windows networking is installed/configured, the suffixes registered are determined by the individual services supplied by the host.
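As a tiny illustration of this 15+1 layout, the snippet below space-pads a host name to exactly 15 characters and tags it with a suffix byte, rendered in the angle-bracket hex notation used on this page. The name and suffix are just example values.

```shell
# Pad/truncate the host name to exactly 15 characters, then append the
# one-byte suffix (rendered in angle-bracket hex notation).
name="INSECURE-62400A"
suffix="20"   # 0x20 = File Server service
record=$(printf '%-15.15s<%s>' "$name" "$suffix")
echo "$record"
```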

I have collated some of the unique identifiers from a number of sources, and these are listed below.

Unique Identifiers

Number (Hex) Usage for unique usernames Name
03 name of the user currently logged on in the WINS database <username>
Number (Hex) Usage for unique names Name
00 Workstation, Domain Name <computername>
01 Messenger (Workstation) <computername>
03 Messenger (User) <computername>
06 Remote Access Server <computername>
1F NetDDE <computername>
20 File Server <computername>
21 Remote Access Server Client <computername>
22 Microsoft Exchange Interchange <computername>
23 Microsoft Exchange Store <computername>
24 Microsoft Exchange Directory <computername>
30 Modem Sharing Server Service <computername>
31 Modem Sharing Client Service <computername>
42 McAfee Anti-Virus <computername>
43 SMS clients remote control <computername>
44 SMS Administrators Remote Control tool <computername>
45 SMS Clients Remote Chat <computername>
46 SMS Clients Remote Transfer <computername>
4C DEC Pathworks TCPIP service on Windows NT <computername>
52 DEC Pathworks TCPIP service on Windows NT <computername>
53 Domain Name Service (DNS)?? <computername>
87 Microsoft Exchange MTA <computername>
6A Microsoft Exchange IMC <computername>
1B Domain Master Browser <computername>
1F NetDDE Service ID <computername>
BE Network monitor agent <computername>
BF Network monitor utility ID <computername>
Number (Hex) Usage for group names Name
00 Name Domain <domain>
01 Master Browser <\\--__MSBROWSE__>
20 Internet Group name ID <domain>
1C Domain Controller <domain>
1D Master Browser name <domain>
1E Browser Service Elections <domain>
Number (Hex) Usage for group names (IIS) Name
1C IIS <INet~Services>
00 IIS <IS~computername>
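A handful of these suffixes turn up constantly during enumeration. As a sketch, a small lookup helper (values taken from the table above; the function name is my own) might look like:

```shell
# Map a few common NetBIOS suffixes (hex) to their meaning, per the
# table above.
nb_suffix() {
  case "$1" in
    00) echo "Workstation / Domain Name" ;;
    1B) echo "Domain Master Browser" ;;
    1C) echo "Domain Controller" ;;
    20) echo "File Server" ;;
    *)  echo "unknown" ;;
  esac
}
nb_suffix 20
```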

nbtstat tool

Nbtstat requires no authentication and can be run against Windows computers across domains and workgroups.

NBTSTAT [ [-a RemoteName] [-A IP address] [-c] [-n] [-r] [-R] [-RR] [-s] [-S] [interval] ]

-a   (adapter status)
     Lists the remote machine's name table given its name
-A   (Adapter status)
      Lists the remote machine's name table given its IP address.
-c   (cache)        
     Lists NBT's cache of remote [machine] names and their IP addresses
-n   (names)        
     Lists local NetBIOS names.
-r   (resolved)    
     Lists names resolved by broadcast and via WINS
-R   (Reload)      
     Purges and reloads the remote cache name table
-S   (Sessions)    
     Lists sessions table with the destination IP addresses
-s   (sessions)    
     Lists sessions table converting destination IP addresses to computer NETBIOS names.
-RR  (ReleaseRefresh)
     Sends Name Release packets to WINS and then, starts Refresh


RemoteName - Remote host machine name.
IP address - Dotted decimal representation of the IP address.
interval - Redisplays selected statistics, pausing interval seconds between each display. Press Ctrl+C to stop redisplaying statistics.

The useful options are -a and -A, which allow querying of remote machines by name or by IP address respectively; -n returns the local host's information.

From a security point of view, by analysing the NetBIOS information on a computer it is possible to identify a machine's NetBIOS name, the domain it is part of, the domain controller, and the function of the machine. All this comes from understanding NetBIOS and the character type identifier. To gain this information we need to use the nbtstat tool or its equivalents.

Other tools

The output of nbtstat is collected by a number of standard PenTesting tools; below is an example of the output from Nmap when run against a target machine.

| nbstat:
|   NetBIOS name: INSECURE-62400A, NetBIOS user: <unknown>, NetBIOS MAC: 08:00:27:fc:55:b3 (Cadmus Computer Systems)
|   Names
|     INSECURE-62400A<00>  Flags: <unique><active>
|     WORKGROUP<00>        Flags: <group><active>
|     INSECURE-62400A<20>  Flags: <unique><active>
|     WORKGROUP<1e>        Flags: <group><active>
|     WORKGROUP<1d>        Flags: <unique><active>
|_    \x01\x02__MSBROWSE__\x02<01>  Flags: <group><active>

This clearly shows the machine is a workstation within a workgroup; had it been part of a domain, we could also identify the domain and the domain controller.
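The odd-looking \x01\x02__MSBROWSE__\x02 entry is simply the master-browser group name with its non-printable bytes escaped. You can reproduce it like so:

```shell
# Emit the master browser group name and render its control bytes
# (0x01, 0x02) visibly with cat -v.
decoded=$(printf '\001\002__MSBROWSE__\002' | cat -v)
echo "$decoded"
```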

Useful NMAP scripts

broadcast-netbios-master-browser - Attempts to discover master browsers and the domains they manage.
nbstat - Attempts to retrieve the target's NetBIOS names and MAC address.

Tuesday, 19 March 2013

PenTest Machine configuration (pt2)

PenTest machine configuration (pt2)

Continuing my set of notes on preparing a machine for the CREST Registered Tester exam. An important disclaimer: this is the set-up and tools that I use for some PenTesting; it is not a recommended set-up for the exam, and each candidate needs to assemble their own test machine to suit their own methodology.

One of the items I set up was a Windows machine using VirtualBox; this machine allows easy access to Windows-based tools to help with attacking Windows targets. The tools I have installed are as follows:

  • Cain & Abel
  • SysInternals suite of tools
  • Command here (Power toy)
  • Winfingerprint
  • nbtscan
  • Scanline
  • Netcat
  • TFTP 32 server
  • hxdef100
  • dcomexploit

The last couple are ones that are occasionally useful on old unpatched machines; the likelihood of needing them is very low, but it does not hurt to have some old faithfuls around.

Sunday, 17 March 2013

Nikto & MagicTree

MagicTree is a PenTesting productivity tool on BackTrack 5 R3; from a standard install it cannot launch Nikto from within itself and access the output file.

What are Nikto & Magic Tree

Nikto is an Open Source (GPL) web server scanner which performs comprehensive tests against web servers for multiple items, including over 6500 potentially dangerous files/CGIs, checks for outdated versions of over 1250 servers, and version specific problems on over 270 servers. It also checks for server configuration items such as the presence of multiple index files, HTTP server options, and will attempt to identify installed web servers and software. Scan items and plugins are frequently updated and can be automatically updated

MagicTree is a penetration tester productivity tool. It is designed to allow easy and straightforward data consolidation, querying, external command execution and report generation.  "Tree" is because all the data is stored in a tree structure, and "Magic" is because it is designed to magically do the most cumbersome and boring part of penetration testing - data management and reporting.

Configuring Nikto to work with other tools such as MagicTree on Backtrack 5R3

Create a symbolic link for nikto.pl

ln -s /pentest/web/nikto/nikto.pl /usr/local/bin

Edit /pentest/web/nikto/nikto.pl, modify the configfile variable line to be

$VARIABLES{'configfile'} = "/pentest/web/nikto/nikto.conf";

Edit the nikto.conf file and set EXECDIR to /pentest/web/nikto


Using Nikto from Magic Tree

Queries can be run on the data gathered within MagicTree to generate host and port number details; these can be fed into Nikto using the following command. The use of $out allows the XML-formatted data from Nikto to be merged with the existing data in MagicTree.

nikto.pl -host $host -port $port -Format xml -output $out


Nikto - http://www.cirt.net/nikto2
MagicTree - http://www.gremwell.com/what_is_magictree

PenTest machine configuration

Notes on preparing a machine for the CREST Registered Tester exam. An important disclaimer: this is the set-up and tools that I use for some PenTesting; it is not a recommended set-up for the exam, and each candidate needs to assemble their own test machine to suit their methodology.

I am starting with Backtrack 5R3 as a basis


Installed VirtualBox for running a Windows virtual machine, to allow access to Windows-based tools for testing Windows clients.

Download the version for Ubuntu 10.04 from the official site

wget http://download.virtualbox.org/virtualbox/4.0.10/virtualbox-4.0_4.0.10-72479~Ubuntu~lucid_i386.deb

Install some dependencies and install virtualbox

apt-get -f -y autoremove
apt-get install libqt4-opengl libqt4-opengl-dev
dpkg -i virtualbox-4.0_4.0.10-72479~Ubuntu~lucid_i386.deb 


Installed the rsh client to allow use of the r* services; if it is not installed, attempting to rlogin falls back to SSH.

apt-get install rsh-client


Installed TFTP clients and services.

apt-get install tftpd

apt-get install atftpd


OpenVAS is a fork of Nessus v2 and, to be honest, I would prefer to use the Professional Feed version of Nessus for this testing; however, we are working on a couple of jobs for clients and I can't hijack the application for use on the test. Since I have used OpenVAS before and it has given good results, I will be using that. However, on the BackTrack distro it requires setting up and some additional configuration to get it working fully.

At any stage of the configuration you can always run the following script to check what is missing:


The stages to go through are

Configure certificates


Then sync the NVTs:


Create an admin account:

openvasad -c 'add_user' -n admin -r Admin

Configure access for the OpenVas Manager:

 openvas-mkcert-client -n om -i

Start the scanner (this will take some time after the NVTs have been sync'd):


Finally rebuild the database and run the services:

openvasmd --rebuild
openvasmd -p 9390
openvasad -p 9393
gsad --http-only -p 9392

Now browse to port 9392 on your machine and log in with the account you created, or use the Greenbone Security Desktop. OpenVAS will be unable to run some additional scanners; to enable it to use them, do the following.

Install Arachni:

apt-get update
apt-get install arachni

Create the following symbolic links:

ln -s /pentest/web/dirb/dirb /usr/local/bin
ln -s /pentest/web/nikto/nikto.pl /usr/local/bin

Edit /pentest/web/nikto/nikto.pl, modify the configfile variable line to be

$VARIABLES{'configfile'} = "/pentest/web/nikto/nikto.conf";

Edit /pentest/web/nikto/nikto.conf and set EXECDIR to /pentest/web/nikto


In order to get Wapiti to work it needs more than a symbolic link in the /usr/local/bin directory. Both of the Python scripts (wapiti.py & vulnerability.py) should be owned by root and executable; in my set-up only the vulnerability.py script needed setting to executable.

chmod 755 /pentest/web/wapiti/vulnerability.py

Create a wapiti script in the /usr/local/bin directory containing the following lines

#!/bin/sh
cd /pentest/web/wapiti/
./wapiti.py $*

Ensure it is executable and owned by root. Finally, to make it easier to start OpenVAS, create a start-up script which contains the following

openvasmd --rebuild
openvasmd -p 9390
openvasad -p 9393
gsad --http-only -p 9392
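As a sketch, that start-up script can be dropped into place like this; the path is my own choice for illustration, so adjust it to suit (e.g. /usr/local/bin).

```shell
# Write the OpenVAS start-up script described above and make it
# executable; /tmp is used here purely for the sketch.
script=/tmp/openvas-start
cat > "$script" <<'EOF'
#!/bin/sh
openvasmd --rebuild
openvasmd -p 9390
openvasad -p 9393
gsad --http-only -p 9392
EOF
chmod 755 "$script"
```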

Kali update

One of the tools I would have liked installed within Kali is Armitage; however, quick as a flash, the Fast and Easy Hacking website posted instructions for installing it (http://www.fastandeasyhacking.com/faq), and the good news is that it is in the Kali Linux repository.

So using

apt-get install armitage

This installed one of my favourite tools. In order to use it you need to start Metasploit, which is described on the Kali documentation pages.

service postgresql start
service metasploit start

As always, start the console; when it is started for the first time it will create its own database.


If you would prefer to have PostgreSQL and Metasploit launch at startup, you can use update-rc.d to enable the services as follows.

update-rc.d postgresql enable
update-rc.d metasploit enable

My next look will be OpenVAS

Thursday, 14 March 2013

Installing Kali

Yesterday I managed to download the new enterprise version of BackTrack, which is called Kali, and last night decided to play with installing it into a virtual machine. I used to use VMware a lot but have been using VirtualBox recently.

For this exercise I decided to try to install it into VirtualBox. After going through the graphical install routine from the ISO boot menu, on restarting I got a critical error, and at midnight I decided to give it a rest and experiment with the installation later.

Tonight, although a VMware version of Kali is available, I decided to install from the ISO image into VMware. There were absolutely no problems with the process, although to get the open VM tools working fully in graphical mode I needed to run an additional command that is not on the Kali documentation website http://docs.kali.org/general-use/install-vmware-tools-kali-guest

In addition to using

apt-get install open-vm-tools

you need to run the following

apt-get install open-vm-toolbox

A restart later and there was instant moving of the mouse between guest and host.

Now to do some further playing, and I will post anything interesting.

Wednesday, 13 March 2013

Tools Update (13th Mar 13)

My slightly irregular update on new and updated Information Security tools that I have come across or use. The tools are mainly those for PenTesting, although other tools are sometimes included. As a bit of background into how I find these tools: I keep a close watch on Twitter and other websites to find updates or new releases, and I also search for pen testing and security projects on SourceForge. Some of the best sites I have found for details of new tools and releases are http://www.toolswatch.org/ & http://tools.hackerjournals.com

Big news is the release of Kali Linux, the enterprise version of Backtrack, the announcement http://www.backtrack-linux.org/backtrack/kali-linux-has-been-released/ points to a new web site http://www.kali.org/ that supports the project.

In their words "From the creators of BackTrack comes Kali Linux, the most advanced and versatile penetration testing distribution ever created. BackTrack has grown far beyond its humble roots as a live CD and has now become a full-fledged operating system. With all this buzz, you might be asking yourself"

I have downloaded a copy to try, but the big difference listed is that Kali Linux is geared towards professional penetration testing and security auditing.

As such, several core changes have been implemented in Kali Linux which reflect these needs:

  • Single user, root access by design: Due to the nature of security audits, Kali Linux is designed to be used in a “single, root user” scenario
  • Network services disabled by default: Kali Linux contains sysvinit hooks which disable network services by default. These hooks allow us to install various services on Kali Linux, while ensuring that our distribution remains secure by default, no matter what packages are installed. Additional services such as Bluetooth are also blacklisted by default
  • Custom Linux kernel: Kali Linux uses an upstream kernel, patched for wireless injection.

An interesting development is the availability of a distro suitable for the Raspberry Pi.

Tuesday, 26 February 2013

Tools Update (26th Feb)

My slightly irregular update on new and updated Information Security tools that I have come across or use. The tools are mainly those for PenTesting, although other tools are sometimes included. As a bit of background into how I find these tools: I keep a close watch on Twitter and other websites to find updates or new releases, and I also search for pen testing and security projects on SourceForge. Some of the best sites I have found for details of new tools and releases are http://www.toolswatch.org/ & http://tools.hackerjournals.com

Burp Suite Professional v1.5.06

This release adds a number of useful new features and bugfixes

  • New CSRF technique
  • New SSL options

Wireshark v1.9.0

Development release

Pwn Pad

The Pwn Pad - a commercial grade penetration testing tablet which provides professionals an unprecedented ease of use in evaluating wired and wireless networks.  The sleek form factor of the Pwn Pad makes it an ideal product choice when on the road or conducting a company or agency walk-through.  This highspeed, lightweight device, featuring extended battery life and 7” of screen real estate offers pentesters an alternative never known before.

Monday, 18 February 2013

Tools update (18th Feb)

My slightly irregular update on new and updated Information Security tools that I have come across or use. The tools are mainly those for PenTesting, although other tools are sometimes included. As a bit of background into how I find these tools: I keep a close watch on Twitter and other websites to find updates or new releases, and I also search for pen testing and security projects on SourceForge. Some of the best sites I have found for details of new tools and releases are http://www.toolswatch.org/ & http://tools.hackerjournals.com

WAppEx v2.0 : Web Application exploitation Tool

WAppEx is an integrated web application security assessment and exploitation platform designed with everyone from security professionals to web application hobbyists in mind. It suggests a security assessment model which revolves around an extensible exploit database, and complements that with the various tools required to perform all stages of a web application attack.

Automated HTTP Enumeration Tool

A Python script for automated HTTP enumeration. Currently only in the initial beta stage, it includes basic checking of files including the Apache server-status page, as well as IIS WebDAV and Microsoft FrontPage extensions; many more features will be added, which will make a lot of the enumeration process quick and simple.

Weevely 1.01 released

Weevely is a stealth PHP web shell that simulates an SSH-like connection. It is an essential tool for web application post-exploitation, and can be used as a stealth backdoor or as a web shell to manage legitimate web accounts, even free hosted ones.

BackBox Linux 3.01 updated to include Weevely

BackBox is a Linux distribution based on Ubuntu. It has been developed to perform penetration tests and security assessments. It is designed to be fast and easy to use, and provides a minimal yet complete desktop environment thanks to its own software repositories, which are always updated to the latest stable versions of the most used and best-known ethical hacking tools.

Sunday, 10 February 2013

Keep It Simple Stupid

Reading about the horse/beef issue made me think of the KISS principle, from the US Navy in the 1960s: Keep It Simple, Stupid.

The supply chain for Findus reads like this "A Swedish brand - Findus - supplying British supermarkets employed a French company, Comigel, to make its ready meals. To get meat for its factory in Luxembourg, Comigel called on the services of another French firm Spanghero. It used an agent in Cyprus, who in turn used an agent in the Netherlands, who placed the order at an abattoir in Romania."

Additionally, isn't the problem with the banks due to over-complicated financial instruments that no one fully understands? All this shows that if a process gets over-complicated it is liable to break and have faults; a simple process is easier to fault-find and rectify.

The principle of KISS applies to information security as well as software development: over-complicated systems and software are going to lead to vulnerabilities that will lead to systems and organisations being compromised. Capability Maturity Models (CMM) should look at ensuring that processes are not over-complicated and are easily understood by all those involved in an organisation. When a process is fully understood it is going to be a more mature process.

Tools update (10th Feb)

My slightly irregular update on new and updated Information Security tools that I have come across or use. The tools are mainly those for PenTesting, although other tools are sometimes included. As a bit of background into how I find these tools: I keep a close watch on Twitter and other websites to find updates or new releases, and I also search for pen testing and security projects on SourceForge. Some of the best sites I have found for details of new tools and releases are http://www.toolswatch.org/ & http://tools.hackerjournals.com

DotDotPwn v3.0.1

The latest version of DotDotPwn, v3.0.1, has been released. DotDotPwn is a flexible, intelligent fuzzer for discovering directory traversal vulnerabilities in software such as HTTP/FTP/TFTP servers and in web platforms such as CMSs, ERPs and blogs. It is written in Perl and can be run under either *NIX or Windows platforms. It also has a protocol-independent module to send the desired payload to the host and port specified, and it can be used in a scripting way via the STDOUT module.


Sysinternals updates

Pendmoves v1.2: This update to Pendmoves adds support for 64-bit directories.
Process Explorer v15.3: This major Process Explorer release includes heat-map display for process CPU, private bytes, working set and GPU columns, sortable security groups in the process properties security page, and tooltip reporting of tasks executing in Windows 8 Taskhostex processes. It also creates dump files that match the bitness of the target process and works around a bug introduced in Windows 8 disk counter reporting.
Sigcheck v1.91: This update to Sigcheck prints the link time for executable files instead of the file last-modified time, and fixes a bug introduced in 1.9 where the –q switch didn’t suppress the print out of the banner.
Zoomit v4.42: Zoomit now includes an option to suppress zoom-in and zoom-out animation to better support remote RDP sessions and fixes a bug that caused static zoom to snap to the top and left side of the screen in some cases

NOWASP (Mutillidae)

NOWASP (Mutillidae) is a free, open-source, deliberately vulnerable web application providing a target for web-security enthusiasts. NOWASP (Mutillidae) can be installed on Linux and Windows using LAMP, WAMP, and XAMPP, for users who do not want to administer a web server.