SKIPFISH(1)
NAME
skipfish - active web application security reconnaissance tool
SYNOPSIS
skipfish [options] -o output-directory start-url [start-url2 ...]
DESCRIPTION
skipfish is an active web application security reconnaissance tool. It
prepares an interactive sitemap for the targeted site by carrying out a
recursive crawl and dictionary-based probes. The resulting map is then
annotated with the output from a number of active (but hopefully
non-disruptive) security checks. The final report generated by the tool
is meant to serve as a foundation for professional web application
security assessments.
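A minimal invocation, assuming a placeholder target URL and a fresh output directory, looks like this:

  $ skipfish -o /tmp/skipfish-out http://example.com/

The report is written as index.html inside the output directory.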
OPTIONS
Authentication and access options (example below):
- -A user:pass
use specified HTTP authentication credentials
- -F host:IP
pretend that 'host' resolves to 'IP'
- -C name=val
append a custom cookie to all requests
- -H name=val
append a custom HTTP header to all requests
- -b (i|f)
use headers consistent with MSIE / Firefox
- -N do not accept any new cookies
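For instance, to scan an authenticated area with HTTP basic-auth credentials, a fixed session cookie, and Firefox-consistent headers (all values are placeholders):

  $ skipfish -A admin:s3cret -C "SID=1234abcd" -b f -N -o out http://example.com/

Here -N keeps the scanner from accepting new cookies, so a server-issued Set-Cookie cannot replace the supplied session.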
Crawl scope options (example below):
- -d max_depth
maximum crawl tree depth (default: 16)
- -c max_child
maximum children to index per node (default: 1024)
- -r r_limit
max total number of requests to send (default: 100000000)
- -p crawl%
node and link crawl probability (default: 100%)
- -q hex
repeat a scan with a particular random seed
- -I string
only follow URLs matching 'string'
- -X string
exclude URLs matching 'string'
- -S string
exclude pages containing 'string'
- -D domain
also crawl cross-site links to a specified domain
- -B domain
trust, but do not crawl, content included from a third-party domain
- -O do not submit any forms
- -P do not parse HTML and other documents to find new links
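As an illustration, a shallow crawl confined to one application path, with logout links excluded and one related domain also in scope (paths and domains are hypothetical):

  $ skipfish -d 5 -I /app/ -X /logout -D api.example.com -o out http://example.com/app/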
Reporting options (example below):
- -o dir
write output to specified directory (required)
- -J be less noisy about MIME / charset mismatches on probably static content
- -M log warnings about mixed content
- -E log all HTTP/1.0 / HTTP/1.1 caching intent mismatches
- -U log all external URLs and e-mails seen
- -Q completely suppress duplicate nodes in reports
- -u be quiet, do not display realtime scan statistics
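A scan that logs mixed content and all external URLs while suppressing the realtime statistics display might look like this:

  $ skipfish -M -U -u -o report http://example.com/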
Dictionary management options (example below):
- -W wordlist
load an alternative wordlist (skipfish.wl)
- -L do not auto-learn new keywords for the site
- -V do not update wordlist based on scan results
- -Y do not fuzz extensions during most directory brute-force steps
- -R age
purge words that resulted in a hit more than 'age' scans ago
- -T name=val
add new form auto-fill rule
- -G max_guess
maximum number of keyword guesses to keep in the jar (default: 256)
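For example, to scan with an alternative wordlist, skip extension fuzzing, and supply auto-fill rules for login forms (the wordlist path and field names are placeholders):

  $ skipfish -W dictionaries/complete.wl -Y -T login=test123 -T password=test321 -o out http://example.com/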
Performance settings (example below):
- -g max_conn
maximum simultaneous TCP connections, global (default: 50)
- -m host_conn
maximum simultaneous connections, per target IP (default: 10)
- -f max_fail
maximum number of consecutive HTTP errors to accept (default: 100)
- -t req_tmout
total request response timeout (default: 20 s)
- -w rw_tmout
individual network I/O timeout (default: 10 s)
- -i idle_tmout
timeout on idle HTTP connections (default: 10 s)
- -s s_limit
response size limit (default: 200000 B)
- -h, --help
Show summary of options.
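To throttle a scan against a fragile host, the connection, failure, and timeout limits can be tightened; the values below are illustrative, not recommendations:

  $ skipfish -g 10 -m 2 -f 20 -t 30 -o out http://example.com/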
AUTHOR
skipfish was written by Michal Zalewski <lcamtuf@google.com>.
This manual page was written by Thorsten Schifferdecker
<tsd@debian.systs.org>, for the Debian project (and may be used by others).