October 23, 2012
I hope you have read Ron’s excellent post about .git directories on web sites and how you can use an nmap script to find out if you have them. In the comments you can even find a Google dork to locate indexed ones. The problem with his approach is that it assumes directory browsing is enabled, which was not the case for me. A recent post on carnal0wnage also gives good tips about getting .git files off a web server. It’s hard to miss the DVCS-Pillage tool and Baldwin’s paper. DVCS-Pillage is a pretty good tool, but it did not support HTTPS for git (you can find a patch on my GitHub page) and it was also very slow due to its repeated use of “git log”.
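Even when directory browsing is disabled, an exposed .git directory can often be confirmed by requesting a well-known file such as .git/HEAD directly. A minimal sketch in Python, assuming a placeholder URL; the helper names here are illustrative and not part of any of the tools mentioned above:

```python
import urllib.request

def looks_like_git_head(body: bytes) -> bool:
    # A valid HEAD file is either a symbolic ref ("ref: refs/heads/...")
    # or a bare 40-hex commit id (detached HEAD).
    text = body.decode("utf-8", errors="replace").strip()
    if text.startswith("ref: refs/"):
        return True
    return len(text) == 40 and all(c in "0123456789abcdef" for c in text)

def check_exposed_git(base_url: str) -> bool:
    # Request a known file instead of browsing the directory, so this
    # works even with directory listing turned off.
    try:
        url = base_url.rstrip("/") + "/.git/HEAD"
        with urllib.request.urlopen(url, timeout=10) as resp:
            return looks_like_git_head(resp.read())
    except Exception:
        return False
```

A 404 page or an HTML error document fails the check, so a plain 200 status alone is not trusted.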
A step further is the git plugin for Metasploit. Still, all of these tools merely hope that they have downloaded everything.
I really wanted support for branches other than master, and I wanted to be sure I had downloaded the whole git tree, so I could get ALL files, and do it fast. So, of course, I made my own solution in Perl, available here (git only for now):
You just need to say:
rip-git.pl -v -u http://www.example.com/.git/
rip-git.pl will download the git repository, check what is missing, and download that too, so you can fully check out the source.
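Fetching missing objects one by one is possible because loose git objects live at a predictable path derived from their SHA-1: the first two hex characters name a subdirectory and the remaining 38 name the file. A small sketch of that mapping (the helper name is hypothetical, not taken from rip-git.pl; objects stored in packfiles are a separate case not covered here):

```python
def loose_object_url(base_url: str, sha1: str) -> str:
    # Loose objects are stored under .git/objects/<xx>/<remaining 38 chars>,
    # where <xx> is the first two hex characters of the object id.
    return f"{base_url.rstrip('/')}/objects/{sha1[:2]}/{sha1[2:]}"
```

For example, object d670460b4b4aece5915caf5c68d12f560a9fe3e4 under http://www.example.com/.git/ maps to .git/objects/d6/70460b4b4aece5915caf5c68d12f560a9fe3e4.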
Note that it will also do a “git checkout -f” for you. This assumes that you have git installed on the same machine as the script, since it uses git commands.
It also supports other branches (just specify -b branch for anything other than master).
One neat trick is that my tool actually uses “git fsck” to find missing entries and download them, which is much faster than repeated “git log” runs.
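The idea is that “git fsck” reports referenced-but-absent objects on lines such as “missing blob &lt;sha1&gt;”, each of which can then be fetched and the check repeated until nothing is missing. A hypothetical parser for that output, as a sketch only (this is not rip-git.pl’s actual code):

```python
import re

def missing_objects(fsck_output: str) -> set:
    # "git fsck" reports absent objects on lines like:
    #   missing blob d670460b4b4aece5915caf5c68d12f560a9fe3e4
    #   missing tree 2d9263c6d23595e7cb2a21e5ebbb53655278dff8
    # Collect the 40-hex object ids so each can be fetched from the server.
    pattern = re.compile(r"^missing (?:blob|tree|commit|tag) ([0-9a-f]{40})$")
    found = set()
    for line in fsck_output.splitlines():
        m = pattern.match(line.strip())
        if m:
            found.add(m.group(1))
    return found
```

Dangling objects are deliberately ignored; only genuinely missing ones need another round trip to the server.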
Let me know if it works for you!