First post on my new blog! I hope you don't mind the cheesy domain name; once someone said it... I couldn't allow myself to choose any other. College work has been keeping me occupied, and the only other thing I've worked on since my last post is this site. I had to give myself an easier way of making blog posts, so I added a content management system of sorts... A FORM.
So what is this crawl.py of which I speak? It's a simple script I wrote today that scrapes a URL and recovers more URLs, then recursively scrapes each unique one until it runs out of... you guessed it, URLs. It writes all of the found URLs to a file, then prints the number of recovered URLs and the total visited.
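The idea above can be sketched in a few lines. This is not the actual crawl.py, just a minimal offline reconstruction of the technique: a queue of unseen URLs drained one at a time, with the page fetcher injected as a callable so the example runs without a network (a real run would wrap `urllib.request.urlopen`).

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkParser(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch):
    """Visit each unique URL exactly once and return the set of URLs seen.

    Uses an explicit queue rather than recursion, so a huge site can't
    blow Python's recursion limit. `fetch(url)` must return HTML text.
    """
    seen = {start_url}
    queue = [start_url]
    while queue:
        url = queue.pop(0)
        try:
            html = fetch(url)
        except Exception:
            continue  # unreachable page: skip it, keep crawling
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)  # resolve relative links
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return seen

# Offline demo: a fake three-page site served from a dict.
pages = {
    "http://example.com/":  '<a href="/a">a</a> <a href="/b">b</a>',
    "http://example.com/a": '<a href="/">home</a>',
    "http://example.com/b": '<a href="/a">a</a>',
}
found = crawl("http://example.com/", pages.__getitem__)
```

The duplicate check happens at enqueue time, so pages that link to each other (like `/` and `/a` above) don't cause an endless loop.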
I started off with the intention of making a script that scans for parameter-passing URLs, to automate finding SQL-injection-vulnerable URLs, but got lazy and decided I'd done enough for today. It will run for a long time if the site is huge; there is a clean exit on Ctrl+C, and the totals are still printed and stored in your specified file.
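The clean-exit behaviour boils down to one try/except. A rough sketch of how crawl.py might be shaped (the function and file names here are assumptions, and the crawl loop body is elided):

```python
def run(start_url, out_path):
    """Crawl until the queue empties or the user hits Ctrl+C.

    Either way, the recovered URLs are written to out_path and the
    totals are printed, so an interrupted run still produces output.
    """
    seen = {start_url}
    queue = [start_url]
    visited = 0
    try:
        while queue:
            url = queue.pop(0)
            visited += 1
            # ... fetch `url`, extract its links, enqueue unseen ones ...
    except KeyboardInterrupt:
        pass  # Ctrl+C: stop fetching, but fall through to the summary
    with open(out_path, "w") as f:
        f.write("\n".join(sorted(seen)) + "\n")
    print(f"{len(seen)} urls recovered, {visited} visited")
    return len(seen), visited
```

Catching `KeyboardInterrupt` instead of letting it propagate is what turns Ctrl+C into a "stop early" signal rather than a crash that loses the results.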
Here is a link to Crawl.txt. This "software", or code, is free to use in any way you please.
Some help! Use -h for help options.
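If you're wondering where that -h comes from: Python's `argparse` generates it automatically from the declared options. A hypothetical reconstruction of crawl.py's option parsing; the real flag names are unknown, so these are illustrative only:

```python
import argparse

# Illustrative only: the actual flags in crawl.py may differ.
parser = argparse.ArgumentParser(
    description="Recursively scrape a site and collect every url it links to"
)
parser.add_argument("url", help="start url to crawl")
parser.add_argument("-o", "--output", default="urls.txt",
                    help="file to write recovered urls to (assumed flag)")

# Parsing an explicit list instead of sys.argv, so this runs anywhere:
args = parser.parse_args(["http://example.com/", "-o", "found.txt"])
```

Running the script with -h would print a usage message built from the `help=` strings above, for free.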
Now that I am all settled in at my new domain, expect my blog posts to come more regularly until this becomes a Twitter substitute... Maybe I should add a Twitter feed over there --->
In all honesty though, social media whoring is for pricks and cunts, isn't it? Thank you for reading... <3