Wednesday, January 15, 2014

6:03 AM
Good day!

This is something that could loosely be called a web crawler.

A Web crawler starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
The large volume implies that the crawler can only download a limited number of Web pages within a given time, so it needs to prioritize its downloads. The high rate of change implies that, by the time the crawler gets to them, pages might have already been updated or even deleted.
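
As a rough illustration of the frontier idea above, here is a minimal sketch: a FIFO queue of URLs to visit plus a visited set. The seed list, the fetch_links helper, and the page limit are hypothetical placeholders, not part of the script below.

# Minimal sketch of a crawl frontier: a FIFO queue seeded with start URLs
# and a visited set to avoid re-crawling. fetch_links() stands in for
# whatever page-fetching/link-extraction routine you actually use.
from collections import deque

def crawl(seeds, fetch_links, max_pages=100):
    frontier = deque(seeds)   # URLs still to visit (the crawl frontier)
    visited = set()           # URLs already fetched
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        for link in fetch_links(url):      # hyperlinks found on the page
            if link not in visited:
                frontier.append(link)      # grow the frontier
    return visited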
The number of possible URLs generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer a few options to users, specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 48 different URLs, all of which may be linked on the site. This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
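
To make the arithmetic concrete, here is a small sketch (the gallery parameter names and values are made up for illustration) that enumerates those combinations: 4 sort orders x 3 thumbnail sizes x 2 file formats x 2 user-content settings = 48 distinct URLs for the same content.

# Hypothetical gallery parameters from the example above: every combination
# of these GET parameters yields a different URL for the same content.
from itertools import product

sorts = ['date', 'name', 'size', 'rating']     # 4 ways to sort
thumbs = ['small', 'medium', 'large']          # 3 thumbnail sizes
formats = ['jpg', 'png']                       # 2 file formats
user_content = ['on', 'off']                   # user-provided content toggle

urls = ['/gallery?sort=%s&thumb=%s&fmt=%s&user=%s' % combo
        for combo in product(sorts, thumbs, formats, user_content)]
print(len(urls))   # 48 URLs, all pointing at essentially the same gallery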
As Edwards et al. noted, "Given that the bandwidth for conducting crawls is neither infinite nor free, it is becoming essential to crawl the Web in not only a scalable, but efficient way, if some reasonable measure of quality or freshness is to be maintained."
Here, I just want to share my dictionary-based page finder written in Python. You can use it for pentesting your own pages, crawling for malicious backdoor shells possibly uploaded by defacers or crackers, and also crawling for admin pages.

Here's a screenshot showing how to use pyshadmincrawler.py:
Download along with dictionary file: http://www.4shared.com/archive/v-8hZC6mce/pyshadmintar.html
View code snippet:

import sys
import httplib
import os.path
import socket
import re
from threading import Thread
from time import sleep
def usage():
    print('\n\t:: [PH] Index Python Shell and Admin Web Crawler ::')
    print('Usage:')
    print('python pyshadmincrawler.py -t [HOST] -e [EXTENSION] -w [WORDLIST]')
    print('Mode: ')
    print(' -t = The target site or host address')
    print(' -e = Extension of the pages, e.g. asp,php,aspx,cgi, separated by commas')
    print(' -w = The dictionary/wordlist file')
    print('Example: \n$ python pyshadmincrawler.py -t targetsite.com -e asp,php,cgi -w shadmin-dict.txt\n')
    return
def isOK(host, page):
    # Send a HEAD request and treat HTTP 200 (OK) as "the page exists".
    conn = httplib.HTTPConnection(host, 80)
    try:
        conn.request('HEAD', '/%s' % (page))
        r = conn.getresponse()
        return r.status == 200
    except (socket.gaierror, socket.error):
        return False
    finally:
        conn.close()
def validateArgs(host, ext, wlist):
    if not isOK(host, ''):
        print 'Server/Host: %s is not up!' % (host)
        return False
    if not re.match('^[a-z,]+$', ext):
        print 'Extensions must be comma-separated, e.g. asp,php,cgi,html'
        return False
    if not os.path.isfile(wlist):
        print 'Dictionary file \'%s\' not found in the current directory' % (wlist)
        return False
    return True
def getPage():
    # Return the next candidate page from the wordlist, or None when exhausted.
    global m_curr_row
    if m_curr_row > m_total_rows:
        return None
    page = contents[m_curr_row]
    m_curr_row += 1
    return page
def scanpage(url):
    # Record the page if the server answers 200 for it.
    if isOK(host, url):
        found_pages.append(url)

def clearscreen():
    os.system('cls' if os.name == 'nt' else 'clear')
def run():
    # Walk through the wordlist, scanning one candidate page at a time.
    u = getPage()
    while u is not None:
        if len(u) > 0:
            t = Thread(target=scanpage, args=(u,))
            t.start()
            clearscreen()
            print 'Scanning %s (%d/%d)' % (u, m_curr_row, m_total_rows + 1)
            t.join()
        u = getPage()
if __name__ == '__main__':
    if len(sys.argv) == 7:
        mode = [sys.argv[1], sys.argv[3], sys.argv[-2]]
        host = str(sys.argv[2]).replace('http://', '')
        ext = sys.argv[4].lower()
        wordlist = sys.argv[-1]
        if mode[0] == '-t' and mode[1] == '-e' and mode[-1] == '-w':
            if validateArgs(host, ext, wordlist):
                extensions = ext.split(',')
                contents = []
                found_pages = []
                try:
                    with open(wordlist, 'r') as f:
                        for line in f:
                            line = line.strip()
                            if not line:
                                continue
                            if line[-1] == '/':
                                # Directory entry: probe it as-is.
                                contents.append(line)
                            else:
                                # Plain name: probe it once per extension.
                                for e in extensions:
                                    contents.append(line + '.' + e)
                except IOError:
                    print 'Error reading file. Please check if the file is readable/corrupt.'
                    exit()
                m_curr_row = 0
                m_total_rows = len(contents) - 1
                mainthread = Thread(target=run)
                mainthread.start()
                mainthread.join()
                clearscreen()
                print '\nFinished Scanning!\n'
                if len(found_pages) > 0:
                    for p in set(found_pages):
                        print 'Found page: %s/%s' % (host, p)
                else:
                    print 'No pages found!'
                print '\n###############GAME OVER###############\n\n'
            else:
                exit()
        else:
            usage()
    else:
        usage()
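
For reference, this is how the wordlist expansion behaves: entries ending in '/' are treated as directories and probed as-is, while everything else is tried once per extension given with -e. The three dictionary entries below are just a made-up example, not from the bundled shadmin-dict.txt.

# Hypothetical wordlist entries and how the script expands them with -e asp,php
entries = ['admin', 'shell', 'uploads/']
extensions = ['asp', 'php']

candidates = []
for line in entries:
    if line.endswith('/'):
        candidates.append(line)                 # directory: probe as-is
    else:
        for e in extensions:
            candidates.append(line + '.' + e)   # name + each extension

print(candidates)
# ['admin.asp', 'admin.php', 'shell.asp', 'shell.php', 'uploads/']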
