Passive Information Gathering in Python

2021-06-05

Overview:

Passive information gathering extracts information about a target's assets mainly through search engines, social networks, and similar sources. It typically includes IP lookups, Whois lookups, subdomain collection, and so on. Because it involves no interaction with the target, information can be mined without ever touching the target system.

Main techniques: DNS resolution, subdomain mining, email harvesting, and so on.

DNS Resolution:

1. Overview:

DNS (Domain Name System) is a distributed directory service for the network. Its main job is translating between domain names and IP addresses, so that users can reach sites conveniently without memorising the long numeric IP addresses that machines read directly.
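
Besides plain A records, other record types (NS, MX, TXT and so on) are also worth collecting during reconnaissance. A minimal sketch, assuming the third-party dnspython package is installed (pip install dnspython; dns.resolver.resolve requires version 2.x) and using baidu.com purely as an illustrative target:

import dns.resolver  # third-party package: dnspython

# query a few record types for the target domain
for rtype in ("A", "NS", "MX"):
    try:
        answers = dns.resolver.resolve("baidu.com", rtype)
    except Exception:
        continue
    for rdata in answers:
        print(rtype, rdata.to_text())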

2. IP lookup:

An IP lookup resolves the IP address behind a given URL or hostname. The gethostbyname() function in the socket module returns the IP address for a domain name.

Code:

import socket

ip = socket.gethostbyname('www.baidu.com')
print(ip)

Output:

39.156.66.14
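
gethostbyname() returns only one address. When a domain resolves to several IPs, socket.gethostbyname_ex() returns all of them; a minimal sketch:

import socket

# returns a (canonical_name, alias_list, ip_list) tuple
name, aliases, ips = socket.gethostbyname_ex('www.baidu.com')
print(name, ips)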

3. Whois lookup:

Whois is a protocol for querying a domain's IP address and ownership information. In effect, Whois acts as a database for checking whether a domain has already been registered and for retrieving the registration details (such as the domain owner and the registrar).

The python-whois module can be used to perform Whois lookups in Python.

Code:

from whois import whois

data = whois('www.baidu.com')
print(data)

Output:

E:\python\python.exe "H:/code/Python Security/Day01/Whois查询.py"
{
  "domain_name": [
    "BAIDU.COM",
    "baidu.com"
  ],
  "registrar": "MarkMonitor, Inc.",
  "whois_server": "whois.markmonitor.com",
  "referral_url": null,
  "updated_date": [
    "2020-12-09 04:04:41",
    "2021-04-07 12:52:21"
  ],
  "creation_date": [
    "1999-10-11 11:05:17",
    "1999-10-11 04:05:17"
  ],
  "expiration_date": [
    "2026-10-11 11:05:17",
    "2026-10-11 00:00:00"
  ],
  "name_servers": [
    "NS1.BAIDU.COM",
    "NS2.BAIDU.COM",
    "NS3.BAIDU.COM",
    "NS4.BAIDU.COM",
    "NS7.BAIDU.COM",
    "ns3.baidu.com",
    "ns2.baidu.com",
    "ns7.baidu.com",
    "ns1.baidu.com",
    "ns4.baidu.com"
  ],
  "status": [
    "clientDeleteProhibited https://icann.org/epp#clientDeleteProhibited",
    "clientTransferProhibited https://icann.org/epp#clientTransferProhibited",
    "clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited",
    "serverDeleteProhibited https://icann.org/epp#serverDeleteProhibited",
    "serverTransferProhibited https://icann.org/epp#serverTransferProhibited",
    "serverUpdateProhibited https://icann.org/epp#serverUpdateProhibited",
    "clientUpdateProhibited (https://www.icann.org/epp#clientUpdateProhibited)",
    "clientTransferProhibited (https://www.icann.org/epp#clientTransferProhibited)",
    "clientDeleteProhibited (https://www.icann.org/epp#clientDeleteProhibited)",
    "serverUpdateProhibited (https://www.icann.org/epp#serverUpdateProhibited)",
    "serverTransferProhibited (https://www.icann.org/epp#serverTransferProhibited)",
    "serverDeleteProhibited (https://www.icann.org/epp#serverDeleteProhibited)"
  ],
  "emails": [
    "abusecomplaints@markmonitor.com",
    "whoisrequest@markmonitor.com"
  ],
  "dnssec": "unsigned",
  "name": null,
  "org": "Beijing Baidu Netcom Science Technology Co., Ltd.",
  "address": null,
  "city": null,
  "state": "Beijing",
  "zipcode": null,
  "country": "CN"
}

Process finished with exit code 0
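
The object returned by python-whois can also be read field by field rather than dumped whole. A minimal sketch, assuming attribute access as documented for the python-whois package (the field names match the output above):

from whois import whois

data = whois('www.baidu.com')
# pull out individual fields instead of printing the whole record
print(data.registrar)
print(data.name_servers)
print(data.emails)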

Subdomain Mining:

1. Overview:

Domain names are organised in levels: top-level domains, first-level (primary) domains, second-level domains, and so on.

A subdomain is one level below its parent domain (the primary or parent domain).

During testing, if no vulnerabilities are found on the target's main site, the usual next step is to dig into the target system's subdomains.

There are several ways to mine subdomains, for example search engines, subdomain brute forcing, and dictionary lookups; a dictionary-based sketch follows below.
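
As a contrast to the search-engine approach used in the tool below, a dictionary-based sketch simply resolves candidate names and keeps the ones that answer. Note that this sends DNS queries, so it is not purely passive; the wordlist and target here are illustrative:

import socket

# illustrative wordlist; in practice this would come from a dictionary file
words = ['www', 'mail', 'ftp', 'dev', 'test']
domain = 'baidu.com'

for word in words:
    candidate = word + '.' + domain
    try:
        ip = socket.gethostbyname(candidate)
        print(candidate, ip)
    except socket.gaierror:
        # the name does not resolve, skip it
        pass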

2. Writing a simple subdomain mining tool in Python:

(using https://cn.bing.com/ as the search engine)

Code:

# coding=gbk
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse
import sys


def Bing_Search(site, pages):
    Subdomain = []
    # store discovered subdomains in a list
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Referer': 'https://cn.bing.com/',
        'Cookie': 'MUID=37FA745F1005602C21A27BB3117A61A3; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=DA7BDD699AFB4AEB8C68A0B4741EFA74&dmnchg=1; MUIDB=37FA745F1005602C21A27BB3117A61A3; ULC=P=9FD9|1:1&H=9FD9|1:1&T=9FD9|1:1; PPLState=1; ANON=A=CEC39B849DEE39838493AF96FFFFFFFF&E=1943&W=1; NAP=V=1.9&E=18e9&C=B8-HXGvKTE_2lQJ0I3OvbJcIE8caEa9H4f3XNrd3z07nnV3pAxmVJQ&W=1; _tarLang=default=en; _TTSS_IN=hist=WyJ6aC1IYW5zIiwiYXV0by1kZXRlY3QiXQ==; _TTSS_OUT=hist=WyJlbiJd; ABDEF=V=13&ABDV=13&MRB=1618913572156&MRNB=0; KievRPSSecAuth=FABSARRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACPyKw8I/CYhDEAFiUHPfZQSWnp%2BMm43NyhmcUtEqcGeHpvygEOz6CPQIUrTCcE3VESTgWkhXpYVdYAKRL5u5EH0y3%2BmSTi5KxbOq5zlLxOf61W19jGuTQGjb3TZhsv5Wb58a2I8NBTwIh/cFFvuyqDM11s7xnw/ZZoqc9tNuD8ZG9Hi29RgIeOdoSL/Kzz5Lwb/cfSW6GbawOVtMcToRJr20K0C0zGzLhxA7gYH9CxajTo7w5kRx2/b/QjalnzUh7lvZCNrF5naagj10xHhZyHItlNtjNe3yqqLyLZmgNrzT8o7QWfpJWHqAak4AFt3nY9R0NGLHM6UxPC8ph9hEaYbWtIsY7JNvVYFwbDk6o4oqu33kHeyqW/JTVhQACnpn2v74dZzvk4xRp%2BpcQIoRIzI%3D; _U=1ll1JNraa8gnrWOg3NTDw_PUniDnXYIikDzB-R_hVgutXRRVFcrnaPKxVBXA1w-dBZJsJJNfk6vGHSqJtUsLXvZswsd5A1xFvQ_V_nUInstIfDUs7q7FyY2DmvDRlfMIqbgdt-KEqazoz-r_TLWScg4_WDNFXRwg6Ga8k2cRyOTfGNkon7kVCJ7IoPDTAdqdP; WLID=kQRArdi2czxUqvURk62VUr88Lu/DLn6bFfcwTmB8EoKbi3UZYvhKiOCdmPbBTs0PQ3jO42l3O5qWZgTY4FNT8j837l8J9jp0NwVh2ytFKZ4=; _EDGE_S=SID=01830E382F4863360B291E1B2E6662C7; SRCHS=PC=ATMM; WLS=C=3d04cfe82d8de394&N=%e5%81%a5; SRCHUSR=DOB=20210319&T=1619277515000&TPC=1619267174000&POEX=W; SNRHOP=I=&TS=; _SS=PC=ATMM&SID=01830E382F4863360B291E1B2E6662C7&bIm=656; ipv6=hit=1619281118251&t=4; SRCHHPGUSR=SRCHLANGV2=zh-Hans&BRW=W&BRH=S&CW=1462&CH=320&DPR=1.25&UTC=480&DM=0&WTS=63754766339&HV=1619277524&BZA=0&TH=ThAb5&NEWWND=1&NRSLT=-1&LSL=0&SRCHLANG=&AS=1&NNT=1&HAP=0&VSRO=0'
    }
    for i in range(1, int(pages)+1):
        url = "https://cn.bing.com/search?q=site%3a" + site + "&go=Search&qs=ds&first=" + str((int(i)-1)*10) + "&FORM=PERE"
        html = requests.get(url, headers=headers)
        soup = BeautifulSoup(html.content, 'html.parser')
        job_bt = soup.findAll('h2')
        # each result title sits in an <h2> tag; pull the link out of it
        for result in job_bt:
            link = result.a.get('href')
            domain = str(urlparse(link).scheme + "://" + urlparse(link).netloc)
            if domain in Subdomain:
                pass
            else:
                Subdomain.append(domain)
                print(domain)


if __name__ == '__main__':
    if len(sys.argv) == 3:
        site = sys.argv[1]
        page = sys.argv[2]
    else:
        print("usage: %s baidu.com 10" % sys.argv[0])
        # print usage information
        sys.exit(-1)
    Subdomain = Bing_Search(site, page)
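
A hypothetical invocation (the script file name is illustrative):

python subdomain_bing.py baidu.com 15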

Output:

(The discovered subdomains are shown as a screenshot in the original post.)

Email Harvesting:

1. Overview:

When penetrating a target system, if the target server is well hardened and it is hard to gain a foothold through the server itself, social engineering is commonly used to push the attack further.

After crawling and processing the email addresses that appear in search results, the harvested mailboxes can be used to send phishing emails in bulk, tricking target users or administrators into logging in or clicking a payload and thereby giving up access to the target system.

The libraries used by this email harvesting tool are as follows:

import sys
import getopt
import requests
from bs4 import BeautifulSoup
import re

2. Walkthrough:

①: At the program entry point, start() is called inside a try block, so that any exception raised while it runs (for example a keyboard interrupt) is caught and reported.

External arguments are received through sys.argv[]. sys.argv[0] is the path of the script itself, while sys.argv[1:] is everything from the first command-line argument to the last, stored as a list.

The code is as follows:

if __name__ == '__main__':
    # catch exceptions raised while running
    try:
        start(sys.argv[1:])
    except:
        print("interrupted by user, killing all threads ... ")

②: Implement the command-line argument handling. getopt.getopt() is used here; it supports two option formats, short and long.

A short option is a single dash "-" followed by a single letter;

a long option is a double dash "--" followed by a whole word.

opts is a list of two-element tuples, each of the form (option string, argument); when an option takes no argument, the second element is an empty string. A for loop then walks over opts and assigns the values to the corresponding variables.

The code is as follows:

def start(argv):
    url = ""
    pages = ""
    if len(sys.argv) < 2:
        print("-h for help\n")
        sys.exit()
    # exception handling around argument parsing
    try:
        banner()
        opts, args = getopt.getopt(argv, "u:p:h")
    except:
        print('Error: invalid argument')
        sys.exit()
    for opt, arg in opts:
        if opt == "-u":
            url = arg
        elif opt == "-p":
            pages = arg
        elif opt == "-h":
            usage()
    launcher(url, pages)
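
To make the shape of opts concrete, here is a small standalone example of what getopt.getopt() returns for the options used above (the argument values are illustrative):

import getopt

opts, args = getopt.getopt(['-u', 'baidu.com', '-p', '10'], "u:p:h")
print(opts)   # [('-u', 'baidu.com'), ('-p', '10')]
print(args)   # []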

③: Print help information to make the tool more readable and easier to use. To make the output cleaner and nicer to look at, ANSI escape sequences can be used to set the colour of the printed text.

The opening escape sequence takes three parameters: display mode, foreground colour, and background colour. All three are optional and any one of them may be given on its own. The closing sequence can be omitted, but for tidy code it is recommended to end with "\033[0m".

The code is as follows:

print('\033[0;30;41m 3cH0 - Nu1L \033[0m')
print('\033[0;30;42m 3cH0 - Nu1L \033[0m')
print('\033[0;30;43m 3cH0 - Nu1L \033[0m')
print('\033[0;30;44m 3cH0 - Nu1L \033[0m')
# banner
def banner():
    print('\033[1;34m ################################ \033[0m\n')
    print('\033[1;34m 3cH0 - Nu1L \033[0m\n')
    print('\033[1;34m ################################ \033[0m\n')
# usage
def usage():
    print('-h: --help   help;')
    print('-u: --url    target domain;')
    print('-p: --pages  number of pages;')
    print('eg: python -u "www.baidu.com" -p 100' + '\n')
    sys.exit()

④: Decide on the keywords to search for, then call bing_search() and baidu_search() to get results from the Bing and Baidu search engines. The two result lists are merged, de-duplicated, and printed one entry at a time.

The code is as follows:

# launcher: iterate over pages and keywords, de-duplicate and save results
def launcher(url, pages):
    email_num = []
    key_words = ['email', 'mail', 'mailbox', '邮件', '邮箱', 'postbox']
    for page in range(1, int(pages)+1):
        for key_word in key_words:
            bing_emails = bing_search(url, page, key_word)
            baidu_emails = baidu_search(url, page, key_word)
            sum_emails = bing_emails + baidu_emails
            for email in sum_emails:
                if email in email_num:
                    pass
                else:
                    print(email)
                    with open('data.txt', 'a+') as f:
                        f.write(email + '\n')
                    email_num.append(email)

⑤: Crawl emails via the Bing search engine. Bing has anti-crawling protections and checks information such as the Referer and Cookie headers to decide whether a request is an automated scrape.

By supplying a fixed Referer and letting requests.session() pick up cookies automatically, Bing's anti-crawling checks can be bypassed.

The code is as follows:

# Bing_search
def bing_search(url, page, key_word):
    referer = "http://cn.bing.com/search?q=email+site%3abaidu.com&sp=-1&pq=emailsite%3abaidu.com&first=1&FORM=PERE1"
    conn = requests.session()
    bing_url = "http://cn.bing.com/search?q=" + key_word + "+site%3a" + url + "&qa=n&sp=-1&pq=" + key_word + "site%3a" + url + "&first=" + str((page-1)*10) + "&FORM=PERE1"
    conn.get('http://cn.bing.com', headers=headers(referer))
    r = conn.get(bing_url, stream=True, headers=headers(referer), timeout=8)
    emails = search_email(r.text)
    return emails
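
The point of requests.session() here is that cookies set by the first request are stored and sent automatically on every later request through the same session; a minimal sketch:

import requests

s = requests.session()
s.get('http://cn.bing.com')        # the first response sets session cookies
print(s.cookies.get_dict())        # these cookies are re-sent automatically afterwards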

⑥: Crawl emails via the Baidu search engine. Baidu also has anti-crawling protections; compared with Bing, Baidu not only checks the Referer and Cookie but also wraps each result in a redirect link that is resolved dynamically with JavaScript, so the target pages cannot be read straight out of the results page.

By extracting each result link and then issuing a separate request to it, these protections can be worked around.

The code is as follows:

# Baidu_search
def baidu_search(url, page, key_word):
    email_list = []
    emails = []
    referer = "https://www.baidu.com/s?wd=email+site%3Abaidu.com&pn=1"
    baidu_url = "https://www.baidu.com/s?wd=" + key_word + "+site%3A" + url + "&pn=" + str((page-1)*10)
    conn = requests.session()
    conn.get(baidu_url, headers=headers(referer))
    r = conn.get(baidu_url, headers=headers(referer))
    soup = BeautifulSoup(r.text, 'lxml')
    tagh3 = soup.find_all('h3')
    for h3 in tagh3:
        href = h3.find('a').get('href')
        try:
            r = requests.get(href, headers=headers(referer))
            emails = search_email(r.text)
        except Exception as e:
            pass
        for email in emails:
            email_list.append(email)
    return email_list

⑦: Extract email addresses with a regular expression. The pattern here can also be swapped for one that matches the target organisation's mailbox format.

The code is as follows:

# search_email
def search_email(html):
    emails = re.findall(r"[a-z0-9\.\-+_]+@[a-z0-9\.\-+_]+\.[a-z]+", html, re.I)
    return emails
# headers(referer)
def headers(referer):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36',
        'Accept': 'application/json, text/javascript, */*; q=0.01',
        'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Referer': referer
    }
    return headers
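
To check the pattern in isolation, a quick test against a made-up string (the addresses are illustrative):

import re

sample = "Contact us at support@example.com or Admin@Example.COM."
pattern = r"[a-z0-9\.\-+_]+@[a-z0-9\.\-+_]+\.[a-z]+"
print(re.findall(pattern, sample, re.I))
# ['support@example.com', 'Admin@Example.COM']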

3. Complete code:

# coding=gbk
import sys
import getopt
import requests
from bs4 import BeautifulSoup
import re


# entry function: handle the arguments supplied by the user
def start(argv):
    url = ""
    pages = ""
    if len(sys.argv) < 2:
        print("-h for help\n")
        sys.exit()
    # exception handling around argument parsing
    try:
        banner()
        opts, args = getopt.getopt(argv, "u:p:h")
    except:
        print('Error: invalid argument')
        sys.exit()
    for opt, arg in opts:
        if opt == "-u":
            url = arg
        elif opt == "-p":
            pages = arg
        elif opt == "-h":
            usage()
    launcher(url, pages)


# banner
def banner():
    print('\033[1;34m ################################ \033[0m\n')
    print('\033[1;34m 3cH0 - Nu1L \033[0m\n')
    print('\033[1;34m ################################ \033[0m\n')


# usage
def usage():
    print('-h: --help   help;')
    print('-u: --url    target domain;')
    print('-p: --pages  number of pages;')
    print('eg: python -u "www.baidu.com" -p 100' + '\n')
    sys.exit()


# launcher: iterate over pages and keywords, de-duplicate and save results
def launcher(url, pages):
    email_num = []
    key_words = ['email', 'mail', 'mailbox', '邮件', '邮箱', 'postbox']
    for page in range(1, int(pages)+1):
        for key_word in key_words:
            bing_emails = bing_search(url, page, key_word)
            baidu_emails = baidu_search(url, page, key_word)
            sum_emails = bing_emails + baidu_emails
            for email in sum_emails:
                if email in email_num:
                    pass
                else:
                    print(email)
                    with open('data.txt', 'a+') as f:
                        f.write(email + '\n')
                    email_num.append(email)


# Bing_search
def bing_search(url, page, key_word):
    referer = "http://cn.bing.com/search?q=email+site%3abaidu.com&sp=-1&pq=emailsite%3abaidu.com&first=1&FORM=PERE1"
    conn = requests.session()
    bing_url = "http://cn.bing.com/search?q=" + key_word + "+site%3a" + url + "&qa=n&sp=-1&pq=" + key_word + "site%3a" + url + "&first=" + str((page-1)*10) + "&FORM=PERE1"
    conn.get('http://cn.bing.com', headers=headers(referer))
    r = conn.get(bing_url, stream=True, headers=headers(referer), timeout=8)
    emails = search_email(r.text)
    return emails


# Baidu_search
def baidu_search(url, page, key_word):
    email_list = []
    emails = []
    referer = "https://www.baidu.com/s?wd=email+site%3Abaidu.com&pn=1"
    baidu_url = "https://www.baidu.com/s?wd=" + key_word + "+site%3A" + url + "&pn=" + str((page-1)*10)
    conn = requests.session()
    conn.get(baidu_url, headers=headers(referer))
    r = conn.get(baidu_url, headers=headers(referer))
    soup = BeautifulSoup(r.text, 'lxml')
    tagh3 = soup.find_all('h3')
    for h3 in tagh3:
        href = h3.find('a').get('href')
        try:
            r = requests.get(href, headers=headers(referer))
            emails = search_email(r.text)
        except Exception as e:
            pass
        for email in emails:
            email_list.append(email)
    return email_list


# search_email
def search_email(html):
    emails = re.findall(r"[a-z0-9\.\-+_]+@[a-z0-9\.\-+_]+\.[a-z]+", html, re.I)
    return emails


# headers(referer)
def headers(referer):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36',
        'Accept': 'application/json, text/javascript, */*; q=0.01',
        'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Referer': referer
    }
    return headers


if __name__ == '__main__':
    # catch exceptions raised while running
    try:
        start(sys.argv[1:])
    except:
        print("interrupted by user, killing all threads ... ")

That concludes this walkthrough of passive information gathering in Python.
