
Scraping proxy IPs and simulating site visits with multiple threads to boost traffic [open source]

piaodoo · Programming Tutorials · 2020-02-22 22:09:57 · Python tutorials

Source: the 52pojie (吾爱破解) forum.

This post was last edited by Thending on 2018-7-27 10:45.

Out of boredom I wrote this Python script that scrapes proxy IPs and uses multiple threads to simulate visits to a website, inflating its traffic. There is still plenty of room for improvement, which I won't go into here.
I suggest saving the scraped proxy IPs to disk first; otherwise, if you crawl the proxy-list site too often, it will ban your IP. The libraries used are time, requests, threading, and BeautifulSoup.
Change the URLs in the two visit functions to the site you want to hit; you can also add more page depth yourself.
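The advice above about saving scraped proxies to disk can be sketched as follows. This is my own minimal version, not part of the original script; the file name `proxies.txt` and the helper names are assumptions.

```python
# Sketch: persist scraped proxies so you don't have to re-crawl the
# proxy-list site (and risk an IP ban) on every run.
# File name and function names are assumptions, not from the original post.

def save_proxies(proxies, path='proxies.txt'):
    """Write one ip:port entry per line."""
    with open(path, 'w') as f:
        f.write('\n'.join(proxies))

def load_proxies(path='proxies.txt'):
    """Read proxies back; returns [] if the file does not exist yet."""
    try:
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]
    except FileNotFoundError:
        return []

save_proxies(['221.228.17.172:8181', '1.2.3.4:8080'])
print(load_proxies())  # → ['221.228.17.172:8181', '1.2.3.4:8080']
```

With this in place, the spider functions below only need to run when `load_proxies()` comes back empty.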


#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Date    : 2018-07-27 09:24:40
# @Author  : Huelse (huelse@oini.top)
# @Link    : http://www.oini.top
# @Version : $Id$

import time
import requests
import threading
from bs4 import BeautifulSoup

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'
}
# Example proxy used to fetch the proxy-list pages themselves
proxy = {"http": "221.228.17.172:8181", "https": "221.228.17.172:8181"}
iphttps = []  # proxies scraped from the HTTPS list
iphttp = []   # proxies scraped from the HTTP list
s = 0         # total successful visits (shared across threads)

def IPspider(numpage):
    # Scrape HTTP proxies (ip:port) from the first `numpage` list pages.
    url1 = 'http://www.xicidaili.com/wt/'
    for num in range(1, numpage+1):
        ipurl = url1 + str(num)
        res = requests.get(ipurl, headers=headers, proxies=proxy)
        bs = BeautifulSoup(res.text, 'html.parser')
        for item in bs.find_all('tr'):
            try:
                tds = item.find_all('td')
                iphttp.append(tds[1].text + ':' + tds[2].text)
            except IndexError:
                pass  # header rows have no <td> cells
    return iphttp

def IPspiders(numpage):
    # Scrape HTTPS proxies (ip:port) from the first `numpage` list pages.
    url1 = 'http://www.xicidaili.com/wn/'
    for num in range(1, numpage+1):
        ipurl = url1 + str(num)
        res = requests.get(ipurl, headers=headers, proxies=proxy)
        bs = BeautifulSoup(res.text, 'html.parser')
        for item in bs.find_all('tr'):
            try:
                tds = item.find_all('td')
                iphttps.append(tds[1].text + ':' + tds[2].text)
            except IndexError:
                pass  # header rows have no <td> cells
    return iphttps

def visit(data):
    # Fetch two pages through an HTTP proxy; count successful visits.
    global s
    proxies = {"http": 'http://' + data}
    try:
        requests.get('http://www.oini.top/', headers=headers, proxies=proxies, timeout=2)
        time.sleep(1)
        requests.get('http://www.oini.top/thread-180-1-1.html', headers=headers, proxies=proxies, timeout=2)
        time.sleep(0.5)
        s += 1  # note: not thread-safe; lost updates are possible
        print('Visit #%d' % s)
    except requests.RequestException:
        print('Error')

def visits(data):
    # Fetch two pages through an HTTPS-capable proxy.
    # The proxy must also be registered under the "http" key, since the
    # target URLs use http:// and requests picks the proxy by URL scheme.
    global s
    p = 'http://' + data
    proxies = {"https": p, "http": p}
    try:
        requests.get('http://www.oini.top/', headers=headers, proxies=proxies, timeout=5)
        time.sleep(1)
        requests.get('http://www.oini.top/thread-173-1-1.html', headers=headers, proxies=proxies, timeout=5)
        time.sleep(1)
        s += 1  # note: not thread-safe; lost updates are possible
        print('Visit #%d' % s)
    except requests.RequestException:
        print('Error')

def main():
    threads = []
    n = min(len(ip), len(ips))  # the two lists may differ in length
    for i in range(n):
        threads.append(threading.Thread(target=visit, args=(ip[i],)))
        threads.append(threading.Thread(target=visits, args=(ips[i],)))
    for t in threads:    # start every thread, not just the first n
        t.daemon = True  # setDaemon() is deprecated
        time.sleep(0.1)
        t.start()
    for t in threads:
        t.join(5)

if __name__ == '__main__':
    ip = IPspider(5)
    ips = IPspiders(5)
    while s < 1000:
        main()
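The shared counter `s` in the script is incremented from many threads without synchronization, and `s += 1` is not atomic in Python, so some updates can be lost. A minimal lock-protected counter could look like this; the `Counter` class and its names are my own sketch, not part of the original code.

```python
import threading

class Counter:
    """Thread-safe visit counter: a lock guards the read-modify-write."""
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def increment(self):
        with self._lock:
            self.value += 1
            return self.value

counter = Counter()

def worker():
    # Each worker increments the shared counter 10,000 times.
    for _ in range(10000):
        counter.increment()

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter.value)  # → 80000, with no lost updates
```

In the script above, each `visit`/`visits` call would replace `s += 1` with `counter.increment()`.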


Original post: https://www.piaodoo.com/7737.html