
When a Crawler Meets Base64 Encoding (Non-standard Variants)

2025/6/13 18:59:02  Source: https://blog.csdn.net/m0_64408930/article/details/148539338

Characteristics

Start from Base64's core characteristics: its encoding principle (converting binary data into printable ASCII characters) and its alphabet (A-Z, a-z, 0-9, '+', '/', plus '=' for padding). Base64 represents arbitrary binary data using 64 printable ASCII characters; any byte sequence can be expressed as a combination of these characters. The encoded output length is always a multiple of 4; when the input length is not a multiple of 3 bytes, the output is padded with '=' (not zeros) to reach it.
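As a quick illustration of the length rule: every 3 input bytes become one 4-character group, and '=' fills the unused positions of the final group.

```python
import base64

# 3 input bytes -> 4 output chars, no padding;
# 2 bytes -> one '='; 1 byte -> two '='.
for raw in (b"Man", b"Ma", b"M"):
    print(raw, "->", base64.b64encode(raw).decode())
# b'Man' -> TWFu
# b'Ma'  -> TWE=
# b'M'   -> TQ==
```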

1. Standard Base64: encode and decode directly

Python implementation:

import base64

def base64_encode(input_string):
    # Convert the string to bytes
    input_bytes = input_string.encode('utf-8')
    # Base64-encode
    encoded_bytes = base64.b64encode(input_bytes)
    # Convert the result back to a string
    encoded_string = encoded_bytes.decode('utf-8')
    return encoded_string

def base64_decode(encoded_string):
    # Convert the string to bytes
    encoded_bytes = encoded_string.encode('utf-8')
    # Base64-decode
    decoded_bytes = base64.b64decode(encoded_bytes)
    # Convert the result back to a string
    decoded_string = decoded_bytes.decode('utf-8')
    return decoded_string

# Example
input_string = "Hello, World!"
encoded_string = base64_encode(input_string)
print(f"Base64 Encoded: {encoded_string}")

decoded_string = base64_decode(encoded_string)
print(f"Base64 Decoded: {decoded_string}")

When scraping, you sometimes run into non-standard Base64, some of which additionally requires a character mapping. This post summarizes two kinds of non-standard Base64 decoding.

2. Mapped alphabet

Once, while extracting detail pages from a site (a procurement supplier portal), I had already reverse-engineered the pagination encryption. The API endpoint is

https://ebuy.spdb.com.cn/app/noticeManagement/findSupplierCollect

This site makes a good practice target for scraper reverse-engineering.

The response was encrypted, though, and after decrypting it there was still ciphertext. I guessed it was Base64 (it contained '+', '/', and trailing '=').

That remaining ciphertext held the detail content, but decoding it with standard Base64 raised an error.

Digging through the page's JS, the relevant code is at

https://ebuy.spdb.com.cn/assets/Base64-2f4ca03a.js

where I found a custom alphabet mapping.

Create the mapping table:

    def decode_custom_b64(self, enc_str):
        custom_b64 = "RSTUVWXYZaDEFGHIJKLMNOPQklmnopqrstuvwxyzbc45678defghijABC01239+/="
        std_b64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/="
        # Build the translation table
        trans_table = str.maketrans(custom_b64, std_b64)
        # Map to standard Base64
        std_str = enc_str.translate(trans_table)
        # Standard Base64 decode
        decoded_bytes = base64.b64decode(std_str)
        # UTF-8 decode
        return decoded_bytes.decode('utf-8')
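To sanity-check an alphabet like this, you can build the reverse table as well and round-trip a payload. This is a standalone sketch of the same `str.maketrans` trick (outside the class above; the sample payload is made up):

```python
import base64

CUSTOM_B64 = "RSTUVWXYZaDEFGHIJKLMNOPQklmnopqrstuvwxyzbc45678defghijABC01239+/="
STD_B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/="

DEC_TABLE = str.maketrans(CUSTOM_B64, STD_B64)  # custom -> standard
ENC_TABLE = str.maketrans(STD_B64, CUSTOM_B64)  # standard -> custom

def encode_custom(data: bytes) -> str:
    # Encode with the standard alphabet, then remap into the custom one
    return base64.b64encode(data).decode().translate(ENC_TABLE)

def decode_custom(enc_str: str) -> bytes:
    # Remap back to the standard alphabet, then decode normally
    return base64.b64decode(enc_str.translate(DEC_TABLE))

sample = '{"content": "hello"}'.encode('utf-8')
assert decode_custom(encode_custom(sample)) == sample
```

If the round trip fails, the custom alphabet you pulled out of the JS is wrong or incomplete.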

Full example:

import requests
import execjs
import subprocess
import base64
from functools import partial

# Force execjs's Node subprocess to use UTF-8 (avoids encoding errors on Windows)
subprocess.Popen = partial(subprocess.Popen, encoding="utf-8")

def get_js_code():
    return '''
var CryptoJS = require('crypto-js')

function vr(F) {
    const z = CryptoJS.enc.Utf8.parse(F);
    if (z.sigBytes < 16) {
        const d = Q.lib.WordArray.random(16 - z.sigBytes);
        z.concat(d), z.sigBytes = 16
    } else z.sigBytes > 16 && (z.sigBytes = 16, z.words = z.words.slice(0, 4));
    return z
}

function Bx(F, z) {
    let d = z + "39457352";
    return CryptoJS.AES.decrypt(F, vr(d), {
        mode: CryptoJS.mode.ECB,
        padding: CryptoJS.pad.Pkcs7
    }).toString(CryptoJS.enc.Utf8)
}

d = {"page": 1, "rows": 10, "startDate": "", "endDate": "", "noticeStatus": 9, "validFlag": 1, "orderRule": 1}

function iv2(e = 8) {
    for (var t = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z"], n = "", l = 0; l < e; l++) {
        var i = Math.ceil(Math.random() * 35);
        n += t[i]
    }
    return n
}

function Ce(F) {
    let z = {uuid: iv2(16), userId: ""},
        d = "nppszdcfw339457352";
    return {
        visa: CryptoJS.AES.encrypt(JSON.stringify(z), vr(d), {
            mode: CryptoJS.mode.ECB,
            padding: CryptoJS.pad.Pkcs7
        }).toString(),
        params: F
    }
}

f = "/noticeManagement/findPurchaseNotice"

function pe(F, z) {
    return F && true ? false ? lx(F) : Ce(F) : {visa: "", params: F}
}

function get_data(encode, f) {
    txt = JSON.parse(Bx(encode, f))
    return txt
}

function get_headers(d, f) {
    f = "/noticeManagement/findPurchaseNotice"
    return pe(d, f)['visa']
}
'''

headers = {
    "Accept": "application/json, text/plain, */*",
    "Accept-Language": "zh-CN,zh;q=0.9",
    "Authorization": "null",
    "Cache-Control": "no-cache",
    "Connection": "keep-alive",
    "Content-Type": "application/x-www-form-urlencoded;charset:utf-8",
    "Content-Visa": "qsNpoek2BvNXOt353QRp7BsZ6/OchE6IkSFG/UK+nKiXWqgzyVBe3+pZjB7+YsME",
    "Origin": "https://ebuy.spdb.com.cn",
    "Pragma": "no-cache",
    "Referer": "https://ebuy.spdb.com.cn/",
    "Sec-Fetch-Dest": "empty",
    "Sec-Fetch-Mode": "cors",
    "Sec-Fetch-Site": "same-origin",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36",
    "X-CSRF-TOKEN": "ab9dfd31-fec3-4f26-872f-6650269027a1",
    "sec-ch-ua": "\"Google Chrome\";v=\"137\", \"Chromium\";v=\"137\", \"Not/A)Brand\";v=\"24\"",
    "sec-ch-ua-mobile": "?0",
    "sec-ch-ua-platform": "\"Windows\""
}
cookies = {
    "SESSION": "ZjM1NTFkY2MtNjgyZi00NWZhLWIxNTQtNGZlN2FhNjEzNDA3"
}

url = "https://ebuy.spdb.com.cn/app/noticeManagement/findSupplierCollect"
data = {
    "page": "2",
    "rows": "10",
    "noticeType": "00100010",
    "startDate": "",
    "endDate": "",
    "noticeStatus": "9",
    "validFlag": "1",
    "orderRule": "1"
}
response = requests.post(url, headers=headers, cookies=cookies, data=data)

# Refresh the CSRF token, then decrypt both responses via the site's JS
url = "https://ebuy.spdb.com.cn/app/csrf/getToken"
js = execjs.compile(get_js_code())
response1 = requests.get(url, headers=headers)
data = js.call('get_data', response1.json()['data'], response1.headers['content-visa'])
headers['x-csrf-token'] = data['token']

json_obj = js.call('get_data', response.json()['data'], response.headers['content-visa'])

def decode_custom_b64(enc_str):
    custom_b64 = "RSTUVWXYZaDEFGHIJKLMNOPQklmnopqrstuvwxyzbc45678defghijABC01239+/="
    std_b64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/="
    # Build the translation table
    trans_table = str.maketrans(custom_b64, std_b64)
    # Map to standard Base64
    std_str = enc_str.translate(trans_table)
    # Standard Base64 decode
    decoded_bytes = base64.b64decode(std_str)
    # UTF-8 decode
    return decoded_bytes.decode('utf-8')

# Example
for i in json_obj['rows']:
    print(i['content'])
    decoded_string = decode_custom_b64(i['content'])
    print(decoded_string)

The final decoded result is the page's HTML source.

3. Converting to standard Base64

At work I needed to extract file links from

https://scut.gzebid.cn/#/noticeDetail?id=cd7f555d3ce78b3f09027ce07a50f6f6&tenderMode=zb&noticeType=8&categoryId=130000

Packet capture showed the data was encoded.

This data is tricky. First, the string splits on '|' into two Base64 strings, each of which decodes to JSON. Second, the alphabet is URL-safe: '+' has been replaced with '-' and '/' with '_', so those characters must be mapped back. Third, a Base64 string's length should be a multiple of 4, padded with '='; here the padding has been stripped, so any string whose length is not a multiple of 4 must be re-padded before decoding. Quite an odd encoding scheme.

The detailed handling is as follows:

encoded_str2 = encoded_str.replace('-', '+').replace('_', '/').strip()
# Length must be a multiple of 4; pad with '=' if not
padding_needed = len(encoded_str2) % 4
if padding_needed != 0:
    encoded_str2 += '=' * (4 - padding_needed)
try:
    decoded_bytes = base64.b64decode(encoded_str2)
    decoded_str = decoded_bytes.decode('utf-8')
    json_obj = json.loads(decoded_str)
    print(json_obj)
except Exception as e:
    print(e)
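For what it's worth, Python's standard library already knows this URL-safe alphabet: `base64.urlsafe_b64decode` handles the '-'/'_' mapping itself, so only the padding needs restoring. A sketch (the payload here is made up, not the site's real data):

```python
import base64
import json

def decode_urlsafe(enc_str: str) -> str:
    enc_str = enc_str.strip()
    # Restore the stripped '=' padding: -len % 4 yields 0..3 pad chars
    enc_str += "=" * (-len(enc_str) % 4)
    # urlsafe_b64decode maps '-' -> '+' and '_' -> '/' on its own
    return base64.urlsafe_b64decode(enc_str).decode("utf-8")

# Hypothetical payload shaped like the site's 'files' field
payload = base64.urlsafe_b64encode(json.dumps({"name": "a.pdf"}).encode()).decode().rstrip("=")
print(json.loads(decode_urlsafe(payload)))
```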

Full example:

import requests
import json
import base64

headers = {
    "Accept": "application/json, text/plain, */*",
    "Accept-Language": "zh-CN,zh;q=0.9",
    "Cache-Control": "no-cache",
    "Connection": "keep-alive",
    "Pragma": "no-cache",
    "Referer": "https://scut.gzebid.cn/",
    "Sec-Fetch-Dest": "empty",
    "Sec-Fetch-Mode": "cors",
    "Sec-Fetch-Site": "same-origin",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36",
    "sec-ch-ua": "\"Google Chrome\";v=\"137\", \"Chromium\";v=\"137\", \"Not/A)Brand\";v=\"24\"",
    "sec-ch-ua-mobile": "?0",
    "sec-ch-ua-platform": "\"Windows\"",
    "tenantId": "d2c7335ee6754bdbbc1ad0b9b83207b3"
}
cookies = {
    "acw_tc": "0a47308517495382118966853e00691d9c985fa1bc61d123ece31e2b39b71c"
}
url = "https://scut.gzebid.cn/api/bid/share/api/platform/articleNew/cd7f555d3ce78b3f09027ce07a50f6f6/zb/8"
response = requests.get(url, headers=headers, cookies=cookies)
encoded_str = response.json()['data']['articles'][0]['files']
# Multiple files are separated by '|'
encoded_str_list = encoded_str.split('|')
for encoded_str in encoded_str_list:
    # Convert to standard Base64
    encoded_str2 = encoded_str.replace('-', '+').replace('_', '/').strip()
    # Length must be a multiple of 4; pad with '=' if not
    padding_needed = len(encoded_str2) % 4
    if padding_needed != 0:
        encoded_str2 += '=' * (4 - padding_needed)
    try:
        decoded_bytes = base64.b64decode(encoded_str2)
        decoded_str = decoded_bytes.decode('utf-8')
        json_obj = json.loads(decoded_str)
        print(json_obj)
    except Exception as e:
        print(e)

The result:

Links, names, all of the data comes out.
