CVE-2016-6187: Exploiting Linux kernel heap off-by-one

by Vitaly Nikolenko
  Posted on October 16, 2016 at 8:38 PM
  Introduction

   I guess the reason I decided to write about this vulnerability is that when I posted it on Twitter, I received a few DMs saying that either this kernel patch wasn't vulnerable (i.e., people couldn't see where the off-by-one was) or that it wasn't exploitable. The other reason is that I wanted to try the userfaultfd() syscall in practice, and I needed a real UAF to play with.
   First, I don't know if this vulnerability made it into the stock kernels of any major distributions. I've only checked the Ubuntu line, and Yakkety wasn't affected. But hey, backports happen quite often :]. The bug was introduced by commit bb646cdb12e75d82258c2f2e7746d5952d3e321a and fixed in 30a46a4647fd1df9cf52e43bf467f0d9265096ca .
  Since I couldn't find a vulnerable Ubuntu kernel, I compiled the 4.5.1 kernel on Ubuntu 16.04 (x86_64). It's worth mentioning that this vulnerability only affects distributions that use AppArmor by default (such as Ubuntu).
  Vulnerability

   Writing into /proc/self/attr/current triggers the proc_pid_attr_write() function. The following is the code before the vulnerability was introduced:
  [code]static ssize_t proc_pid_attr_write(struct file * file, const char __user * buf,
                                   size_t count, loff_t *ppos)
{
        struct inode * inode = file_inode(file);
        char *page;
        ssize_t length;
        struct task_struct *task = get_proc_task(inode);

        length = -ESRCH;
        if (!task)
                goto out_no_task;
        if (count > PAGE_SIZE)                            [1]
                count = PAGE_SIZE;

        /* No partial writes. */
        length = -EINVAL;
        if (*ppos != 0)
                goto out;

        length = -ENOMEM;
        page = (char*)__get_free_page(GFP_TEMPORARY);     [2]
        if (!page)
                goto out;

        length = -EFAULT;
        if (copy_from_user(page, buf, count))             [3]
                goto out_free;

        /* Guard against adverse ptrace interaction */
        length = mutex_lock_interruptible(&task->signal->cred_guard_mutex);
        if (length < 0)
                goto out_free;

        length = security_setprocattr(task,
                                      (char*)file->f_path.dentry->d_name.name,
                                      (void*)page, count);

...[/code]   The buf parameter represents the user-supplied buffer (of length count ) that's being written to /proc/self/attr/current . In [1], a check ensures that this buffer fits into a single page (4096 bytes by default). In [2] and [3], a single page is allocated and the user-space buffer is copied into the newly allocated page. This page is then passed to security_setprocattr() , which represents the LSM hook (AppArmor, SELinux, Smack). In the case of Ubuntu, this hook triggers the apparmor_setprocattr() function shown below:
  [code]static int apparmor_setprocattr(struct task_struct *task, char *name,
                                void *value, size_t size)
{
        struct common_audit_data sa;
        struct apparmor_audit_data aad = {0,};
        char *command, *args = value;
        size_t arg_size;
        int error;

        if (size == 0)
                return -EINVAL;
        /* args points to a PAGE_SIZE buffer, AppArmor requires that
         * the buffer must be null terminated or have size <= PAGE_SIZE -1
         * so that AppArmor can null terminate them
         */
        if (args[size - 1] != '\0') {                     [4]
                if (size == PAGE_SIZE)
                        return -EINVAL;
                args[size] = '\0';
        }
...[/code]   In [4], if the last byte of the user-supplied buffer is not null and the size of the buffer is less than the page size, the buffer is terminated with a null written at args[size] . If, on the other hand, a non-null-terminated buffer exactly fills the single page allocated in [2], the write is rejected and -EINVAL is returned.
   The following shows the change (in [3]) to proc_pid_attr_write() after the vulnerability was introduced:
  [code]static ssize_t proc_pid_attr_write(struct file * file, const char __user * buf,
                                   size_t count, loff_t *ppos)
{
        struct inode * inode = file_inode(file);
        void *page;
        ssize_t length;
        struct task_struct *task = get_proc_task(inode);

        length = -ESRCH;
        if (!task)
                goto out_no_task;
        if (count > PAGE_SIZE)
                count = PAGE_SIZE;

        /* No partial writes. */
        length = -EINVAL;
        if (*ppos != 0)
                goto out;

        page = memdup_user(buf, count);                   [5]
        if (IS_ERR(page)) {
                length = PTR_ERR(page);
                goto out;
        }

        /* Guard against adverse ptrace interaction */
        length = mutex_lock_interruptible(&task->signal->cred_guard_mutex);
        if (length < 0)
                goto out_free;

        length = security_setprocattr(task,
                                      (char*)file->f_path.dentry->d_name.name,
                                      page, count);
...[/code]   Unlike __get_free_page() , memdup_user() allocates a block of memory whose size is given by the count parameter and copies the user-supplied data into it. Hence, the size of the allocated object is no longer a fixed 4096 bytes (even though that's still the maximum buffer size). Let's assume the user-supplied data is 128 bytes in size and the last byte of this buffer is not null. When apparmor_setprocattr() is triggered, args[128] will be set to 0, because the check is still against PAGE_SIZE and not the actual size of the buffer:
  [code]if (args[size - 1] != '\0') {
                if (size == PAGE_SIZE)
                        return -EINVAL;
                args[size] = '\0';
        }[/code]   Since the objects are allocated dynamically on the heap, the first (least-significant) byte of the next object will be overwritten with a null. The standard technique of placing a target object (containing a function pointer as its first member) right after the vulnerable object won't work here. One idea was to overwrite a reference counter in some object (of the same size as the vulnerable object) and then trigger a UAF (thanks to Nicolas Trippar for suggesting this). While on the subject of counter overflows: if you'll be at Ruxcon next week (yeah, not Kiwicon, because this talk was just too lame for their lineup this year :), check out my talk on exploiting counter overflows in the kernel. Object reference counters (represented by the atomic_t type, a signed int) are generally the first member of the struct. Since counter values are typically under 255 for most objects, overwriting the least-significant byte of such a counter would clear it and result in a standard UAF. However, to exploit this vulnerability, I decided to go with a different approach: overwriting SLUB freelist pointers.
  Exploitation

  The nice thing about this vulnerability is that we control the size of the target object. To trigger the vulnerability, the object size should be set to one of the kmalloc cache sizes (i.e., 8, 16, 32, 64, 96, etc.). We won't go into details on how the SLUB allocator (the default kernel memory allocator on Linux) works. All we need to know is that (different) objects of the same size class are accumulated into the same caches, for both general-purpose and special-purpose allocations. Slabs are basically pages in caches that contain objects of the same size. Free objects have a "next free" pointer at offset 0 (by default) pointing to the next free object in the slab.
   The idea is to place our vulnerable object ( A ) next to a free object ( B ) in the same slab and then clear the least-significant byte of the "next free" pointer of object B . When two new objects are then allocated in the same slab, the second one will be allocated over objects A and/or B , depending on the original value of the "next free" pointer:
     
[Figure 1: a new allocation overlapping both objects A and B after the least-significant byte of B's "next free" pointer is cleared]
     The scenario above (overlapping both A and B objects) is just one of the possible outcomes. The "shift" value for the target object is 1 byte (0 to 255) and the final target object's position would depend on the original "next free" pointer value and the object size.
   Assuming that the target object will overlap both objects A and B , we would like to control the contents of both of these objects.
  At a high level, the exploitation procedure is as follows:
  
       
  • Place the vulnerable object A next to free object B in the same slab   
  • Overwrite the least-significant byte of the "next free" pointer in B   
  • Allocate two new objects in the same slab: the first object will be placed in B and the second object will represent our target object C   
  • If we control the contents of objects A and B , we can force object C to be allocated in user space   
  • Assuming object C has a function pointer that can be triggered from user space, set this pointer to our privilege escalation payload in user space or possibly a ROP chain (to bypass SMEP).  
  To perform steps 1-3, sequential object allocations can be achieved using a standard heap exhaustion technique.
   Next, we need to pick the right object size. Objects that are larger than 128 bytes (i.e., kmalloc caches 256, 512, 1024, etc.) won't work here. Let's assume that the start slab address is 0x1000 (note that slab start addresses are aligned to the page size and sequential object allocations are contiguous). The following C program lists the allocations for a single page given the object size:
  [code]// page_align.c
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
        int i;
        char *page_begin = (char *)0x1000;

        for (i = 0; i < 0x1000; i += atoi(argv[1]))
                printf("%p\n", (void *)(page_begin + i));

        return 0;
}
  [/code]  For objects that are 256 bytes (i.e., > 128 and <= 256 bytes), we have the following pattern:
  [code]$ ./align 256
0x1000
0x1100
0x1200
0x1300
0x1400
0x1500
0x1600
0x1700
0x1800
...[/code]  The least-significant byte of every allocation in this slab is 0, so overwriting the "next free" pointer of the adjacent free object with a null has no effect:

[Figure 2: 256-byte slab — every object address ends in 0x00, so the null overwrite is a no-op]
    For the 128-byte cache, there are two possible options:
  [code]$ ./align 128
0x1000
0x1080
0x1100
0x1180
0x1200
0x1280
0x1300
0x1380
0x1400
...[/code]
