The current sys_mmap error analysis code doesn't work on 32-bit architectures with a 3G/1G userspace/kernel virtual address space split, since the syscall may allocate anonymous memory above the first 2G of the address space. Such an address is a negative signed integer, so it's misinterpreted as an error code. The problem isn't encountered on x86-64 because it doesn't use negative virtual addresses in userspace.

The 3G/1G split is used because memory allocation is currently broken for other split values on ARM: the value of TASK_UNMAPPED_BASE (arch/arm/include/asm/memory.h) isn't page-aligned for other split values, so the field mm_struct::mmap_base is initialized with a page-unaligned value by the function arch_pick_mmap_layout() (arch/arm/mm/mmap.c) in some circumstances, which breaks page-alignment checks in the kernel memory management code.

This patch modifies the sys_mmap return value analysis code, replacing tests for negativeness of the signed return value with tests checking that the return value isn't greater than TASK_SIZE.

Signed-off-by: Alexander Kartashov <alekskartashov@parallels.com>
Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
3f12d688