Cyrill Gorcunov authored
Currently we pretend that all threads share one seccomp chain, and at checkpoint time we test the seccomp modes to make sure this assumption holds, refusing to dump otherwise. But the kernel tracks seccomp filter chains per thread, and we have now faced applications (such as java) where per-thread chains are actively used. Thus we need to bring in support for handling filters on a per-thread basis.

In this somewhat intrusive patch the restore engine is lifted up to treat each thread separately. Here is what is done:

 - The core image file is modified to keep seccomp filters inside thread_core_entry. For backward compatibility the former seccomp_mode and seccomp_filter members of task_core_entry are renamed with an old_ prefix, and on restore we test whether we are dealing with old images. Since per-thread dump is not yet implemented, the dumping procedure continues to operate on the old_ members.

 - In the PIE restorer code the memory containing the filters is addressed from inside the thread_restore_args structure, which now carries the seccomp mode itself and the chain attributes (number of filters, etc.). Reading of per-thread data is done in the seccomp_prepare_threads helper: we take one pstree_item and walk over every thread inside it, allocating PIE memory and pinning the data there. Because of PIE specifics, before jumping into PIE code we have to relocate this memory to a new place, and seccomp_rst_reloc serves that purpose. In the restorer itself we check whether thread_restore_args provides an enabled seccomp mode (strict, or filter passed) and call restore_seccomp_filter if needed.

 - To unify names we start using the seccomp_ prefix for everything involved in this change: prepare_seccomp_filters is renamed to seccomp_read_image, because it only reads the image and nothing more, and the image handler is renamed to seccomp_img_entry instead of the too-short 'se'.

With this change we can now start collecting and dumping seccomp filters for each thread, which will be done in the next patch.
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
0f5cce7a