<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://kernsec.org/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=DavidWindsor</id>
	<title>Linux Kernel Security Subsystem - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://kernsec.org/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=DavidWindsor"/>
	<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php/Special:Contributions/DavidWindsor"/>
	<updated>2026-05-07T10:56:16Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.36.1</generator>
	<entry>
		<id>http://kernsec.org/wiki/index.php?title=Kernel_Protections/refcount_t&amp;diff=3850</id>
		<title>Kernel Protections/refcount t</title>
		<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php?title=Kernel_Protections/refcount_t&amp;diff=3850"/>
		<updated>2017-02-06T14:54:25Z</updated>

		<summary type="html">&lt;p&gt;DavidWindsor: Minor language change in Reference Counting API&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Summary = &lt;br /&gt;
&lt;br /&gt;
HARDENED_ATOMIC is a kernel self-protection mechanism that greatly helps with the prevention of [[Bug Classes/Use after free|use-after-free]] bugs.  It is based on work done by the [https://pax.grsecurity.net PaX Team], originally called [https://forums.grsecurity.net/viewtopic.php?f=7&amp;amp;t=4173 PAX_REFCOUNT].&lt;br /&gt;
&lt;br /&gt;
= Reference Counting API =&lt;br /&gt;
&lt;br /&gt;
HARDENED_ATOMIC introduces a new data type: &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt;.  This type is to be used for all kernel reference counters.  &lt;br /&gt;
&lt;br /&gt;
The following is the kernel reference counting API.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;REFCOUNT_INIT(unsigned int)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Initialize a &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt; object.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_set(refcount_t *, unsigned int)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Set a &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt; object's internal value.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;unsigned int refcount_read(refcount_t *)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Returns the &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt; object's internal value.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_add_not_zero(unsigned int v, refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Add &amp;lt;code&amp;gt;v&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  If &amp;lt;code&amp;gt;r + v&amp;lt;/code&amp;gt; causes an overflow, the result of the addition operation is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if the resulting value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is non-zero, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_add(unsigned int v, refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Adds &amp;lt;code&amp;gt;v&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and stores the value in &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_inc_not_zero(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Increments &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and tests whether &amp;lt;code&amp;gt;r + 1&amp;lt;/code&amp;gt; causes an overflow.  If an overflow does occur, the result of the increment operation is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will saturate at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt;. Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if the resulting value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is non-zero, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.  &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_inc(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Increment &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will saturate at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_sub_and_test(unsigned int v, refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Subtracts &amp;lt;code&amp;gt;v&amp;lt;/code&amp;gt; from &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and tests whether &amp;lt;code&amp;gt;r - v&amp;lt;/code&amp;gt; causes an underflow.  If an underflow does occur, the result of the subtraction is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will fail to decrement when saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if the resulting value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 0, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_dec(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  If &amp;lt;code&amp;gt;r - 1&amp;lt;/code&amp;gt; causes an underflow, the result of the decrement operation is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will fail to decrement when saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_if_one(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Attempts to transition &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; from 1 to 0.  If &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 1, decrement it to 0.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; was decremented, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.  &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_not_one(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; unless the value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 1.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; was decremented, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and lock mutex if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; becomes 0.  Will &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt; on underflow and fail to decrement if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 0 and mutex is held, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_and_lock(refcount_t *r, spinlock_t *s)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and lock spinlock if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; becomes 0.  Will &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt; on underflow and fail to decrement if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 0 and spinlock is held, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
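As a sketch of how these operations combine in a typical &amp;quot;get&amp;quot;/&amp;quot;put&amp;quot; pair (the &amp;lt;code&amp;gt;struct foo&amp;lt;/code&amp;gt; object and helper names below are illustrative, not taken from kernel sources; &amp;lt;code&amp;gt;refcount_dec_and_test()&amp;lt;/code&amp;gt;, also used in the examples that follow, decrements and returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; when the count reaches 0):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    struct foo {&lt;br /&gt;
        refcount_t refs;&lt;br /&gt;
        ...&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
    /* &amp;quot;get&amp;quot;: fails if the object is already on its way to being freed */&lt;br /&gt;
    static bool foo_get(struct foo *f)&lt;br /&gt;
    {&lt;br /&gt;
        return refcount_inc_not_zero(&amp;amp;f-&amp;gt;refs);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /* &amp;quot;put&amp;quot;: free the object when the last reference is dropped */&lt;br /&gt;
    static void foo_put(struct foo *f)&lt;br /&gt;
    {&lt;br /&gt;
        if (refcount_dec_and_test(&amp;amp;f-&amp;gt;refs))&lt;br /&gt;
            kfree(f);&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;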
= Examples =&lt;br /&gt;
&lt;br /&gt;
The following use case is an instance of correct usage of the &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt; API.  The object being counted is &amp;lt;code&amp;gt;struct super_block&amp;lt;/code&amp;gt;, which represents a virtual filesystem superblock, an object containing a particular filesystem's metadata such as block size, the root inode, etc.  &lt;br /&gt;
&lt;br /&gt;
==== Member Definition ====&lt;br /&gt;
&lt;br /&gt;
This is the definition of the reference counter field in the &amp;lt;code&amp;gt;struct super_block&amp;lt;/code&amp;gt; object.  If the object being counted is a structure, the reference counter is typically defined as a field of the counted structure, as we see in &amp;lt;code&amp;gt;struct super_block&amp;lt;/code&amp;gt; below.&lt;br /&gt;
  &lt;br /&gt;
From &amp;lt;code&amp;gt;[http://lxr.free-electrons.com/source/include/linux/fs.h include/linux/fs.h]&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    struct super_block {&lt;br /&gt;
        ...&lt;br /&gt;
        refcount_t s_active;&lt;br /&gt;
        ...&lt;br /&gt;
    };&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Object Initialization ====&lt;br /&gt;
&lt;br /&gt;
When a counted object is created, its reference counter must be initialized to something sane, typically 1 (since the caller of the allocation function already holds the first reference to the object).  &lt;br /&gt;
&lt;br /&gt;
From &amp;lt;code&amp;gt;[http://lxr.free-electrons.com/source/fs/super.c fs/super.c]&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    static struct super_block *alloc_super(struct file_system_type *type, int flags, struct user_namespace *user_ns)&lt;br /&gt;
    {&lt;br /&gt;
        struct super_block *s = kzalloc(sizeof(struct super_block), GFP_USER);   &lt;br /&gt;
        ...&lt;br /&gt;
        refcount_set(&amp;amp;s-&amp;gt;s_active, 1);&lt;br /&gt;
        ...&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Getting a New Reference ====&lt;br /&gt;
This code is executed when a user wishes to obtain a new reference to a &amp;lt;code&amp;gt;struct super_block&amp;lt;/code&amp;gt; object.  The following code corresponds to the traditional reference counting &amp;quot;get&amp;quot; method.&lt;br /&gt;
&lt;br /&gt;
From &amp;lt;code&amp;gt;[http://lxr.free-electrons.com/source/fs/super.c fs/super.c]&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    static int grab_super(struct super_block *s) __releases(sb_lock)&lt;br /&gt;
    {&lt;br /&gt;
        s-&amp;gt;s_count++;&lt;br /&gt;
        spin_unlock(&amp;amp;sb_lock);&lt;br /&gt;
        down_write(&amp;amp;s-&amp;gt;s_umount);&lt;br /&gt;
        if ((s-&amp;gt;s_flags &amp;amp; MS_BORN) &amp;amp;&amp;amp; refcount_inc_not_zero(&amp;amp;s-&amp;gt;s_active)) {&lt;br /&gt;
            put_super(s);&lt;br /&gt;
            return 1;&lt;br /&gt;
        }&lt;br /&gt;
        up_write(&amp;amp;s-&amp;gt;s_umount);&lt;br /&gt;
        put_super(s);&lt;br /&gt;
        return 0;&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Releasing an Existing Reference ====&lt;br /&gt;
This code is executed when a user currently holding a reference to a &amp;lt;code&amp;gt;struct super_block&amp;lt;/code&amp;gt; object no longer needs the object and wants to release it.  The following code corresponds to the traditional reference counting &amp;quot;put&amp;quot; method.  &lt;br /&gt;
&lt;br /&gt;
From &amp;lt;code&amp;gt;[http://lxr.free-electrons.com/source/fs/super.c fs/super.c]&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    void deactivate_locked_super(struct super_block *s)&lt;br /&gt;
    {&lt;br /&gt;
        ...&lt;br /&gt;
        if (refcount_dec_and_test(&amp;amp;s-&amp;gt;s_active)) {&lt;br /&gt;
            ...&lt;br /&gt;
            put_super(s);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    void deactivate_super(struct super_block *s)&lt;br /&gt;
    {&lt;br /&gt;
        if (!refcount_dec_not_one(&amp;amp;s-&amp;gt;s_active)) {&lt;br /&gt;
            down_write(&amp;amp;s-&amp;gt;s_umount);&lt;br /&gt;
            deactivate_locked_super(s);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavidWindsor</name></author>
	</entry>
	<entry>
		<id>http://kernsec.org/wiki/index.php?title=Kernel_Protections/refcount_t&amp;diff=3849</id>
		<title>Kernel Protections/refcount t</title>
		<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php?title=Kernel_Protections/refcount_t&amp;diff=3849"/>
		<updated>2017-02-06T14:53:47Z</updated>

		<summary type="html">&lt;p&gt;DavidWindsor: Minor language change in Summary&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Summary = &lt;br /&gt;
&lt;br /&gt;
HARDENED_ATOMIC is a kernel self-protection mechanism that greatly helps with the prevention of [[Bug Classes/Use after free|use-after-free]] bugs.  It is based on work done by the [https://pax.grsecurity.net PaX Team], originally called [https://forums.grsecurity.net/viewtopic.php?f=7&amp;amp;t=4173 PAX_REFCOUNT].&lt;br /&gt;
&lt;br /&gt;
= Reference Counting API =&lt;br /&gt;
&lt;br /&gt;
HARDENED_ATOMIC introduces a new data type: &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt;.  This type is to be used for all kernel reference counters.  &lt;br /&gt;
&lt;br /&gt;
The following is the kernel reference counting API.  Please note that all operations are atomic, unless otherwise specified. &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;REFCOUNT_INIT(unsigned int)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Initialize a &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt; object.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_set(refcount_t *, unsigned int)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Set a &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt; object's internal value.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;unsigned int refcount_read(refcount_t *)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Returns the &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt; object's internal value.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_add_not_zero(unsigned int v, refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Add &amp;lt;code&amp;gt;v&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  If &amp;lt;code&amp;gt;r + v&amp;lt;/code&amp;gt; causes an overflow, the result of the addition operation is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if the resulting value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is non-zero, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_add(unsigned int v, refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Adds &amp;lt;code&amp;gt;v&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and stores the value in &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_inc_not_zero(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Increments &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and tests whether &amp;lt;code&amp;gt;r + 1&amp;lt;/code&amp;gt; causes an overflow.  If an overflow does occur, the result of the increment operation is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will saturate at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt;. Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if the resulting value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is non-zero, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.  &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_inc(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Increment &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will saturate at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_sub_and_test(unsigned int v, refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Subtracts &amp;lt;code&amp;gt;v&amp;lt;/code&amp;gt; from &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and tests whether &amp;lt;code&amp;gt;r - v&amp;lt;/code&amp;gt; causes an underflow.  If an underflow does occur, the result of the subtraction is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will fail to decrement when saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if the resulting value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 0, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_dec(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  If &amp;lt;code&amp;gt;r - 1&amp;lt;/code&amp;gt; causes an underflow, the result of the decrement operation is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will fail to decrement when saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_if_one(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Attempts to transition &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; from 1 to 0.  If &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 1, decrement it to 0.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; was decremented, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.  &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_not_one(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; unless the value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 1.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; was decremented, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and lock mutex if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; becomes 0.  Will &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt; on underflow and fail to decrement if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 0 and mutex is held, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_and_lock(refcount_t *r, spinlock_t *s)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and lock spinlock if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; becomes 0.  Will &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt; on underflow and fail to decrement if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 0 and spinlock is held, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
&lt;br /&gt;
The following use case is an instance of correct usage of the &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt; API.  The object being counted is &amp;lt;code&amp;gt;struct super_block&amp;lt;/code&amp;gt;, which represents a virtual filesystem superblock, an object containing a particular filesystem's metadata such as block size, the root inode, etc.  &lt;br /&gt;
&lt;br /&gt;
==== Member Definition ====&lt;br /&gt;
&lt;br /&gt;
This is the definition of the reference counter field in the &amp;lt;code&amp;gt;struct super_block&amp;lt;/code&amp;gt; object.  If the object being counted is a structure, the reference counter is typically defined as a field of the counted structure, as we see in &amp;lt;code&amp;gt;struct super_block&amp;lt;/code&amp;gt; below.&lt;br /&gt;
  &lt;br /&gt;
From &amp;lt;code&amp;gt;[http://lxr.free-electrons.com/source/include/linux/fs.h include/linux/fs.h]&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    struct super_block {&lt;br /&gt;
        ...&lt;br /&gt;
        refcount_t s_active;&lt;br /&gt;
        ...&lt;br /&gt;
    };&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Object Initialization ====&lt;br /&gt;
&lt;br /&gt;
When a counted object is created, its reference counter must be initialized to something sane, typically 1 (since the caller of the allocation function already holds the first reference to the object).  &lt;br /&gt;
&lt;br /&gt;
From &amp;lt;code&amp;gt;[http://lxr.free-electrons.com/source/fs/super.c fs/super.c]&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    static struct super_block *alloc_super(struct file_system_type *type, int flags, struct user_namespace *user_ns)&lt;br /&gt;
    {&lt;br /&gt;
        struct super_block *s = kzalloc(sizeof(struct super_block), GFP_USER);   &lt;br /&gt;
        ...&lt;br /&gt;
        refcount_set(&amp;amp;s-&amp;gt;s_active, 1);&lt;br /&gt;
        ...&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Getting a New Reference ====&lt;br /&gt;
This code is executed when a user wishes to obtain a new reference to a &amp;lt;code&amp;gt;struct super_block&amp;lt;/code&amp;gt; object.  The following code corresponds to the traditional reference counting &amp;quot;get&amp;quot; method.&lt;br /&gt;
&lt;br /&gt;
From &amp;lt;code&amp;gt;[http://lxr.free-electrons.com/source/fs/super.c fs/super.c]&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    static int grab_super(struct super_block *s) __releases(sb_lock)&lt;br /&gt;
    {&lt;br /&gt;
        s-&amp;gt;s_count++;&lt;br /&gt;
        spin_unlock(&amp;amp;sb_lock);&lt;br /&gt;
        down_write(&amp;amp;s-&amp;gt;s_umount);&lt;br /&gt;
        if ((s-&amp;gt;s_flags &amp;amp; MS_BORN) &amp;amp;&amp;amp; refcount_inc_not_zero(&amp;amp;s-&amp;gt;s_active)) {&lt;br /&gt;
            put_super(s);&lt;br /&gt;
            return 1;&lt;br /&gt;
        }&lt;br /&gt;
        up_write(&amp;amp;s-&amp;gt;s_umount);&lt;br /&gt;
        put_super(s);&lt;br /&gt;
        return 0;&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Releasing an Existing Reference ====&lt;br /&gt;
This code is executed when a user currently holding a reference to a &amp;lt;code&amp;gt;struct super_block&amp;lt;/code&amp;gt; object no longer needs the object and wants to release it.  The following code corresponds to the traditional reference counting &amp;quot;put&amp;quot; method.  &lt;br /&gt;
&lt;br /&gt;
From &amp;lt;code&amp;gt;[http://lxr.free-electrons.com/source/fs/super.c fs/super.c]&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    void deactivate_locked_super(struct super_block *s)&lt;br /&gt;
    {&lt;br /&gt;
        ...&lt;br /&gt;
        if (refcount_dec_and_test(&amp;amp;s-&amp;gt;s_active)) {&lt;br /&gt;
            ...&lt;br /&gt;
            put_super(s);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    void deactivate_super(struct super_block *s)&lt;br /&gt;
    {&lt;br /&gt;
        if (!refcount_dec_not_one(&amp;amp;s-&amp;gt;s_active)) {&lt;br /&gt;
            down_write(&amp;amp;s-&amp;gt;s_umount);&lt;br /&gt;
            deactivate_locked_super(s);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavidWindsor</name></author>
	</entry>
	<entry>
		<id>http://kernsec.org/wiki/index.php?title=Kernel_Protections/refcount_t&amp;diff=3848</id>
		<title>Kernel Protections/refcount t</title>
		<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php?title=Kernel_Protections/refcount_t&amp;diff=3848"/>
		<updated>2017-02-06T14:45:47Z</updated>

		<summary type="html">&lt;p&gt;DavidWindsor: Add Examples section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Summary = &lt;br /&gt;
&lt;br /&gt;
HARDENED_ATOMIC is a kernel self-protection mechanism that greatly helps with the mitigation of [[Bug Classes/Use after free|use-after-free]] bugs.  It is based on work done by the [https://pax.grsecurity.net PaX Team], originally called [https://forums.grsecurity.net/viewtopic.php?f=7&amp;amp;t=4173 PAX_REFCOUNT].&lt;br /&gt;
&lt;br /&gt;
= Reference Counting API =&lt;br /&gt;
&lt;br /&gt;
HARDENED_ATOMIC introduces a new data type: &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt;.  This type is to be used for all kernel reference counters.  &lt;br /&gt;
&lt;br /&gt;
The following is the kernel reference counting API.  Please note that all operations are atomic, unless otherwise specified. &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;REFCOUNT_INIT(unsigned int)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Initialize a &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt; object.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_set(refcount_t *, unsigned int)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Set a &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt; object's internal value.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;unsigned int refcount_read(refcount_t *)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Returns the &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt; object's internal value.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_add_not_zero(unsigned int v, refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Add &amp;lt;code&amp;gt;v&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  If &amp;lt;code&amp;gt;r + v&amp;lt;/code&amp;gt; causes an overflow, the result of the addition operation is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if the resulting value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is non-zero, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_add(unsigned int v, refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Adds &amp;lt;code&amp;gt;v&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and stores the value in &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_inc_not_zero(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Increments &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and tests whether &amp;lt;code&amp;gt;r + 1&amp;lt;/code&amp;gt; causes an overflow.  If an overflow does occur, the result of the increment operation is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will saturate at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt;. Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if the resulting value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is non-zero, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.  &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_inc(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Increment &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will saturate at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_sub_and_test(unsigned int v, refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Subtracts &amp;lt;code&amp;gt;v&amp;lt;/code&amp;gt; from &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and tests whether &amp;lt;code&amp;gt;r - v&amp;lt;/code&amp;gt; causes an underflow.  If an underflow does occur, the result of the subtraction is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will fail to decrement when saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if the resulting value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 0, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_dec(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  If &amp;lt;code&amp;gt;r - 1&amp;lt;/code&amp;gt; causes an underflow, the result of the decrement operation is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will fail to decrement when saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_if_one(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Attempts to transition &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; from 1 to 0.  If &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 1, decrement it to 0.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; was decremented, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.  &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_not_one(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; unless the value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 1.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; was decremented, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and lock mutex if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; becomes 0.  Will &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt; on underflow and fail to decrement if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 0 and mutex is held, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_and_lock(refcount_t *r, spinlock_t *s)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and lock spinlock if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; becomes 0.  Will &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt; on underflow and fail to decrement if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 0 and spinlock is held, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
&lt;br /&gt;
The following use case is an instance of correct usage of the &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt; API.  The object being counted is &amp;lt;code&amp;gt;struct super_block&amp;lt;/code&amp;gt;, which represents a virtual filesystem superblock, an object containing a particular filesystem's metadata such as block size, the root inode, etc.  &lt;br /&gt;
&lt;br /&gt;
==== Member Definition ====&lt;br /&gt;
&lt;br /&gt;
This is the definition of the reference counter field in the &amp;lt;code&amp;gt;struct super_block&amp;lt;/code&amp;gt; object.  If the object being counted is a structure, the reference counter is typically defined as a field of the counted structure, as we see in &amp;lt;code&amp;gt;struct super_block&amp;lt;/code&amp;gt; below.&lt;br /&gt;
  &lt;br /&gt;
From &amp;lt;code&amp;gt;[http://lxr.free-electrons.com/source/include/linux/fs.h include/linux/fs.h]&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    struct super_block {&lt;br /&gt;
        ...&lt;br /&gt;
        refcount_t s_active;&lt;br /&gt;
        ...&lt;br /&gt;
    };&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Object Initialization ====&lt;br /&gt;
&lt;br /&gt;
When a counted object is created, its reference counter must be initialized to something sane, typically 1 (since the caller of the &amp;quot;allocation&amp;quot; method is itself a user of the object).  &lt;br /&gt;
&lt;br /&gt;
From &amp;lt;code&amp;gt;[http://lxr.free-electrons.com/source/fs/super.c fs/super.c]&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    static struct super_block *alloc_super(struct file_system_type *type, int flags, struct user_namespace *user_ns)&lt;br /&gt;
    {&lt;br /&gt;
        struct super_block *s = kzalloc(sizeof(struct super_block), GFP_USER);   &lt;br /&gt;
        ...&lt;br /&gt;
        refcount_set(&amp;amp;s-&amp;gt;s_active, 1);&lt;br /&gt;
        ...&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Getting a New Reference ====&lt;br /&gt;
This code is executed when a user wishes to obtain a new reference to a &amp;lt;code&amp;gt;struct super_block&amp;lt;/code&amp;gt; object.  The following code corresponds to the traditional reference counting &amp;quot;get&amp;quot; method.&lt;br /&gt;
&lt;br /&gt;
From &amp;lt;code&amp;gt;[http://lxr.free-electrons.com/source/fs/super.c fs/super.c]&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    static int grab_super(struct super_block *s) __releases(sb_lock)&lt;br /&gt;
    {&lt;br /&gt;
        s-&amp;gt;s_count++;&lt;br /&gt;
        spin_unlock(&amp;amp;sb_lock);&lt;br /&gt;
        down_write(&amp;amp;s-&amp;gt;s_umount);&lt;br /&gt;
        if ((s-&amp;gt;s_flags &amp;amp; MS_BORN) &amp;amp;&amp;amp; refcount_inc_not_zero(&amp;amp;s-&amp;gt;s_active)) {&lt;br /&gt;
            put_super(s);&lt;br /&gt;
            return 1;&lt;br /&gt;
        }&lt;br /&gt;
        up_write(&amp;amp;s-&amp;gt;s_umount);&lt;br /&gt;
        put_super(s);&lt;br /&gt;
        return 0;&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
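The guard in &amp;lt;code&amp;gt;grab_super&amp;lt;/code&amp;gt; above is the essential property of a safe &amp;quot;get&amp;quot;: once the count has dropped to zero, it must never be incremented again. A minimal sketch of that discipline follows; this is an illustrative Python model with hypothetical names (&amp;lt;code&amp;gt;get_ref&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;put_ref&amp;lt;/code&amp;gt;), not kernel code, and it omits the atomicity a real implementation requires.

```python
class CountedObject:
    """Toy model of the get/put discipline used above (hypothetical names)."""

    def __init__(self):
        self.refs = 1                 # the creator holds the first reference
        self.freed = False

    def get_ref(self):                # inc-not-zero style "get"
        if self.refs == 0:
            return False              # object already released; get must fail
        self.refs += 1
        return True

    def put_ref(self):                # dec-and-test style "put"
        self.refs -= 1
        if self.refs == 0:
            self.freed = True         # last reference dropped: free the object
```

A caller that receives &amp;lt;code&amp;gt;False&amp;lt;/code&amp;gt; from the get must not touch the object, exactly as &amp;lt;code&amp;gt;grab_super&amp;lt;/code&amp;gt; returns 0 when &amp;lt;code&amp;gt;refcount_inc_not_zero&amp;lt;/code&amp;gt; fails.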
&lt;br /&gt;
==== Releasing an Existing Reference ====&lt;br /&gt;
This code is executed when a user currently holding a reference to a &amp;lt;code&amp;gt;struct super_block&amp;lt;/code&amp;gt; object no longer needs the object and wants to release it.  The following code corresponds to the traditional reference counting &amp;quot;put&amp;quot; method.  &lt;br /&gt;
&lt;br /&gt;
From &amp;lt;code&amp;gt;[http://lxr.free-electrons.com/source/fs/super.c fs/super.c]&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    void deactivate_locked_super(struct super_block *s) {&lt;br /&gt;
        ...&lt;br /&gt;
        if (refcount_dec_and_test(&amp;amp;s-&amp;gt;s_active)) {&lt;br /&gt;
            ...&lt;br /&gt;
            put_super(s);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    void deactivate_super(struct super_block *s)&lt;br /&gt;
    {&lt;br /&gt;
        if (!refcount_dec_not_one(&amp;amp;s-&amp;gt;s_active)) {&lt;br /&gt;
            down_write(&amp;amp;s-&amp;gt;s_umount);&lt;br /&gt;
            deactivate_locked_super(s);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavidWindsor</name></author>
	</entry>
	<entry>
		<id>http://kernsec.org/wiki/index.php?title=Kernel_Protections/refcount_t&amp;diff=3847</id>
		<title>Kernel Protections/refcount t</title>
		<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php?title=Kernel_Protections/refcount_t&amp;diff=3847"/>
		<updated>2017-02-06T12:19:19Z</updated>

		<summary type="html">&lt;p&gt;DavidWindsor: Change &amp;lt;tt&amp;gt; tags to &amp;lt;code&amp;gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Summary = &lt;br /&gt;
&lt;br /&gt;
HARDENED_ATOMIC is a kernel self-protection mechanism that greatly helps with the mitigation of [[Bug Classes/Use after free|use-after-free]] bugs.  It is based off of work done by the [https://pax.grsecurity.net PaX Team], originally called [https://forums.grsecurity.net/viewtopic.php?f=7&amp;amp;t=4173 PAX_REFCOUNT].&lt;br /&gt;
&lt;br /&gt;
= Reference Counting API =&lt;br /&gt;
&lt;br /&gt;
HARDENED_ATOMIC introduces a new data type: &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt;.  This type is to be used for all kernel reference counters.  &lt;br /&gt;
&lt;br /&gt;
The following is the kernel reference counting API.  Please note that all operations are atomic, unless otherwise specified. &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;REFCOUNT_INIT(unsigned int)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Initialize a &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt; object.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_set(refcount_t *, unsigned int)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Set a &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt; object's internal value.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;unsigned int refcount_read(refcount_t *)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Returns the &amp;lt;code&amp;gt;refcount_t&amp;lt;/code&amp;gt; object's internal value.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_add_not_zero(unsigned int v, refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Add &amp;lt;code&amp;gt;v&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  If &amp;lt;code&amp;gt;r + v&amp;lt;/code&amp;gt; causes an overflow, the result of the addition operation is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if the resulting value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is non-zero, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_add(unsigned int v, refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Adds &amp;lt;code&amp;gt;v&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and stores the value in &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_inc_not_zero(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Increments &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and tests whether &amp;lt;code&amp;gt;r + 1&amp;lt;/code&amp;gt; causes an overflow.  If an overflow does occur, the result of the increment operation is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will saturate at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt;. Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if the resulting value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is non-zero, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.  &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_inc(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Increment &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will saturate at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_sub_and_test(unsigned int v, refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Subtracts &amp;lt;code&amp;gt;v&amp;lt;/code&amp;gt; from &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and tests whether &amp;lt;code&amp;gt;r - v&amp;lt;/code&amp;gt; causes an underflow.  If an underflow does occur, the result of the subtraction is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will fail to decrement when saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if the resulting value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 0, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;void refcount_dec(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  If &amp;lt;code&amp;gt;r - 1&amp;lt;/code&amp;gt; causes an underflow, the result of the decrement operation is not saved to &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt;.  Will fail to decrement when saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_if_one(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Attempts to transition &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; from 1 to 0.  If &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 1, decrement it to 0.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; was decremented, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.  &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_not_one(refcount_t *r)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; unless the value of &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 1.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; was decremented, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and lock mutex if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; becomes 0.  Will &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt; on underflow and fail to decrement if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 0 and mutex is held, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;code&amp;gt;bool refcount_dec_and_lock(refcount_t *r, spinlock_t *s)&amp;lt;/code&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; and lock spinlock if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; becomes 0.  Will &amp;lt;code&amp;gt;WARN&amp;lt;/code&amp;gt; on underflow and fail to decrement if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is saturated at &amp;lt;code&amp;gt;UINT_MAX&amp;lt;/code&amp;gt;.  Returns &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; if &amp;lt;code&amp;gt;r&amp;lt;/code&amp;gt; is 0 and spinlock is held, &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt; otherwise.&lt;/div&gt;</summary>
		<author><name>DavidWindsor</name></author>
	</entry>
	<entry>
		<id>http://kernsec.org/wiki/index.php?title=Kernel_Protections/refcount_t&amp;diff=3846</id>
		<title>Kernel Protections/refcount t</title>
		<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php?title=Kernel_Protections/refcount_t&amp;diff=3846"/>
		<updated>2017-02-04T11:20:47Z</updated>

		<summary type="html">&lt;p&gt;DavidWindsor: Minor language change in Reference Counting API&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Summary = &lt;br /&gt;
&lt;br /&gt;
HARDENED_ATOMIC is a kernel self-protection mechanism that greatly helps with the mitigation of [[Bug Classes/Use after free|use-after-free]] bugs.  It is based off of work done by the [https://pax.grsecurity.net PaX Team], originally called [https://forums.grsecurity.net/viewtopic.php?f=7&amp;amp;t=4173 PAX_REFCOUNT].&lt;br /&gt;
&lt;br /&gt;
= Reference Counting API =&lt;br /&gt;
&lt;br /&gt;
HARDENED_ATOMIC introduces a new data type: &amp;lt;tt&amp;gt;refcount_t&amp;lt;/tt&amp;gt;.  This type is to be used for all kernel reference counters.  &lt;br /&gt;
&lt;br /&gt;
The following is the kernel reference counting API.  Please note that all operations are atomic, unless otherwise specified. &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;REFCOUNT_INIT(unsigned int)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Initialize a &amp;lt;tt&amp;gt;refcount_t&amp;lt;/tt&amp;gt; object.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;void refcount_set(refcount_t *, unsigned int)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Set a &amp;lt;tt&amp;gt;refcount_t&amp;lt;/tt&amp;gt; object's internal value.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;unsigned int refcount_read(refcount_t *)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Returns the &amp;lt;tt&amp;gt;refcount_t&amp;lt;/tt&amp;gt; object's internal value.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_add_not_zero(unsigned int v, refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Add &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  If &amp;lt;tt&amp;gt;r + v&amp;lt;/tt&amp;gt; causes an overflow, the result of the addition operation is not saved to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if the resulting value of &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is non-zero, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;void refcount_add(unsigned int v, refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Adds &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; and stores the value in &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_inc_not_zero(refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Increments &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; and tests whether &amp;lt;tt&amp;gt;r + 1&amp;lt;/tt&amp;gt; causes an overflow.  If an overflow does occur, the result of the increment operation is not saved to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  Will saturate at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;WARN&amp;lt;/tt&amp;gt;. Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if the resulting value of &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is non-zero, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.  &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;void refcount_inc(refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Increment &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  Will saturate at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;WARN&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_sub_and_test(unsigned int v, refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Subtracts &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; from &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; and tests whether &amp;lt;tt&amp;gt;r - v&amp;lt;/tt&amp;gt; causes an underflow.  If an underflow does occur, the result of the subtraction is not saved to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  Will fail to decrement when saturated at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt;.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if the resulting value of &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is 0, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;void refcount_dec(refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  If &amp;lt;tt&amp;gt;r - 1&amp;lt;/tt&amp;gt; causes an underflow, the result of the decrement operation is not saved to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  Will fail to decrement when saturated at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_dec_if_one(refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Attempts to transition &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; from 1 to 0.  If &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is 1, decrement it to 0.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; was decremented, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.  &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_dec_not_one(refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; unless the value of &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is 1.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; was decremented, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; and lock mutex if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; becomes 0.  Will &amp;lt;tt&amp;gt;WARN&amp;lt;/tt&amp;gt; on underflow and fail to decrement if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is saturated at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt;.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is 0 and mutex is held, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_dec_and_lock(refcount_t *r, spinlock_t *s)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; and lock spinlock if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; becomes 0.  Will &amp;lt;tt&amp;gt;WARN&amp;lt;/tt&amp;gt; on underflow and fail to decrement if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is saturated at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt;.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is 0 and spinlock is held, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.&lt;/div&gt;</summary>
		<author><name>DavidWindsor</name></author>
	</entry>
	<entry>
		<id>http://kernsec.org/wiki/index.php?title=Kernel_Protections/refcount_t&amp;diff=3845</id>
		<title>Kernel Protections/refcount t</title>
		<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php?title=Kernel_Protections/refcount_t&amp;diff=3845"/>
		<updated>2017-02-04T11:19:13Z</updated>

		<summary type="html">&lt;p&gt;DavidWindsor: Minor language change in Summary&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Summary = &lt;br /&gt;
&lt;br /&gt;
HARDENED_ATOMIC is a kernel self-protection mechanism that greatly helps with the mitigation of [[Bug Classes/Use after free|use-after-free]] bugs.  It is based off of work done by the [https://pax.grsecurity.net PaX Team], originally called [https://forums.grsecurity.net/viewtopic.php?f=7&amp;amp;t=4173 PAX_REFCOUNT].&lt;br /&gt;
&lt;br /&gt;
= Reference Counting API =&lt;br /&gt;
&lt;br /&gt;
HARDENED_ATOMIC introduces a new data type: &amp;lt;tt&amp;gt;refcount_t&amp;lt;/tt&amp;gt;.  This type is to be used for all kernel reference counters.  &lt;br /&gt;
&lt;br /&gt;
The following operations are the kernel reference counting API.  Please note that all operations are atomic, unless otherwise specified. &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;REFCOUNT_INIT(unsigned int)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Initialize a &amp;lt;tt&amp;gt;refcount_t&amp;lt;/tt&amp;gt; object.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;void refcount_set(refcount_t *, unsigned int)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Set a &amp;lt;tt&amp;gt;refcount_t&amp;lt;/tt&amp;gt; object's internal value.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;unsigned int refcount_read(refcount_t *)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Returns the &amp;lt;tt&amp;gt;refcount_t&amp;lt;/tt&amp;gt; object's internal value.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_add_not_zero(unsigned int v, refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Add &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  If &amp;lt;tt&amp;gt;r + v&amp;lt;/tt&amp;gt; causes an overflow, the result of the addition operation is not saved to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if the resulting value of &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is non-zero, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;void refcount_add(unsigned int v, refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Adds &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; and stores the value in &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_inc_not_zero(refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Increments &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; and tests whether &amp;lt;tt&amp;gt;r + 1&amp;lt;/tt&amp;gt; causes an overflow.  If an overflow does occur, the result of the increment operation is not saved to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  Will saturate at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;WARN&amp;lt;/tt&amp;gt;. Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if the resulting value of &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is non-zero, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.  &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;void refcount_inc(refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Increment &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  Will saturate at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;WARN&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_sub_and_test(unsigned int v, refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Subtracts &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; from &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; and tests whether &amp;lt;tt&amp;gt;r - v&amp;lt;/tt&amp;gt; causes an underflow.  If an underflow does occur, the result of the subtraction is not saved to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  Will fail to decrement when saturated at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt;.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if the resulting value of &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is 0, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;void refcount_dec(refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  If &amp;lt;tt&amp;gt;r - 1&amp;lt;/tt&amp;gt; causes an underflow, the result of the decrement operation is not saved to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  Will fail to decrement when saturated at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_dec_if_one(refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Attempts to transition &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; from 1 to 0.  If &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is 1, decrement it to 0.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; was decremented, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.  &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_dec_not_one(refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; unless the value of &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is 1.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; was decremented, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; and lock mutex if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; becomes 0.  Will &amp;lt;tt&amp;gt;WARN&amp;lt;/tt&amp;gt; on underflow and fail to decrement if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is saturated at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt;.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is 0 and mutex is held, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_dec_and_lock(refcount_t *r, spinlock_t *s)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; and lock spinlock if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; becomes 0.  Will &amp;lt;tt&amp;gt;WARN&amp;lt;/tt&amp;gt; on underflow and fail to decrement if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is saturated at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt;.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is 0 and spinlock is held, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.&lt;/div&gt;</summary>
		<author><name>DavidWindsor</name></author>
	</entry>
	<entry>
		<id>http://kernsec.org/wiki/index.php?title=Kernel_Self_Protection_Project&amp;diff=3844</id>
		<title>Kernel Self Protection Project</title>
		<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php?title=Kernel_Self_Protection_Project&amp;diff=3844"/>
		<updated>2017-02-04T06:44:15Z</updated>

		<summary type="html">&lt;p&gt;DavidWindsor: Fix link to HARDENED_ATOMIC&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Mission Statement =&lt;br /&gt;
&lt;br /&gt;
This project starts with the premise that [https://lwn.net/Articles/410606/ kernel bugs have a very long lifetime], and that the kernel must be designed in ways to protect against these flaws. We must think of [http://lwn.net/Articles/662219/ security beyond fixing bugs]. As a community, we already find and fix individual bugs via static checkers (compiler flags, [http://smatch.sourceforge.net/ smatch], [http://coccinelle.lip6.fr/ coccinelle], [https://scan.coverity.com/projects/linux?tab=overview coverity]) and dynamic checkers (kernel configs, [http://codemonkey.org.uk/projects/trinity/ trinity], [https://www.kernel.org/doc/Documentation/kasan.txt KASan]). Those efforts are important and on-going, but if we want to protect our [http://www.techspot.com/news/57228-google-shows-off-new-version-of-android-announces-1-billion-active-monthly-users.html billion Android phones], our [http://www.zdnet.com/article/2014-the-year-of-the-linux-car/ cars], the [https://training.linuxfoundation.org/why-our-linux-training/training-reviews/linux-foundation-training-prepares-the-international-space-station-for-linux-migration International Space Station], and everything else running Linux, we must get proactive defensive technologies built into the upstream Linux kernel. We need the kernel to [http://kernsec.org/files/lss2015/giant-bags-of-mostly-water.pdf fail safely, instead of just running safely].&lt;br /&gt;
&lt;br /&gt;
These kinds of protections have existed for years in [https://pax.grsecurity.net/ PaX], [https://grsecurity.net/features.php grsecurity], and piles of academic papers. For various social, cultural, and technical reasons, they have not made their way into the upstream kernel, and this project seeks to change that. Our focus is on kernel self-protection, rather than kernel-supported userspace protections. The goal is to eliminate classes of bugs and eliminate methods of exploitation.&lt;br /&gt;
&lt;br /&gt;
= Principles =&lt;br /&gt;
A short list of things to keep in mind when designing self-protection features:&lt;br /&gt;
&lt;br /&gt;
* Patience and an open mind will be needed. We're all trying to make Linux better, so let's stay focused on the results.&lt;br /&gt;
* Features will do more than find bugs: they should be active at run-time to catch previously unknown flaws.&lt;br /&gt;
* Features will not be developer-&amp;quot;opt-in&amp;quot;. When a feature is enabled at build time, it should work for all code built into the kernel (which has the side-effect of also covering out-of-tree code, like in vendor forks).&lt;br /&gt;
&lt;br /&gt;
= Get Involved =&lt;br /&gt;
&lt;br /&gt;
Want to get involved? [http://www.openwall.com/lists/#subscribe Join] the [http://www.openwall.com/lists/kernel-hardening/ kernel hardening mailing list] and introduce yourself. Then pick an area of work from below (or add a new one), coordinate on the mailing list, and get started. If your employer is brave enough to understand how critical this work is, they'll pay you to work on it. If not, the [http://www.linuxfoundation.org/ Linux Foundation]'s [https://www.coreinfrastructure.org/faq Core Infrastructure Initiative] is in a great position to fund specific work proposals. We need kernel developers, compiler developers, testers, backporters, and documentation writers.&lt;br /&gt;
&lt;br /&gt;
= Work Areas =&lt;br /&gt;
&lt;br /&gt;
While there are already a number of upstream [[Feature List|kernel security features]], we are still missing many. The following is far from a comprehensive list, but it's at least a starting point we can add to:&lt;br /&gt;
&lt;br /&gt;
== [[Bug Classes]] ==&lt;br /&gt;
&lt;br /&gt;
* [[Bug Classes/Stack overflow|Stack overflow]]&lt;br /&gt;
* [[Bug Classes/Integer overflow|Integer overflow]]&lt;br /&gt;
* [[Bug Classes/Heap overflow|Heap overflow]]&lt;br /&gt;
* [[Bug Classes/Format string injection|Format string injection]]&lt;br /&gt;
* [[Bug Classes/Kernel pointer leak|Kernel pointer leak]]&lt;br /&gt;
* [[Bug Classes/Uninitialized variables|Uninitialized variables]]&lt;br /&gt;
* [[Bug Classes/Use after free|Use-after-free]]&lt;br /&gt;
&lt;br /&gt;
== [[Exploit Methods|Exploitation Methods]] ==&lt;br /&gt;
&lt;br /&gt;
* [[Exploit Methods/Kernel location|Kernel location]]&lt;br /&gt;
* [[Exploit Methods/Text overwrite|Text overwrite]]&lt;br /&gt;
* [[Exploit Methods/Function pointer overwrite|Function pointer overwrite]]&lt;br /&gt;
* [[Exploit Methods/Userspace execution|Userspace execution]]&lt;br /&gt;
* [[Exploit Methods/Userspace data usage|Userspace data usage]]&lt;br /&gt;
* [[Exploit Methods/Reused code chunks|Reused code chunks]]&lt;br /&gt;
&lt;br /&gt;
= Completed Kernel Protections =&lt;br /&gt;
&lt;br /&gt;
The following kernel protections have already been accepted into the mainline Linux kernel, or are in some stage of development.&lt;br /&gt;
&lt;br /&gt;
==== [[Kernel_Protections/HARDENED_ATOMIC|HARDENED_ATOMIC]] ====&lt;br /&gt;
: Kernel reference counter overflow protection&lt;br /&gt;
&lt;br /&gt;
= Specific TODO Items =&lt;br /&gt;
&lt;br /&gt;
Besides the general work outlined above, there are a number of specific tasks that have either been asked about frequently or are otherwise in need of some time and attention:&lt;br /&gt;
&lt;br /&gt;
* Split thread_info off of kernel stack (Done: x86, arm64, s390. Needed on arm, powerpc and others?)&lt;br /&gt;
* Move kernel stack to vmap area (Done: x86, arm64, s390. Needed on powerpc and others?)&lt;br /&gt;
* Implement kernel relocation and KASLR for ARM&lt;br /&gt;
* Write a plugin to clear struct padding&lt;br /&gt;
* Write a plugin to do format string warnings correctly (gcc's -Wformat-security is bad about const strings)&lt;br /&gt;
* Reorganize and rename CONFIG_DEBUG_RODATA (and related options) to something without &amp;quot;DEBUG&amp;quot; in the name (in progress)&lt;br /&gt;
* Make CONFIG_DEBUG_RODATA mandatory (done for arm64 and x86, other archs still need it)&lt;br /&gt;
* Convert remaining BPF JITs to eBPF JIT (with blinding)&lt;br /&gt;
* Write lib/test_bpf.c tests for eBPF constant blinding&lt;br /&gt;
* Further restriction of perf_event_open (e.g. perf_event_paranoid=3)&lt;br /&gt;
* Identify and extend HARDENED_USERCOPY to other usercopy functions (e.g. maybe csum_partial_copy_from_user, csum_and_copy_from_user, csum_and_copy_to_user, csum_partial_copy_nocheck?)&lt;br /&gt;
* Extend HARDENED_USERCOPY to use slab whitelisting&lt;br /&gt;
* Extend HARDENED_USERCOPY to split user-facing malloc()s and in-kernel malloc()s&lt;br /&gt;
* vmalloc stack guard pages&lt;br /&gt;
* protect ARM vector table as fixed-location kernel target&lt;br /&gt;
* disable kuser helpers on arm&lt;br /&gt;
* harden and rename CONFIG_DEBUG_LIST to something better, and make it default=y&lt;br /&gt;
* add zeroing of copy_from_user on failure test to test_usercopy.c&lt;br /&gt;
* consolidate all architecture's use of usercopy into asm-generic/uaccess.h&lt;br /&gt;
* add WARN path for page-spanning usercopy checks (instead of the separate CONFIG)&lt;br /&gt;
* create UNEXPECTED(), like BUG() but without the lock-busting, etc&lt;br /&gt;
* adjust usercopy CONFIG to be  !DEVKMEM &amp;amp;&amp;amp; STRICT_DEVMEM=y (PROC_KCORE is incompat with usercopy too)&lt;br /&gt;
* provide mechanism to check for ro_after_init memory areas, and reject structures not marked ro_after_init in vmbus_register()&lt;br /&gt;
* expand use of __ro_after_init, especially in arch/arm64&lt;br /&gt;
* Add stack-frame walking to usercopy implementations (Done: x86. In progress: arm64. Needed on arm, others?)&lt;br /&gt;
&lt;br /&gt;
= Recommended settings =&lt;br /&gt;
&lt;br /&gt;
People ask from time to time what a good set of security build CONFIGs and runtime sysctls is. This is a brain-dump of the various options for a particularly paranoid system.&lt;br /&gt;
&lt;br /&gt;
== CONFIGs ==&lt;br /&gt;
&lt;br /&gt;
 # Report BUG() conditions and kill the offending process.&lt;br /&gt;
 CONFIG_BUG=y&lt;br /&gt;
 &lt;br /&gt;
 # Make sure kernel page tables have safe permissions.&lt;br /&gt;
 CONFIG_DEBUG_KERNEL=y&lt;br /&gt;
 CONFIG_DEBUG_RODATA=y&lt;br /&gt;
 &lt;br /&gt;
 # Use -fstack-protector-strong (gcc 4.9+) for best stack canary coverage.&lt;br /&gt;
 CONFIG_CC_STACKPROTECTOR=y&lt;br /&gt;
 CONFIG_CC_STACKPROTECTOR_STRONG=y&lt;br /&gt;
 &lt;br /&gt;
 # Do not allow direct physical memory access (but if you must have it, at least enable STRICT mode...)&lt;br /&gt;
 # CONFIG_DEVMEM is not set&lt;br /&gt;
 CONFIG_STRICT_DEVMEM=y&lt;br /&gt;
 CONFIG_IO_STRICT_DEVMEM=y&lt;br /&gt;
 &lt;br /&gt;
 # Provides some protections against SYN flooding.&lt;br /&gt;
 CONFIG_SYN_COOKIES=y&lt;br /&gt;
 &lt;br /&gt;
 # Perform additional validation of various commonly targeted structures.&lt;br /&gt;
 CONFIG_DEBUG_CREDENTIALS=y&lt;br /&gt;
 CONFIG_DEBUG_NOTIFIERS=y&lt;br /&gt;
 CONFIG_DEBUG_LIST=y&lt;br /&gt;
 CONFIG_BUG_ON_DATA_CORRUPTION=y&lt;br /&gt;
 &lt;br /&gt;
 # Provide userspace with seccomp BPF API for syscall attack surface reduction.&lt;br /&gt;
 CONFIG_SECCOMP=y&lt;br /&gt;
 CONFIG_SECCOMP_FILTER=y&lt;br /&gt;
 &lt;br /&gt;
 # Provide userspace with ptrace ancestry protections.&lt;br /&gt;
 CONFIG_SECURITY=y&lt;br /&gt;
 CONFIG_SECURITY_YAMA=y&lt;br /&gt;
 &lt;br /&gt;
 # Perform usercopy bounds checking.&lt;br /&gt;
 CONFIG_HARDENED_USERCOPY=y&lt;br /&gt;
 &lt;br /&gt;
 # Randomize allocator freelists.&lt;br /&gt;
 CONFIG_SLAB_FREELIST_RANDOM=y&lt;br /&gt;
 &lt;br /&gt;
 # Allow allocator validation checking to be enabled (see &amp;quot;slub_debug=P&amp;quot; below).&lt;br /&gt;
 CONFIG_SLUB_DEBUG=y&lt;br /&gt;
 &lt;br /&gt;
 # Wipe higher-level memory allocations when they are freed (needs &amp;quot;page_poison=1&amp;quot; command line below).&lt;br /&gt;
 # (If you can afford even more performance penalty, leave CONFIG_PAGE_POISONING_NO_SANITY=n)&lt;br /&gt;
 CONFIG_PAGE_POISONING=y&lt;br /&gt;
 CONFIG_PAGE_POISONING_NO_SANITY=y&lt;br /&gt;
 CONFIG_PAGE_POISONING_ZERO=y&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this allows direct physical memory writing.&lt;br /&gt;
 # CONFIG_ACPI_CUSTOM_METHOD is not set&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this disables brk ASLR.&lt;br /&gt;
 # CONFIG_COMPAT_BRK is not set&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this allows direct kernel memory writing.&lt;br /&gt;
 # CONFIG_DEVKMEM is not set&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this disables VDSO ASLR.&lt;br /&gt;
 # CONFIG_COMPAT_VDSO is not set&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this allows replacement of running kernel.&lt;br /&gt;
 # CONFIG_KEXEC is not set&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this allows replacement of running kernel.&lt;br /&gt;
 # CONFIG_HIBERNATION is not set&lt;br /&gt;
 &lt;br /&gt;
 # Prior to v4.1, this interface assists heap memory attacks; best to keep it disabled.&lt;br /&gt;
 # CONFIG_INET_DIAG is not set&lt;br /&gt;
 &lt;br /&gt;
 # Easily confused by misconfigured userspace, keep off.&lt;br /&gt;
 # CONFIG_BINFMT_MISC is not set&lt;br /&gt;
 &lt;br /&gt;
 # Use the modern PTY interface (devpts) only.&lt;br /&gt;
 # CONFIG_LEGACY_PTYS is not set&lt;br /&gt;
 &lt;br /&gt;
 # Reboot devices immediately if the kernel experiences an Oops.&lt;br /&gt;
 CONFIG_PANIC_ON_OOPS=y&lt;br /&gt;
 CONFIG_PANIC_TIMEOUT=-1&lt;br /&gt;
 &lt;br /&gt;
 # Keep root from altering kernel memory via loadable modules.&lt;br /&gt;
 # CONFIG_MODULES is not set&lt;br /&gt;
 &lt;br /&gt;
 # But if CONFIG_MODULES=y is needed, modules must at least be signed with a per-build key.&lt;br /&gt;
 CONFIG_DEBUG_SET_MODULE_RONX=y&lt;br /&gt;
 CONFIG_MODULE_SIG=y&lt;br /&gt;
 CONFIG_MODULE_SIG_FORCE=y&lt;br /&gt;
 CONFIG_MODULE_SIG_ALL=y&lt;br /&gt;
 CONFIG_MODULE_SIG_SHA512=y&lt;br /&gt;
 CONFIG_MODULE_SIG_HASH=&amp;quot;sha512&amp;quot;&lt;br /&gt;
 CONFIG_MODULE_SIG_KEY=&amp;quot;certs/signing_key.pem&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== x86_32 ===&lt;br /&gt;
&lt;br /&gt;
 # On 32-bit kernels, require PAE for NX bit support.&lt;br /&gt;
 # CONFIG_M486 is not set&lt;br /&gt;
 # CONFIG_HIGHMEM4G is not set&lt;br /&gt;
 CONFIG_HIGHMEM64G=y&lt;br /&gt;
 CONFIG_X86_PAE=y&lt;br /&gt;
 &lt;br /&gt;
 # Disallow allocating the first 64k of memory.&lt;br /&gt;
 CONFIG_DEFAULT_MMAP_MIN_ADDR=65536&lt;br /&gt;
 &lt;br /&gt;
 # Randomize position of kernel.&lt;br /&gt;
 CONFIG_RANDOMIZE_BASE=y&lt;br /&gt;
&lt;br /&gt;
=== x86_64 ===&lt;br /&gt;
&lt;br /&gt;
 # Full 64-bit means PAE and NX bit.&lt;br /&gt;
 CONFIG_X86_64=y&lt;br /&gt;
 &lt;br /&gt;
 # Disallow allocating the first 64k of memory.&lt;br /&gt;
 CONFIG_DEFAULT_MMAP_MIN_ADDR=65536&lt;br /&gt;
 &lt;br /&gt;
 # Randomize position of kernel and memory.&lt;br /&gt;
 CONFIG_RANDOMIZE_BASE=y&lt;br /&gt;
 CONFIG_RANDOMIZE_MEMORY=y&lt;br /&gt;
 &lt;br /&gt;
 # Modern libc no longer needs a fixed-position mapping in userspace; remove it as a possible target.&lt;br /&gt;
 CONFIG_LEGACY_VSYSCALL_NONE=y&lt;br /&gt;
 &lt;br /&gt;
 # Remove additional attack surface, unless you really need them.&lt;br /&gt;
 # CONFIG_IA32_EMULATION is not set&lt;br /&gt;
 # CONFIG_X86_X32 is not set&lt;br /&gt;
 # CONFIG_MODIFY_LDT_SYSCALL is not set&lt;br /&gt;
&lt;br /&gt;
=== arm ===&lt;br /&gt;
&lt;br /&gt;
 # Disallow allocating the first 32k of memory (cannot be 64k due to ARM loader).&lt;br /&gt;
 CONFIG_DEFAULT_MMAP_MIN_ADDR=32768&lt;br /&gt;
 &lt;br /&gt;
 # For maximal userspace memory area (and maximum ASLR).&lt;br /&gt;
 CONFIG_VMSPLIT_3G=y&lt;br /&gt;
 &lt;br /&gt;
 # If building an out-of-tree Qualcomm kernel, this is similar to CONFIG_DEBUG_RODATA.&lt;br /&gt;
 CONFIG_STRICT_MEMORY_RWX=y&lt;br /&gt;
 &lt;br /&gt;
 # Make sure PXN/PAN emulation is enabled.&lt;br /&gt;
 CONFIG_CPU_SW_DOMAIN_PAN=y&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; old interfaces and needless additional attack surface.&lt;br /&gt;
 # CONFIG_OABI_COMPAT is not set&lt;br /&gt;
&lt;br /&gt;
=== arm64 ===&lt;br /&gt;
&lt;br /&gt;
 # Disallow allocating the first 32k of memory (cannot be 64k due to ARM loader).&lt;br /&gt;
 CONFIG_DEFAULT_MMAP_MIN_ADDR=32768&lt;br /&gt;
 &lt;br /&gt;
 # Randomize position of kernel (requires UEFI RNG or bootloader support for /chosen/kaslr-seed DT property).&lt;br /&gt;
 CONFIG_RANDOMIZE_BASE=y&lt;br /&gt;
&lt;br /&gt;
== kernel command line options ==&lt;br /&gt;
&lt;br /&gt;
 # Enable slub/slab allocator free poisoning (requires CONFIG_SLUB_DEBUG=y above).&lt;br /&gt;
 slub_debug=P&lt;br /&gt;
 &lt;br /&gt;
 # Enable buddy allocator free poisoning (requires CONFIG_PAGE_POISONING=y above).&lt;br /&gt;
 page_poison=1&lt;br /&gt;
&lt;br /&gt;
=== x86_64 ===&lt;br /&gt;
&lt;br /&gt;
 # Remove vsyscall entirely to avoid it being a fixed-position ROP target of any kind.&lt;br /&gt;
 # (Same as CONFIG_LEGACY_VSYSCALL_NONE=y above.)&lt;br /&gt;
 vsyscall=none&lt;br /&gt;
&lt;br /&gt;
== sysctls ==&lt;br /&gt;
&lt;br /&gt;
 # Try to keep kernel address exposures out of various /proc files (kallsyms, modules, etc).&lt;br /&gt;
 kernel.kptr_restrict = 1&lt;br /&gt;
 &lt;br /&gt;
 # Avoid kernel memory address exposures via dmesg.&lt;br /&gt;
 kernel.dmesg_restrict = 1&lt;br /&gt;
 &lt;br /&gt;
 # Block non-uid-0 profiling (needs [https://patchwork.kernel.org/patch/9249919/ distro patch], otherwise this is the same as &amp;quot;= 2&amp;quot;)&lt;br /&gt;
 kernel.perf_event_paranoid = 3&lt;br /&gt;
 &lt;br /&gt;
 # Turn off kexec, even if it's built in.&lt;br /&gt;
 kernel.kexec_load_disabled = 1&lt;br /&gt;
 &lt;br /&gt;
 # Avoid non-ancestor ptrace access to running processes and their credentials.&lt;br /&gt;
 kernel.yama.ptrace_scope = 1&lt;/div&gt;</summary>
		<author><name>DavidWindsor</name></author>
	</entry>
	<entry>
		<id>http://kernsec.org/wiki/index.php?title=Kernel_Protections/refcount_t&amp;diff=3843</id>
		<title>Kernel Protections/refcount t</title>
		<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php?title=Kernel_Protections/refcount_t&amp;diff=3843"/>
		<updated>2017-02-04T06:42:17Z</updated>

		<summary type="html">&lt;p&gt;DavidWindsor: Add refcount_t API&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Summary = &lt;br /&gt;
&lt;br /&gt;
HARDENED_ATOMIC is a kernel self-protection feature that greatly helps with the mitigation of [[Bug Classes/Use after free|use-after-free]] bugs.  It is based on work done by the [https://pax.grsecurity.net PaX Team], originally called [https://forums.grsecurity.net/viewtopic.php?f=7&amp;amp;t=4173 PAX_REFCOUNT].&lt;br /&gt;
&lt;br /&gt;
= Reference Counting API =&lt;br /&gt;
&lt;br /&gt;
HARDENED_ATOMIC introduces a new data type: &amp;lt;tt&amp;gt;refcount_t&amp;lt;/tt&amp;gt;.  This type is to be used for all kernel reference counters.  &lt;br /&gt;
&lt;br /&gt;
The following operations are the kernel reference counting API.  Please note that all operations are atomic, unless otherwise specified. &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;REFCOUNT_INIT(unsigned int)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Initialize a &amp;lt;tt&amp;gt;refcount_t&amp;lt;/tt&amp;gt; object.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;void refcount_set(refcount_t *, unsigned int)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Set a &amp;lt;tt&amp;gt;refcount_t&amp;lt;/tt&amp;gt; object's internal value.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;unsigned int refcount_read(refcount_t *)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Returns the &amp;lt;tt&amp;gt;refcount_t&amp;lt;/tt&amp;gt; object's internal value.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_add_not_zero(unsigned int v, refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Add &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  If &amp;lt;tt&amp;gt;r + v&amp;lt;/tt&amp;gt; causes an overflow, the result of the addition operation is not saved to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if the resulting value of &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is non-zero, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;void refcount_add(unsigned int v, refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Adds &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; and stores the value in &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_inc_not_zero(refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Increments &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; and tests whether &amp;lt;tt&amp;gt;r + 1&amp;lt;/tt&amp;gt; causes an overflow.  If an overflow does occur, the result of the increment operation is not saved to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  Will saturate at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;WARN&amp;lt;/tt&amp;gt;. Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if the resulting value of &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is non-zero, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.  &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;void refcount_inc(refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Increment &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  Will saturate at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;WARN&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_sub_and_test(unsigned int v, refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Subtracts &amp;lt;tt&amp;gt;v&amp;lt;/tt&amp;gt; from &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; and tests whether &amp;lt;tt&amp;gt;r - v&amp;lt;/tt&amp;gt; causes an underflow.  If an underflow does occur, the result of the subtraction operation is not saved to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  Will fail to decrement when saturated at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt;.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if the resulting value of &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is non-zero, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;void refcount_dec(refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  If &amp;lt;tt&amp;gt;r - 1&amp;lt;/tt&amp;gt; causes an underflow, the result of the decrement operation is not saved to &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt;.  Will fail to decrement when saturated at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_dec_if_one(refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Attempts to transition &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; from 1 to 0.  If &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is 1, decrement it to 0.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; was decremented, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.  &lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_dec_not_one(refcount_t *r)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; unless the value of &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is 1.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; was decremented, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; and lock mutex if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; becomes 0.  Will &amp;lt;tt&amp;gt;WARN&amp;lt;/tt&amp;gt; on underflow and fail to decrement if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is saturated at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt;.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is 0 and mutex is held, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.&lt;br /&gt;
&lt;br /&gt;
;'''&amp;lt;tt&amp;gt;bool refcount_dec_and_lock(refcount_t *r, spinlock_t *s)&amp;lt;/tt&amp;gt;'''&lt;br /&gt;
: Decrement &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; and lock spinlock if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; becomes 0.  Will &amp;lt;tt&amp;gt;WARN&amp;lt;/tt&amp;gt; on underflow and fail to decrement if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is saturated at &amp;lt;tt&amp;gt;UINT_MAX&amp;lt;/tt&amp;gt;.  Returns &amp;lt;tt&amp;gt;true&amp;lt;/tt&amp;gt; if &amp;lt;tt&amp;gt;r&amp;lt;/tt&amp;gt; is 0 and spinlock is held, &amp;lt;tt&amp;gt;false&amp;lt;/tt&amp;gt; otherwise.&lt;/div&gt;</summary>
		<author><name>DavidWindsor</name></author>
	</entry>
	<entry>
		<id>http://kernsec.org/wiki/index.php?title=Kernel_Self_Protection_Project&amp;diff=3842</id>
		<title>Kernel Self Protection Project</title>
		<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php?title=Kernel_Self_Protection_Project&amp;diff=3842"/>
		<updated>2017-02-04T05:38:25Z</updated>

		<summary type="html">&lt;p&gt;DavidWindsor: Adjust linked page name to be consistent with existing naming scheme&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Mission Statement =&lt;br /&gt;
&lt;br /&gt;
This project starts with the premise that [https://lwn.net/Articles/410606/ kernel bugs have a very long lifetime], and that the kernel must be designed in ways to protect against these flaws. We must think of [http://lwn.net/Articles/662219/ security beyond fixing bugs]. As a community, we already find and fix individual bugs via static checkers (compiler flags, [http://smatch.sourceforge.net/ smatch], [http://coccinelle.lip6.fr/ coccinelle], [https://scan.coverity.com/projects/linux?tab=overview coverity]) and dynamic checkers (kernel configs, [http://codemonkey.org.uk/projects/trinity/ trinity], [https://www.kernel.org/doc/Documentation/kasan.txt KASan]). Those efforts are important and on-going, but if we want to protect our [http://www.techspot.com/news/57228-google-shows-off-new-version-of-android-announces-1-billion-active-monthly-users.html billion Android phones], our [http://www.zdnet.com/article/2014-the-year-of-the-linux-car/ cars], the [https://training.linuxfoundation.org/why-our-linux-training/training-reviews/linux-foundation-training-prepares-the-international-space-station-for-linux-migration International Space Station], and everything else running Linux, we must get proactive defensive technologies built into the upstream Linux kernel. We need the kernel to [http://kernsec.org/files/lss2015/giant-bags-of-mostly-water.pdf fail safely, instead of just running safely].&lt;br /&gt;
&lt;br /&gt;
These kinds of protections have existed for years in [https://pax.grsecurity.net/ PaX], [https://grsecurity.net/features.php grsecurity], and piles of academic papers. For various social, cultural, and technical reasons, they have not made their way into the upstream kernel, and this project seeks to change that. Our focus is on kernel self-protection, rather than kernel-supported userspace protections. The goal is to eliminate classes of bugs and eliminate methods of exploitation.&lt;br /&gt;
&lt;br /&gt;
= Principles =&lt;br /&gt;
A short list of things to keep in mind when designing self-protection features:&lt;br /&gt;
&lt;br /&gt;
* Patience and an open mind will be needed. We're all trying to make Linux better, so let's stay focused on the results.&lt;br /&gt;
* Features will do more than find bugs: they should be active at run-time to catch previously unknown flaws.&lt;br /&gt;
* Features will not be developer-&amp;quot;opt-in&amp;quot;. When a feature is enabled at build time, it should work for all code built into the kernel (which has the side-effect of also covering out-of-tree code, like in vendor forks).&lt;br /&gt;
&lt;br /&gt;
= Get Involved =&lt;br /&gt;
&lt;br /&gt;
Want to get involved? [http://www.openwall.com/lists/#subscribe Join] the [http://www.openwall.com/lists/kernel-hardening/ kernel hardening mailing list] and introduce yourself. Then pick an area of work from below (or add a new one), coordinate on the mailing list, and get started. If your employer is brave enough to understand how critical this work is, they'll pay you to work on it. If not, the [http://www.linuxfoundation.org/ Linux Foundation]'s [https://www.coreinfrastructure.org/faq Core Infrastructure Initiative] is in a great position to fund specific work proposals. We need kernel developers, compiler developers, testers, backporters, and documentation writers.&lt;br /&gt;
&lt;br /&gt;
= Work Areas =&lt;br /&gt;
&lt;br /&gt;
While there are already a number of upstream [[Feature List|kernel security features]], we are still missing many. The following is far from a comprehensive list, but it's at least a starting point we can add to:&lt;br /&gt;
&lt;br /&gt;
== [[Bug Classes]] ==&lt;br /&gt;
&lt;br /&gt;
* [[Bug Classes/Stack overflow|Stack overflow]]&lt;br /&gt;
* [[Bug Classes/Integer overflow|Integer overflow]]&lt;br /&gt;
* [[Bug Classes/Heap overflow|Heap overflow]]&lt;br /&gt;
* [[Bug Classes/Format string injection|Format string injection]]&lt;br /&gt;
* [[Bug Classes/Kernel pointer leak|Kernel pointer leak]]&lt;br /&gt;
* [[Bug Classes/Uninitialized variables|Uninitialized variables]]&lt;br /&gt;
* [[Bug Classes/Use after free|Use-after-free]]&lt;br /&gt;
&lt;br /&gt;
== [[Exploit Methods|Exploitation Methods]] ==&lt;br /&gt;
&lt;br /&gt;
* [[Exploit Methods/Kernel location|Kernel location]]&lt;br /&gt;
* [[Exploit Methods/Text overwrite|Text overwrite]]&lt;br /&gt;
* [[Exploit Methods/Function pointer overwrite|Function pointer overwrite]]&lt;br /&gt;
* [[Exploit Methods/Userspace execution|Userspace execution]]&lt;br /&gt;
* [[Exploit Methods/Userspace data usage|Userspace data usage]]&lt;br /&gt;
* [[Exploit Methods/Reused code chunks|Reused code chunks]]&lt;br /&gt;
&lt;br /&gt;
= Completed Kernel Protections =&lt;br /&gt;
&lt;br /&gt;
The following kernel protections have already been accepted into the mainline Linux kernel, or are in some stage of development.&lt;br /&gt;
&lt;br /&gt;
==== [[Kernel_Protections/HARDENED_ATOMIC|HARDENED_ATOMIC]] ====&lt;br /&gt;
: Kernel reference counter overflow protection&lt;br /&gt;
&lt;br /&gt;
= Specific TODO Items =&lt;br /&gt;
&lt;br /&gt;
Besides the general work outlined above, there are a number of specific tasks that have either been asked about frequently or otherwise need some time and attention:&lt;br /&gt;
&lt;br /&gt;
* Split thread_info off of kernel stack (Done: x86, arm64, s390. Needed on arm, powerpc and others?)&lt;br /&gt;
* Move kernel stack to vmap area (Done: x86, arm64, s390. Needed on powerpc and others?)&lt;br /&gt;
* Implement kernel relocation and KASLR for ARM&lt;br /&gt;
* Write a plugin to clear struct padding&lt;br /&gt;
* Write a plugin to do format string warnings correctly (gcc's -Wformat-security is bad about const strings)&lt;br /&gt;
* Reorganize and rename CONFIG_DEBUG_RODATA (and related options) to something without &amp;quot;DEBUG&amp;quot; in the name (in progress)&lt;br /&gt;
* Make CONFIG_DEBUG_RODATA mandatory (done for arm64 and x86, other archs still need it)&lt;br /&gt;
* Convert remaining BPF JITs to eBPF JIT (with blinding)&lt;br /&gt;
* Write lib/test_bpf.c tests for eBPF constant blinding&lt;br /&gt;
* Further restriction of perf_event_open (e.g. perf_event_paranoid=3)&lt;br /&gt;
* Identify and extend HARDENED_USERCOPY to other usercopy functions (e.g. maybe csum_partial_copy_from_user, csum_and_copy_from_user, csum_and_copy_to_user, csum_partial_copy_nocheck?)&lt;br /&gt;
* Extend HARDENED_USERCOPY to use slab whitelisting&lt;br /&gt;
* Extend HARDENED_USERCOPY to split user-facing malloc()s and in-kernel malloc()s&lt;br /&gt;
* vmalloc stack guard pages&lt;br /&gt;
* protect ARM vector table as fixed-location kernel target&lt;br /&gt;
* disable kuser helpers on arm&lt;br /&gt;
* harden CONFIG_DEBUG_LIST, rename it, and make it default=y&lt;br /&gt;
* add zeroing of copy_from_user on failure test to test_usercopy.c&lt;br /&gt;
* consolidate all architectures' use of usercopy into asm-generic/uaccess.h&lt;br /&gt;
* add WARN path for page-spanning usercopy checks (instead of the separate CONFIG)&lt;br /&gt;
* create UNEXPECTED(), like BUG() but without the lock-busting, etc&lt;br /&gt;
* adjust usercopy CONFIG to be !DEVKMEM &amp;amp;&amp;amp; STRICT_DEVMEM=y (PROC_KCORE is incompatible with usercopy too)&lt;br /&gt;
* provide mechanism to check for ro_after_init memory areas, and reject structures not marked ro_after_init in vmbus_register()&lt;br /&gt;
* expand use of __ro_after_init, especially in arch/arm64&lt;br /&gt;
* Add stack-frame walking to usercopy implementations (Done: x86. In progress: arm64. Needed on arm, others?)&lt;br /&gt;
&lt;br /&gt;
= Recommended settings =&lt;br /&gt;
&lt;br /&gt;
People ask from time to time what a good set of security build CONFIGs and runtime sysctls is. This is a brain-dump of the various options for a particularly paranoid system.&lt;br /&gt;
&lt;br /&gt;
== CONFIGs ==&lt;br /&gt;
&lt;br /&gt;
 # Report BUG() conditions and kill the offending process.&lt;br /&gt;
 CONFIG_BUG=y&lt;br /&gt;
 &lt;br /&gt;
 # Make sure kernel page tables have safe permissions.&lt;br /&gt;
 CONFIG_DEBUG_KERNEL=y&lt;br /&gt;
 CONFIG_DEBUG_RODATA=y&lt;br /&gt;
 &lt;br /&gt;
 # Use -fstack-protector-strong (gcc 4.9+) for best stack canary coverage.&lt;br /&gt;
 CONFIG_CC_STACKPROTECTOR=y&lt;br /&gt;
 CONFIG_CC_STACKPROTECTOR_STRONG=y&lt;br /&gt;
 &lt;br /&gt;
 # Do not allow direct physical memory access (but if you must have it, at least enable STRICT mode...)&lt;br /&gt;
 # CONFIG_DEVMEM is not set&lt;br /&gt;
 CONFIG_STRICT_DEVMEM=y&lt;br /&gt;
 CONFIG_IO_STRICT_DEVMEM=y&lt;br /&gt;
 &lt;br /&gt;
 # Provides some protections against SYN flooding.&lt;br /&gt;
 CONFIG_SYN_COOKIES=y&lt;br /&gt;
 &lt;br /&gt;
 # Perform additional validation of various commonly targeted structures.&lt;br /&gt;
 CONFIG_DEBUG_CREDENTIALS=y&lt;br /&gt;
 CONFIG_DEBUG_NOTIFIERS=y&lt;br /&gt;
 CONFIG_DEBUG_LIST=y&lt;br /&gt;
 CONFIG_BUG_ON_DATA_CORRUPTION=y&lt;br /&gt;
 &lt;br /&gt;
 # Provide userspace with seccomp BPF API for syscall attack surface reduction.&lt;br /&gt;
 CONFIG_SECCOMP=y&lt;br /&gt;
 CONFIG_SECCOMP_FILTER=y&lt;br /&gt;
 &lt;br /&gt;
 # Provide userspace with ptrace ancestry protections.&lt;br /&gt;
 CONFIG_SECURITY=y&lt;br /&gt;
 CONFIG_SECURITY_YAMA=y&lt;br /&gt;
 &lt;br /&gt;
 # Perform usercopy bounds checking.&lt;br /&gt;
 CONFIG_HARDENED_USERCOPY=y&lt;br /&gt;
 &lt;br /&gt;
 # Randomize allocator freelists.&lt;br /&gt;
 CONFIG_SLAB_FREELIST_RANDOM=y&lt;br /&gt;
 &lt;br /&gt;
 # Allow allocator validation checking to be enabled (see &amp;quot;slub_debug=P&amp;quot; below).&lt;br /&gt;
 CONFIG_SLUB_DEBUG=y&lt;br /&gt;
 &lt;br /&gt;
 # Wipe higher-level memory allocations when they are freed (needs &amp;quot;page_poison=1&amp;quot; command line below).&lt;br /&gt;
 # (If you can afford even more performance penalty, leave CONFIG_PAGE_POISONING_NO_SANITY=n)&lt;br /&gt;
 CONFIG_PAGE_POISONING=y&lt;br /&gt;
 CONFIG_PAGE_POISONING_NO_SANITY=y&lt;br /&gt;
 CONFIG_PAGE_POISONING_ZERO=y&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this allows direct physical memory writing.&lt;br /&gt;
 # CONFIG_ACPI_CUSTOM_METHOD is not set&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this disables brk ASLR.&lt;br /&gt;
 # CONFIG_COMPAT_BRK is not set&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this allows direct kernel memory writing.&lt;br /&gt;
 # CONFIG_DEVKMEM is not set&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this disables VDSO ASLR.&lt;br /&gt;
 # CONFIG_COMPAT_VDSO is not set&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this allows replacement of running kernel.&lt;br /&gt;
 # CONFIG_KEXEC is not set&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this allows replacement of running kernel.&lt;br /&gt;
 # CONFIG_HIBERNATION is not set&lt;br /&gt;
 &lt;br /&gt;
 # Prior to v4.1, this interface assists heap memory attacks; best to keep it disabled.&lt;br /&gt;
 # CONFIG_INET_DIAG is not set&lt;br /&gt;
 &lt;br /&gt;
 # Easily confused by misconfigured userspace, keep off.&lt;br /&gt;
 # CONFIG_BINFMT_MISC is not set&lt;br /&gt;
 &lt;br /&gt;
 # Use the modern PTY interface (devpts) only.&lt;br /&gt;
 # CONFIG_LEGACY_PTYS is not set&lt;br /&gt;
 &lt;br /&gt;
 # Reboot devices immediately if the kernel experiences an Oops.&lt;br /&gt;
 CONFIG_PANIC_ON_OOPS=y&lt;br /&gt;
 CONFIG_PANIC_TIMEOUT=-1&lt;br /&gt;
 &lt;br /&gt;
 # Keep root from altering kernel memory via loadable modules.&lt;br /&gt;
 # CONFIG_MODULES is not set&lt;br /&gt;
 &lt;br /&gt;
 # But if CONFIG_MODULES=y is needed, modules must at least be signed with a per-build key.&lt;br /&gt;
 CONFIG_DEBUG_SET_MODULE_RONX=y&lt;br /&gt;
 CONFIG_MODULE_SIG=y&lt;br /&gt;
 CONFIG_MODULE_SIG_FORCE=y&lt;br /&gt;
 CONFIG_MODULE_SIG_ALL=y&lt;br /&gt;
 CONFIG_MODULE_SIG_SHA512=y&lt;br /&gt;
 CONFIG_MODULE_SIG_HASH=&amp;quot;sha512&amp;quot;&lt;br /&gt;
 CONFIG_MODULE_SIG_KEY=&amp;quot;certs/signing_key.pem&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== x86_32 ===&lt;br /&gt;
&lt;br /&gt;
 # On 32-bit kernels, require PAE for NX bit support.&lt;br /&gt;
 # CONFIG_M486 is not set&lt;br /&gt;
 # CONFIG_HIGHMEM4G is not set&lt;br /&gt;
 CONFIG_HIGHMEM64G=y&lt;br /&gt;
 CONFIG_X86_PAE=y&lt;br /&gt;
 &lt;br /&gt;
 # Disallow allocating the first 64k of memory.&lt;br /&gt;
 CONFIG_DEFAULT_MMAP_MIN_ADDR=65536&lt;br /&gt;
 &lt;br /&gt;
 # Randomize position of kernel.&lt;br /&gt;
 CONFIG_RANDOMIZE_BASE=y&lt;br /&gt;
&lt;br /&gt;
=== x86_64 ===&lt;br /&gt;
&lt;br /&gt;
 # Full 64-bit means PAE and NX bit.&lt;br /&gt;
 CONFIG_X86_64=y&lt;br /&gt;
 &lt;br /&gt;
 # Disallow allocating the first 64k of memory.&lt;br /&gt;
 CONFIG_DEFAULT_MMAP_MIN_ADDR=65536&lt;br /&gt;
 &lt;br /&gt;
 # Randomize position of kernel and memory.&lt;br /&gt;
 CONFIG_RANDOMIZE_BASE=y&lt;br /&gt;
 CONFIG_RANDOMIZE_MEMORY=y&lt;br /&gt;
 &lt;br /&gt;
 # Modern libc no longer needs a fixed-position mapping in userspace; remove it as a possible target.&lt;br /&gt;
 CONFIG_LEGACY_VSYSCALL_NONE=y&lt;br /&gt;
 &lt;br /&gt;
 # Remove additional attack surface, unless you really need them.&lt;br /&gt;
 # CONFIG_IA32_EMULATION is not set&lt;br /&gt;
 # CONFIG_X86_X32 is not set&lt;br /&gt;
 # CONFIG_MODIFY_LDT_SYSCALL is not set&lt;br /&gt;
&lt;br /&gt;
=== arm ===&lt;br /&gt;
&lt;br /&gt;
 # Disallow allocating the first 32k of memory (cannot be 64k due to ARM loader).&lt;br /&gt;
 CONFIG_DEFAULT_MMAP_MIN_ADDR=32768&lt;br /&gt;
 &lt;br /&gt;
 # For maximal userspace memory area (and maximum ASLR).&lt;br /&gt;
 CONFIG_VMSPLIT_3G=y&lt;br /&gt;
 &lt;br /&gt;
 # If building an out-of-tree Qualcomm kernel, this is similar to CONFIG_DEBUG_RODATA.&lt;br /&gt;
 CONFIG_STRICT_MEMORY_RWX=y&lt;br /&gt;
 &lt;br /&gt;
 # Make sure PXN/PAN emulation is enabled.&lt;br /&gt;
 CONFIG_CPU_SW_DOMAIN_PAN=y&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; old interfaces and needless additional attack surface.&lt;br /&gt;
 # CONFIG_OABI_COMPAT is not set&lt;br /&gt;
&lt;br /&gt;
=== arm64 ===&lt;br /&gt;
&lt;br /&gt;
 # Disallow allocating the first 32k of memory (cannot be 64k due to ARM loader).&lt;br /&gt;
 CONFIG_DEFAULT_MMAP_MIN_ADDR=32768&lt;br /&gt;
 &lt;br /&gt;
 # Randomize position of kernel (requires UEFI RNG or bootloader support for /chosen/kaslr-seed DT property).&lt;br /&gt;
 CONFIG_RANDOMIZE_BASE=y&lt;br /&gt;
&lt;br /&gt;
== kernel command line options ==&lt;br /&gt;
&lt;br /&gt;
 # Enable slub/slab allocator free poisoning (requires CONFIG_SLUB_DEBUG=y above).&lt;br /&gt;
 slub_debug=P&lt;br /&gt;
 &lt;br /&gt;
 # Enable buddy allocator free poisoning (requires CONFIG_PAGE_POISONING=y above).&lt;br /&gt;
 page_poison=1&lt;br /&gt;
&lt;br /&gt;
=== x86_64 ===&lt;br /&gt;
&lt;br /&gt;
 # Remove vsyscall entirely to avoid it being a fixed-position ROP target of any kind.&lt;br /&gt;
 # (Same as CONFIG_LEGACY_VSYSCALL_NONE=y above.)&lt;br /&gt;
 vsyscall=none&lt;br /&gt;
&lt;br /&gt;
== sysctls ==&lt;br /&gt;
&lt;br /&gt;
 # Try to keep kernel address exposures out of various /proc files (kallsyms, modules, etc).&lt;br /&gt;
 kernel.kptr_restrict = 1&lt;br /&gt;
 &lt;br /&gt;
 # Avoid kernel memory address exposures via dmesg.&lt;br /&gt;
 kernel.dmesg_restrict = 1&lt;br /&gt;
 &lt;br /&gt;
 # Block non-uid-0 profiling (needs [https://patchwork.kernel.org/patch/9249919/ distro patch], otherwise this is the same as &amp;quot;= 2&amp;quot;)&lt;br /&gt;
 kernel.perf_event_paranoid = 3&lt;br /&gt;
 &lt;br /&gt;
 # Turn off kexec, even if it's built in.&lt;br /&gt;
 kernel.kexec_load_disabled = 1&lt;br /&gt;
 &lt;br /&gt;
 # Avoid non-ancestor ptrace access to running processes and their credentials.&lt;br /&gt;
 kernel.yama.ptrace_scope = 1&lt;/div&gt;</summary>
		<author><name>DavidWindsor</name></author>
	</entry>
	<entry>
		<id>http://kernsec.org/wiki/index.php?title=Kernel_Self_Protection_Project&amp;diff=3841</id>
		<title>Kernel Self Protection Project</title>
		<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php?title=Kernel_Self_Protection_Project&amp;diff=3841"/>
		<updated>2017-02-04T05:34:45Z</updated>

		<summary type="html">&lt;p&gt;DavidWindsor: Create sections for Completed Kernel Protections and HARDENED_ATOMIC subsection&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Mission Statement =&lt;br /&gt;
&lt;br /&gt;
This project starts with the premise that [https://lwn.net/Articles/410606/ kernel bugs have a very long lifetime], and that the kernel must be designed in ways to protect against these flaws. We must think of [http://lwn.net/Articles/662219/ security beyond fixing bugs]. As a community, we already find and fix individual bugs via static checkers (compiler flags, [http://smatch.sourceforge.net/ smatch], [http://coccinelle.lip6.fr/ coccinelle], [https://scan.coverity.com/projects/linux?tab=overview coverity]) and dynamic checkers (kernel configs, [http://codemonkey.org.uk/projects/trinity/ trinity], [https://www.kernel.org/doc/Documentation/kasan.txt KASan]). Those efforts are important and on-going, but if we want to protect our [http://www.techspot.com/news/57228-google-shows-off-new-version-of-android-announces-1-billion-active-monthly-users.html billion Android phones], our [http://www.zdnet.com/article/2014-the-year-of-the-linux-car/ cars], the [https://training.linuxfoundation.org/why-our-linux-training/training-reviews/linux-foundation-training-prepares-the-international-space-station-for-linux-migration International Space Station], and everything else running Linux, we must get proactive defensive technologies built into the upstream Linux kernel. We need the kernel to [http://kernsec.org/files/lss2015/giant-bags-of-mostly-water.pdf fail safely, instead of just running safely].&lt;br /&gt;
&lt;br /&gt;
These kinds of protections have existed for years in [https://pax.grsecurity.net/ PaX], [https://grsecurity.net/features.php grsecurity], and piles of academic papers. For various social, cultural, and technical reasons, they have not made their way into the upstream kernel, and this project seeks to change that. Our focus is on kernel self-protection, rather than kernel-supported userspace protections. The goal is to eliminate classes of bugs and eliminate methods of exploitation.&lt;br /&gt;
&lt;br /&gt;
= Principles =&lt;br /&gt;
A short list of things to keep in mind when designing self-protection features:&lt;br /&gt;
&lt;br /&gt;
* Patience and an open mind will be needed. We're all trying to make Linux better, so let's stay focused on the results.&lt;br /&gt;
* Features will be more than just finding bugs; they should be active at run-time to catch previously unknown flaws.&lt;br /&gt;
* Features will not be developer-&amp;quot;opt-in&amp;quot;. When a feature is enabled at build time, it should work for all code built into the kernel (which has the side-effect of also covering out-of-tree code, like in vendor forks).&lt;br /&gt;
&lt;br /&gt;
= Get Involved =&lt;br /&gt;
&lt;br /&gt;
Want to get involved? [http://www.openwall.com/lists/#subscribe Join] the [http://www.openwall.com/lists/kernel-hardening/ kernel hardening mailing list] and introduce yourself. Then pick an area of work from below (or add a new one), coordinate on the mailing list, and get started. If your employer is brave enough to understand how critical this work is, they'll pay you to work on it. If not, the [http://www.linuxfoundation.org/ Linux Foundation]'s [https://www.coreinfrastructure.org/faq Core Infrastructure Initiative] is in a great position to fund specific work proposals. We need kernel developers, compiler developers, testers, backporters, and documentation writers.&lt;br /&gt;
&lt;br /&gt;
= Work Areas =&lt;br /&gt;
&lt;br /&gt;
While there are already a number of upstream [[Feature List|kernel security features]], we are still missing many. While the following is far from a comprehensive list, it's at least a starting point we can add to:&lt;br /&gt;
&lt;br /&gt;
== [[Bug Classes]] ==&lt;br /&gt;
&lt;br /&gt;
* [[Bug Classes/Stack overflow|Stack overflow]]&lt;br /&gt;
* [[Bug Classes/Integer overflow|Integer overflow]]&lt;br /&gt;
* [[Bug Classes/Heap overflow|Heap overflow]]&lt;br /&gt;
* [[Bug Classes/Format string injection|Format string injection]]&lt;br /&gt;
* [[Bug Classes/Kernel pointer leak|Kernel pointer leak]]&lt;br /&gt;
* [[Bug Classes/Uninitialized variables|Uninitialized variables]]&lt;br /&gt;
* [[Bug Classes/Use after free|Use-after-free]]&lt;br /&gt;
&lt;br /&gt;
== [[Exploit Methods|Exploitation Methods]] ==&lt;br /&gt;
&lt;br /&gt;
* [[Exploit Methods/Kernel location|Kernel location]]&lt;br /&gt;
* [[Exploit Methods/Text overwrite|Text overwrite]]&lt;br /&gt;
* [[Exploit Methods/Function pointer overwrite|Function pointer overwrite]]&lt;br /&gt;
* [[Exploit Methods/Userspace execution|Userspace execution]]&lt;br /&gt;
* [[Exploit Methods/Userspace data usage|Userspace data usage]]&lt;br /&gt;
* [[Exploit Methods/Reused code chunks|Reused code chunks]]&lt;br /&gt;
&lt;br /&gt;
= Completed Kernel Protections =&lt;br /&gt;
&lt;br /&gt;
The following kernel protections have already been accepted into the mainline Linux kernel or are in some stage of development.&lt;br /&gt;
&lt;br /&gt;
==== [[Protections/HARDENED_ATOMIC|HARDENED_ATOMIC]] ====&lt;br /&gt;
: Kernel reference counter overflow protection&lt;br /&gt;
&lt;br /&gt;
= Specific TODO Items =&lt;br /&gt;
&lt;br /&gt;
Besides the general work outlined above, there are a number of specific tasks that have either been asked about frequently or are otherwise in need of some time and attention:&lt;br /&gt;
&lt;br /&gt;
* Split thread_info off of kernel stack (Done: x86, arm64, s390. Needed on arm, powerpc and others?)&lt;br /&gt;
* Move kernel stack to vmap area (Done: x86, arm64, s390. Needed on powerpc and others?)&lt;br /&gt;
* Implement kernel relocation and KASLR for ARM&lt;br /&gt;
* Write a plugin to clear struct padding&lt;br /&gt;
* Write a plugin to do format string warnings correctly (gcc's -Wformat-security is bad about const strings)&lt;br /&gt;
* Reorganize and rename CONFIG_DEBUG_RODATA (and related options) to something without &amp;quot;DEBUG&amp;quot; in the name (in progress)&lt;br /&gt;
* Make CONFIG_DEBUG_RODATA mandatory (done for arm64 and x86, other archs still need it)&lt;br /&gt;
* Convert remaining BPF JITs to eBPF JIT (with blinding)&lt;br /&gt;
* Write lib/test_bpf.c tests for eBPF constant blinding&lt;br /&gt;
* Further restriction of perf_event_open (e.g. perf_event_paranoid=3)&lt;br /&gt;
* Identify and extend HARDENED_USERCOPY to other usercopy functions (e.g. maybe csum_partial_copy_from_user, csum_and_copy_from_user, csum_and_copy_to_user, csum_partial_copy_nocheck?)&lt;br /&gt;
* Extend HARDENED_USERCOPY to use slab whitelisting&lt;br /&gt;
* Extend HARDENED_USERCOPY to split user-facing malloc()s and in-kernel malloc()s&lt;br /&gt;
* vmalloc stack guard pages&lt;br /&gt;
* protect ARM vector table as fixed-location kernel target&lt;br /&gt;
* disable kuser helpers on arm&lt;br /&gt;
* harden CONFIG_DEBUG_LIST, rename it to something better, and make it default=y&lt;br /&gt;
* add zeroing of copy_from_user on failure test to test_usercopy.c&lt;br /&gt;
* consolidate all architecture's use of usercopy into asm-generic/uaccess.h&lt;br /&gt;
* add WARN path for page-spanning usercopy checks (instead of the separate CONFIG)&lt;br /&gt;
* create UNEXPECTED(), like BUG() but without the lock-busting, etc&lt;br /&gt;
* adjust usercopy CONFIG to be  !DEVKMEM &amp;amp;&amp;amp; STRICT_DEVMEM=y (PROC_KCORE is incompat with usercopy too)&lt;br /&gt;
* provide mechanism to check for ro_after_init memory areas, and reject structures not marked ro_after_init in vmbus_register()&lt;br /&gt;
* expand use of __ro_after_init, especially in arch/arm64&lt;br /&gt;
* Add stack-frame walking to usercopy implementations (Done: x86. In progress: arm64. Needed on arm, others?)&lt;br /&gt;
&lt;br /&gt;
= Recommended settings =&lt;br /&gt;
&lt;br /&gt;
People ask from time to time what a good set of security build CONFIGs and runtime sysctls is. This is a brain-dump of the various options for a particularly paranoid system.&lt;br /&gt;
&lt;br /&gt;
== CONFIGs ==&lt;br /&gt;
&lt;br /&gt;
 # Report BUG() conditions and kill the offending process.&lt;br /&gt;
 CONFIG_BUG=y&lt;br /&gt;
 &lt;br /&gt;
 # Make sure kernel page tables have safe permissions.&lt;br /&gt;
 CONFIG_DEBUG_KERNEL=y&lt;br /&gt;
 CONFIG_DEBUG_RODATA=y&lt;br /&gt;
 &lt;br /&gt;
 # Use -fstack-protector-strong (gcc 4.9+) for best stack canary coverage.&lt;br /&gt;
 CONFIG_CC_STACKPROTECTOR=y&lt;br /&gt;
 CONFIG_CC_STACKPROTECTOR_STRONG=y&lt;br /&gt;
 &lt;br /&gt;
 # Do not allow direct physical memory access (but if you must have it, at least enable STRICT mode...)&lt;br /&gt;
 # CONFIG_DEVMEM is not set&lt;br /&gt;
 CONFIG_STRICT_DEVMEM=y&lt;br /&gt;
 CONFIG_IO_STRICT_DEVMEM=y&lt;br /&gt;
 &lt;br /&gt;
 # Provides some protections against SYN flooding.&lt;br /&gt;
 CONFIG_SYN_COOKIES=y&lt;br /&gt;
 &lt;br /&gt;
 # Perform additional validation of various commonly targeted structures.&lt;br /&gt;
 CONFIG_DEBUG_CREDENTIALS=y&lt;br /&gt;
 CONFIG_DEBUG_NOTIFIERS=y&lt;br /&gt;
 CONFIG_DEBUG_LIST=y&lt;br /&gt;
 CONFIG_BUG_ON_DATA_CORRUPTION=y&lt;br /&gt;
 &lt;br /&gt;
 # Provide userspace with seccomp BPF API for syscall attack surface reduction.&lt;br /&gt;
 CONFIG_SECCOMP=y&lt;br /&gt;
 CONFIG_SECCOMP_FILTER=y&lt;br /&gt;
 &lt;br /&gt;
 # Provide userspace with ptrace ancestry protections.&lt;br /&gt;
 CONFIG_SECURITY=y&lt;br /&gt;
 CONFIG_SECURITY_YAMA=y&lt;br /&gt;
 &lt;br /&gt;
 # Perform usercopy bounds checking.&lt;br /&gt;
 CONFIG_HARDENED_USERCOPY=y&lt;br /&gt;
 &lt;br /&gt;
 # Randomize allocator freelists.&lt;br /&gt;
 CONFIG_SLAB_FREELIST_RANDOM=y&lt;br /&gt;
 &lt;br /&gt;
 # Allow allocator validation checking to be enabled (see &amp;quot;slub_debug=P&amp;quot; below).&lt;br /&gt;
 CONFIG_SLUB_DEBUG=y&lt;br /&gt;
 &lt;br /&gt;
 # Wipe higher-level memory allocations when they are freed (needs &amp;quot;page_poison=1&amp;quot; command line below).&lt;br /&gt;
 # (If you can afford even more performance penalty, leave CONFIG_PAGE_POISONING_NO_SANITY=n)&lt;br /&gt;
 CONFIG_PAGE_POISONING=y&lt;br /&gt;
 CONFIG_PAGE_POISONING_NO_SANITY=y&lt;br /&gt;
 CONFIG_PAGE_POISONING_ZERO=y&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this allows direct physical memory writing.&lt;br /&gt;
 # CONFIG_ACPI_CUSTOM_METHOD is not set&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this disables brk ASLR.&lt;br /&gt;
 # CONFIG_COMPAT_BRK is not set&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this allows direct kernel memory writing.&lt;br /&gt;
 # CONFIG_DEVKMEM is not set&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this disables VDSO ASLR.&lt;br /&gt;
 # CONFIG_COMPAT_VDSO is not set&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this allows replacement of running kernel.&lt;br /&gt;
 # CONFIG_KEXEC is not set&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; enabling this allows replacement of running kernel.&lt;br /&gt;
 # CONFIG_HIBERNATION is not set&lt;br /&gt;
 &lt;br /&gt;
 # Prior to v4.1, assists heap memory attacks; best to keep interface disabled.&lt;br /&gt;
 # CONFIG_INET_DIAG is not set&lt;br /&gt;
 &lt;br /&gt;
 # Easily confused by misconfigured userspace; keep off.&lt;br /&gt;
 # CONFIG_BINFMT_MISC is not set&lt;br /&gt;
 &lt;br /&gt;
 # Use the modern PTY interface (devpts) only.&lt;br /&gt;
 # CONFIG_LEGACY_PTYS is not set&lt;br /&gt;
 &lt;br /&gt;
 # Reboot devices immediately if kernel experiences an Oops.&lt;br /&gt;
 CONFIG_PANIC_ON_OOPS=y&lt;br /&gt;
 CONFIG_PANIC_TIMEOUT=-1&lt;br /&gt;
 &lt;br /&gt;
 # Keep root from altering kernel memory via loadable modules.&lt;br /&gt;
 # CONFIG_MODULES is not set&lt;br /&gt;
 &lt;br /&gt;
 # But if CONFIG_MODULES=y is needed, at least make sure modules are signed with a per-build key.&lt;br /&gt;
 CONFIG_DEBUG_SET_MODULE_RONX=y&lt;br /&gt;
 CONFIG_MODULE_SIG=y&lt;br /&gt;
 CONFIG_MODULE_SIG_FORCE=y&lt;br /&gt;
 CONFIG_MODULE_SIG_ALL=y&lt;br /&gt;
 CONFIG_MODULE_SIG_SHA512=y&lt;br /&gt;
 CONFIG_MODULE_SIG_HASH=&amp;quot;sha512&amp;quot;&lt;br /&gt;
 CONFIG_MODULE_SIG_KEY=&amp;quot;certs/signing_key.pem&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== x86_32 ===&lt;br /&gt;
&lt;br /&gt;
 # On 32-bit kernels, require PAE for NX bit support.&lt;br /&gt;
 # CONFIG_M486 is not set&lt;br /&gt;
 # CONFIG_HIGHMEM4G is not set&lt;br /&gt;
 CONFIG_HIGHMEM64G=y&lt;br /&gt;
 CONFIG_X86_PAE=y&lt;br /&gt;
 &lt;br /&gt;
 # Disallow allocating the first 64k of memory.&lt;br /&gt;
 CONFIG_DEFAULT_MMAP_MIN_ADDR=65536&lt;br /&gt;
 &lt;br /&gt;
 # Randomize position of kernel.&lt;br /&gt;
 CONFIG_RANDOMIZE_BASE=y&lt;br /&gt;
&lt;br /&gt;
=== x86_64 ===&lt;br /&gt;
&lt;br /&gt;
 # Full 64-bit means PAE and NX bit.&lt;br /&gt;
 CONFIG_X86_64=y&lt;br /&gt;
 &lt;br /&gt;
 # Disallow allocating the first 64k of memory.&lt;br /&gt;
 CONFIG_DEFAULT_MMAP_MIN_ADDR=65536&lt;br /&gt;
 &lt;br /&gt;
 # Randomize position of kernel and memory.&lt;br /&gt;
 CONFIG_RANDOMIZE_BASE=y&lt;br /&gt;
 CONFIG_RANDOMIZE_MEMORY=y&lt;br /&gt;
 &lt;br /&gt;
 # Modern libc no longer needs a fixed-position mapping in userspace; remove it as a possible target.&lt;br /&gt;
 CONFIG_LEGACY_VSYSCALL_NONE=y&lt;br /&gt;
 &lt;br /&gt;
 # Remove additional attack surface, unless you really need them.&lt;br /&gt;
 # CONFIG_IA32_EMULATION is not set&lt;br /&gt;
 # CONFIG_X86_X32 is not set&lt;br /&gt;
 # CONFIG_MODIFY_LDT_SYSCALL is not set&lt;br /&gt;
&lt;br /&gt;
=== arm ===&lt;br /&gt;
&lt;br /&gt;
 # Disallow allocating the first 32k of memory (cannot be 64k due to ARM loader).&lt;br /&gt;
 CONFIG_DEFAULT_MMAP_MIN_ADDR=32768&lt;br /&gt;
 &lt;br /&gt;
 # For maximal userspace memory area (and maximum ASLR).&lt;br /&gt;
 CONFIG_VMSPLIT_3G=y&lt;br /&gt;
 &lt;br /&gt;
 # If building an out-of-tree Qualcomm kernel, this is similar to CONFIG_DEBUG_RODATA.&lt;br /&gt;
 CONFIG_STRICT_MEMORY_RWX=y&lt;br /&gt;
 &lt;br /&gt;
 # Make sure PXN/PAN emulation is enabled.&lt;br /&gt;
 CONFIG_CPU_SW_DOMAIN_PAN=y&lt;br /&gt;
 &lt;br /&gt;
 # Dangerous; old interfaces and needless additional attack surface.&lt;br /&gt;
 # CONFIG_OABI_COMPAT is not set&lt;br /&gt;
&lt;br /&gt;
=== arm64 ===&lt;br /&gt;
&lt;br /&gt;
 # Disallow allocating the first 32k of memory (cannot be 64k due to ARM loader).&lt;br /&gt;
 CONFIG_DEFAULT_MMAP_MIN_ADDR=32768&lt;br /&gt;
 &lt;br /&gt;
 # Randomize position of kernel (requires UEFI RNG or bootloader support for /chosen/kaslr-seed DT property).&lt;br /&gt;
 CONFIG_RANDOMIZE_BASE=y&lt;br /&gt;
&lt;br /&gt;
== kernel command line options ==&lt;br /&gt;
&lt;br /&gt;
 # Enable slub/slab allocator free poisoning (requires CONFIG_SLUB_DEBUG=y above).&lt;br /&gt;
 slub_debug=P&lt;br /&gt;
 &lt;br /&gt;
 # Enable buddy allocator free poisoning (requires CONFIG_PAGE_POISONING=y above).&lt;br /&gt;
 page_poison=1&lt;br /&gt;
&lt;br /&gt;
=== x86_64 ===&lt;br /&gt;
&lt;br /&gt;
 # Remove vsyscall entirely to avoid it being a fixed-position ROP target of any kind.&lt;br /&gt;
 # (Same as CONFIG_LEGACY_VSYSCALL_NONE=y above.)&lt;br /&gt;
 vsyscall=none&lt;br /&gt;
&lt;br /&gt;
== sysctls ==&lt;br /&gt;
&lt;br /&gt;
 # Try to keep kernel address exposures out of various /proc files (kallsyms, modules, etc).&lt;br /&gt;
 kernel.kptr_restrict = 1&lt;br /&gt;
 &lt;br /&gt;
 # Avoid kernel memory address exposures via dmesg.&lt;br /&gt;
 kernel.dmesg_restrict = 1&lt;br /&gt;
 &lt;br /&gt;
 # Block non-uid-0 profiling (needs [https://patchwork.kernel.org/patch/9249919/ distro patch], otherwise this is the same as &amp;quot;= 2&amp;quot;)&lt;br /&gt;
 kernel.perf_event_paranoid = 3&lt;br /&gt;
 &lt;br /&gt;
 # Turn off kexec, even if it's built in.&lt;br /&gt;
 kernel.kexec_load_disabled = 1&lt;br /&gt;
 &lt;br /&gt;
 # Avoid non-ancestor ptrace access to running processes and their credentials.&lt;br /&gt;
 kernel.yama.ptrace_scope = 1&lt;/div&gt;</summary>
		<author><name>DavidWindsor</name></author>
	</entry>
	<entry>
		<id>http://kernsec.org/wiki/index.php?title=Bug_Classes/Use_after_free&amp;diff=3840</id>
		<title>Bug Classes/Use after free</title>
		<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php?title=Bug_Classes/Use_after_free&amp;diff=3840"/>
		<updated>2017-02-04T05:14:40Z</updated>

		<summary type="html">&lt;p&gt;DavidWindsor: Add reference counter overflow protection to Mitigations&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Details =&lt;br /&gt;
When a memory allocation is freed but there are still accidental users of that memory, an attacker may be able to control the new allocation that fills the freed area and then manipulate its contents so that the system uses its stale pointer while expecting a different structure than is currently present. If the structure contains function pointers, this allows for trivial execution control.&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
&lt;br /&gt;
* [http://perception-point.io/2016/01/14/analysis-and-exploitation-of-a-linux-kernel-vulnerability-cve-2016-0728/ keyring use-after-free]&lt;br /&gt;
&lt;br /&gt;
= Mitigations =&lt;br /&gt;
&lt;br /&gt;
* clearing memory on free can stop attacks where there is no reallocation control (e.g. PAX_MEMORY_SANITIZE)&lt;br /&gt;
* segregating memory used by the kernel and by userspace can stop attacks where this boundary is crossed (e.g. PAX_USERCOPY)&lt;br /&gt;
* randomizing heap allocations can frustrate the reallocation efforts the attack needs to perform (e.g. OpenBSD malloc)&lt;br /&gt;
* reference counter overflow protection (e.g. PAX_REFCOUNT, HARDENED_ATOMIC)&lt;/div&gt;</summary>
		<author><name>DavidWindsor</name></author>
	</entry>
	<entry>
		<id>http://kernsec.org/wiki/index.php?title=Bug_Classes/Use_after_free&amp;diff=3839</id>
		<title>Bug Classes/Use after free</title>
		<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php?title=Bug_Classes/Use_after_free&amp;diff=3839"/>
		<updated>2017-02-04T05:13:56Z</updated>

		<summary type="html">&lt;p&gt;DavidWindsor: Undo revision 3836 by DavidWindsor (talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Details =&lt;br /&gt;
When a memory allocation is freed but there are still accidental users of that memory, an attacker may be able to control the new allocation that fills the freed area and then manipulate its contents so that the system uses its stale pointer while expecting a different structure than is currently present. If the structure contains function pointers, this allows for trivial execution control.&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
&lt;br /&gt;
* [http://perception-point.io/2016/01/14/analysis-and-exploitation-of-a-linux-kernel-vulnerability-cve-2016-0728/ keyring use-after-free]&lt;br /&gt;
&lt;br /&gt;
= Mitigations =&lt;br /&gt;
&lt;br /&gt;
* clearing memory on free can stop attacks where there is no reallocation control (e.g. PAX_MEMORY_SANITIZE)&lt;br /&gt;
* segregating memory used by the kernel and by userspace can stop attacks where this boundary is crossed (e.g. PAX_USERCOPY)&lt;br /&gt;
* randomizing heap allocations can frustrate the reallocation efforts the attack needs to perform (e.g. OpenBSD malloc)&lt;/div&gt;</summary>
		<author><name>DavidWindsor</name></author>
	</entry>
	<entry>
		<id>http://kernsec.org/wiki/index.php?title=Bug_Classes/Use_after_free&amp;diff=3838</id>
		<title>Bug Classes/Use after free</title>
		<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php?title=Bug_Classes/Use_after_free&amp;diff=3838"/>
		<updated>2017-02-04T05:13:36Z</updated>

		<summary type="html">&lt;p&gt;DavidWindsor: Undo revision 3837 by DavidWindsor (talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Details =&lt;br /&gt;
When a memory allocation is freed but there are still accidental users of that memory, an attacker may be able to control the new allocation that fills the freed area and then manipulate its contents so that the system uses its stale pointer while expecting a different structure than is currently present. If the structure contains function pointers, this allows for trivial execution control.&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
&lt;br /&gt;
* [http://perception-point.io/2016/01/14/analysis-and-exploitation-of-a-linux-kernel-vulnerability-cve-2016-0728/ keyring use-after-free]&lt;br /&gt;
&lt;br /&gt;
= Mitigations =&lt;br /&gt;
&lt;br /&gt;
* clearing memory on free can stop attacks where there is no reallocation control (e.g. PAX_MEMORY_SANITIZE)&lt;br /&gt;
* segregating memory used by the kernel and by userspace can stop attacks where this boundary is crossed (e.g. PAX_USERCOPY)&lt;br /&gt;
* randomizing heap allocations can frustrate the reallocation efforts the attack needs to perform (e.g. OpenBSD malloc)&lt;br /&gt;
* reference counter overflow protection (PAX_REFCOUNT, HARDENED_ATOMIC)&lt;/div&gt;</summary>
		<author><name>DavidWindsor</name></author>
	</entry>
	<entry>
		<id>http://kernsec.org/wiki/index.php?title=Bug_Classes/Use_after_free&amp;diff=3837</id>
		<title>Bug Classes/Use after free</title>
		<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php?title=Bug_Classes/Use_after_free&amp;diff=3837"/>
		<updated>2017-02-04T05:12:59Z</updated>

		<summary type="html">&lt;p&gt;DavidWindsor: /* Mitigations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Details =&lt;br /&gt;
When a memory allocation is freed but there are still accidental users of that memory, an attacker may be able to control the new allocation that fills the freed area and then manipulate its contents so that the system uses its stale pointer while expecting a different structure than is currently present. If the structure contains function pointers, this allows for trivial execution control.&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
&lt;br /&gt;
* [http://perception-point.io/2016/01/14/analysis-and-exploitation-of-a-linux-kernel-vulnerability-cve-2016-0728/ keyring use-after-free]&lt;br /&gt;
&lt;br /&gt;
= Mitigations =&lt;br /&gt;
&lt;br /&gt;
* clearing memory on free can stop attacks where there is no reallocation control (e.g. PAX_MEMORY_SANITIZE)&lt;br /&gt;
* segregating memory used by the kernel and by userspace can stop attacks where this boundary is crossed (e.g. PAX_USERCOPY)&lt;br /&gt;
* randomizing heap allocations can frustrate the reallocation efforts the attack needs to perform (e.g. OpenBSD malloc)&lt;br /&gt;
* reference counter overflow protection (e.g. PAX_REFCOUNT, HARDENED_ATOMIC)&lt;/div&gt;</summary>
		<author><name>DavidWindsor</name></author>
	</entry>
	<entry>
		<id>http://kernsec.org/wiki/index.php?title=Bug_Classes/Use_after_free&amp;diff=3836</id>
		<title>Bug Classes/Use after free</title>
		<link rel="alternate" type="text/html" href="http://kernsec.org/wiki/index.php?title=Bug_Classes/Use_after_free&amp;diff=3836"/>
		<updated>2017-02-04T05:12:39Z</updated>

		<summary type="html">&lt;p&gt;DavidWindsor: /* Mitigations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Details =&lt;br /&gt;
When a memory allocation is freed but there are still accidental users of that memory, an attacker may be able to control the new allocation that fills the freed area and then manipulate its contents so that the system uses its stale pointer while expecting a different structure than is currently present. If the structure contains function pointers, this allows for trivial execution control.&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
&lt;br /&gt;
* [http://perception-point.io/2016/01/14/analysis-and-exploitation-of-a-linux-kernel-vulnerability-cve-2016-0728/ keyring use-after-free]&lt;br /&gt;
&lt;br /&gt;
= Mitigations =&lt;br /&gt;
&lt;br /&gt;
* clearing memory on free can stop attacks where there is no reallocation control (e.g. PAX_MEMORY_SANITIZE)&lt;br /&gt;
* segregating memory used by the kernel and by userspace can stop attacks where this boundary is crossed (e.g. PAX_USERCOPY)&lt;br /&gt;
* randomizing heap allocations can frustrate the reallocation efforts the attack needs to perform (e.g. OpenBSD malloc)&lt;br /&gt;
* reference counter overflow protection (PAX_REFCOUNT, HARDENED_ATOMIC)&lt;/div&gt;</summary>
		<author><name>DavidWindsor</name></author>
	</entry>
</feed>