ISO/IEC JTC1 SC22 WG21 P2795R1

Date: 2023-06-15

To: SG12, SG23, EWG, CWG

Thomas Köppe <tkoeppe@google.com>

Erroneous behaviour for uninitialized reads

Contents

  1. Revision history
  2. Summary
  3. Motivation
  4. Proposal: reading an uninitialized variable is erroneous
  5. Proposed wording
  6. Impact and implementability
  7. The broader picture: Erroneous behaviour in C++
  8. Tooling
  9. What is code?
  10. Related work
  11. Questions and answers
  12. Acknowledgements
  13. References

Revision history

Summary

We propose to address the safety problems of reading a default-initialized automatic variable (an “uninitialized read”) by adding a novel kind of behaviour for C++. This new behaviour, called erroneous behaviour, allows us to formally speak about “buggy” (or “incorrect”) code, that is, code that does not mean what it should mean (in a sense we will discuss). This behaviour is both “wrong” in the sense of indicating a programming bug, and also well-defined in the sense of not posing a safety risk.

Motivation

Pragmatically, there are very few C++ programs in the real world that are entirely correct. In terms of the Standard, that means most programs are not constrained by the specification at all, since they run into undefined behaviour. This is ultimately not very helpful to real software development efforts. The term “safety” has been mentioned as a concern in both C and C++, but it is a nebulous and slippery term that means different things to different people. A useful definition that has come up is that “safety is about the behaviour of incorrect programs”.

The motivating example of unsafe code that we address in this proposal is reading a default-initialized variable of automatic storage duration and scalar type:

Example M:

extern void f(int);

int main() {
  int x;   // default-initialized, value of x is indeterminate
  f(x);    // glvalue-to-prvalue conversion has undefined behaviour
}

This code is blatantly incorrect, but it occurs commonly as a programming error. The code is also unsafe because this error is exploitable and leads to real, serious vulnerabilities.
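
To make the safety risk concrete, here is a small illustrative sketch (not taken from this paper; the functions and the secret value are hypothetical) of how an uninitialized read can leak data that an earlier call left behind on the stack:

#include <cstdio>
#include <cstring>

void process_secret() {
  char secret[16];
  std::strcpy(secret, "hunter2");  // secret data written to the stack
  // ... use the secret ...
}

long make_token() {
  long token;    // uninitialized: may alias stale stack bytes left behind
                 // by process_secret()
  return token;  // undefined behaviour today; may leak the secret
}

int main() {
  process_secret();
  std::printf("%lx\n", make_token());  // could print leftover secret bits
}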

With increased community interest in safety, and a growing track record of exploited vulnerabilities stemming from errors such as this one, there have been calls to fix C++. The recent P2723R1 proposes to make this fix by changing the undefined behaviour into well-defined behaviour, and specifically to well-define the initialization to be zero. We will argue below that such an expansion of well-defined behaviour would be a great detriment to the understandability of C++ code. In fact, if we want to both preserve the expressiveness of C++ and also fix the safety problems, we need a novel kind of behaviour.

The excellent survey paper P2754R0 analyses a number of possible changes to automatic variable initialization. We had circulated the core idea of proposed erroneous behaviour on the reflector previously, and that option is contained in the survey. We reproduce the survey summary here, with minor modifications and added colour:

Conclusion from P2754R0:

  Proposed Solution             | Viability | Backward Compatibility   | Expressibility
  Always Zero-Initialize        | Viable    | Compatible               | Worse
  Zero-Initialize or Diagnose   | Unclear   | Correct-Code Compatible  | Unchanged
  Force-Initialize in Source    | Viable    | Incompatible             | Better
  Force-Initialize or Annotate  | Viable    | Incompatible             | Better
  Default Value, Still UB       | Nonviable | Compatible               | Unchanged
  Default Value, Erroneous      | Viable    | Compatible               | Unchanged
  Value-Initialize Only         | Unclear   | Unclear                  | Unclear

The introduction of a novel notion of erroneous behaviour is the only solution that is viable, compatible with existing code, and that does not sacrifice expressiveness of the language. (A detailed discussion of this aspect follows below.) This leads us to our main proposal.

Proposal: reading an uninitialized variable is erroneous

We propose to change the semantics of reading an uninitialized variable:

Default-initialization of an automatic variable initializes the variable with a fixed value defined by the implementation; however, reading that value is a conceptual error. Implementations are allowed and encouraged to diagnose this error, but they are also allowed to ignore the error and treat the read as valid.

This is a novel kind of behaviour. Reading an uninitialized value is never intended and a definitive sign that the code is not written correctly and needs to be fixed. At the same time, we do give this code well-defined behaviour, and if the situation has not been diagnosed, we want the program to be stable and predictable. This is what we call erroneous behaviour.

In other words, it is still “wrong” to read an uninitialized value, but if you do read it and the implementation does not otherwise stop you, you get some specific value. In general, implementations must exhibit the defined behaviour, at least up until a diagnostic is issued (if ever). There is no risk of running into the consequences associated with undefined behaviour (e.g. executing instructions not reflected in the source code, time-travel optimizations) when executing erroneous behaviour.

Recall:

extern void f(int);

int main() {
  int x;
  f(x);
}

Here is a comparison of the status quo, the proposal P2723R1, and this proposal, with regards to the above Example M.

Comparison of Example M under various proposals
C++23P2723R1
(default-init zero)
This proposal
undefined behaviour well-defined behaviour erroneous behaviour
definitely a bug may be intentionally using 0, or may be a bug definitely a bug
common compilers allow rejecting (e.g. -Werror), this selects a non-conforming compiler mode conforming compilers cannot diagnose anything conforming compilers generally have to accept, but can reject as QoI in non-conforming modes

A change from the previous revision P2795R0 is that the permission for an implementation to reject a translation unit “if it can determine that erroneous behaviour is reachable within that translation unit” has been removed: Richard Smith pointed out that such a determination is not generally possible. Any attempt to reject any erroneous behaviour at all would most likely have false positives, since it is in general impossible to determine whether a particular piece of code ends up being used. Whereas undefined behaviour in unused code does not currently prevent a build from succeeding, it could be rather disruptive if such code were now rejected for containing erroneous behaviour, even though it is never used. Therefore, in this revision we leave it as pure QoI whether implementations attempt to detect that erroneous behaviour might be encountered and issue appropriate warnings or errors, similar to how implementations currently attempt to warn about undefined behaviour (e.g. with the -Wuninitialized flag).

Note that we do not want to mandate that the specific value actually be zero (like P2723R1 does), since we consider it valuable to allow implementations to use different “poison” values in different build modes. Different choices are conceivable here. A fixed value is more predictable, but also prevents useful debugging hints, and poses a greater risk of being deliberately relied upon by programmers. As an optional further extension, one could also imagine an explicit annotation to request an uninitialized variable (e.g. [[uninitialized]]), so that reading that variable is never erroneous, only either well-defined or undefined.
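
As a sketch of that optional extension (the attribute name [[uninitialized]] is only the suggestion above, not adopted wording, and fill() is a hypothetical helper):

#include <cstddef>

void fill(char* out, std::size_t n);  // assumed external writer, for illustration

void h() {
  [[uninitialized]] char buffer[4096];  // hypothetical annotation: explicitly
                                        // requests no initialization
  fill(buffer, sizeof buffer);          // buffer is written before any read
  // Reading buffer before fill() would remain undefined behaviour, not
  // erroneous: the lack of initialization was explicitly requested.
}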

Proposed wording

Technically, it is not really the initialization that needs to change, but only the glvalue-to-prvalue conversion that is performed on the indeterminate initial value, so the concrete wording might look as follows. Modify [conv.lval, 7.3.2]:

The result of the conversion is determined according to the following rules:

To establish the meaning of erroneous behaviour, first add an entry to [3, intro.defs]:

3.? erroneous behaviour [defns.erroneous]

well-defined behavior (including implementation-defined and unspecified behavior) which is subject to additional conformance constraints

[Note 1 to entry: Erroneous behaviour is always the consequence of incorrect program code. Implementations are allowed, but not required to diagnose it ([4.1.1, intro.compliance.general]). — end note]

Finally, modify and add a list item to [4.1.1, intro.compliance.general] paragraph 2:

Impact and implementability

Implementation experience for applying erroneous behaviour to the default initialization of automatic variables is already available today. Clang and GCC expose an example of the proposed production behaviour when given the flag -ftrivial-auto-var-init=zero, with the caveat that this never exhibits the encouraged behaviour of diagnosing the error. Clang exposes the error-detecting behaviour when using its Memory Sanitizer (which currently detects undefined behaviour, and would have to be taught to also recognize erroneous behaviour).
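
For illustration, here is Example M annotated with how those existing modes treat it (a sketch; the exact behaviour of these flags is a quality-of-implementation matter, not something this proposal mandates):

extern void f(int);

int main() {
  int x;  // with -ftrivial-auto-var-init=zero (Clang; also GCC 12 and later),
          // x is forced to zero, approximating the proposed erroneous value
  f(x);   // with Clang's -fsanitize=memory, this read is reported at runtime,
          // approximating the encouraged diagnosis of the error
}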

The proposal primarily constitutes a change of the specification tools that we have available in the Standard, so that we have a formal concept of incorrect code that the Standard itself can talk about. It should only pose a minor implementation burden. Generally, the impact of changing an operation’s current undefined behaviour to erroneous behaviour is as follows:

The broader picture: Erroneous behaviour in C++

The current C++ Standard only speaks about well-defined and well-behaved programs, and imposes no requirements on any other program. This results in an overly simple dichotomy of a program either being correct as written, with specified behaviour, or being incorrect and entirely outside the scope of the Standard. It is not possible for a program to be incorrect, yet have its behaviour constrained by the Standard.

The newly proposed erroneous behaviour fills this gap. It is well-defined behaviour that is nonetheless acknowledged as being “incorrect”, and thus allows implementations to offer helpful diagnostics, while at the same time being constrained by the specification.

Adopting erroneous behaviour for a particular operation consists of replacing current undefined behaviour with a (well-defined) specification of that operation’s behaviour, explicitly called out as “erroneous”. This will in general have a performance cost, and we need some principles for when we change a particular construction to have erroneous behaviour. We propose the following.

Principles of erroneous behaviour:

To present further examples, we consult Shafik Yaghmour’s P1705R1, which lists occurrences of undefined behaviour in the Standard. We pick a selection of cases and comment on whether each one might be a candidate for conversion to erroneous behaviour.

UB in C++23, with proposed action and comment:

  - [lex.phases], splice results in universal character name. Action: leave as is. Obscure, low potential for harm.
  - Modifying a const value. Action: leave as is. Unlikely to happen (requires explicit, suspicious code); infeasible to specify behaviour.
  - ODR violation. Action: leave as is. Infeasible.
  - Read of indeterminate value. Action: change to erroneous. That is this paper!
  - Signed integer overflow. Action: could be changed to erroneous. The result of an overflowing operation could “erroneously be [some particular value]”. This is not an uncommon bug. We consider it of low importance, though, since it is not a major safety concern.
  - Unrepresentable arithmetic conversions. Action: could be changed to erroneous. Same as for signed integer overflow.
  - Bad bitshifts. Action: could be changed to erroneous. Same as for signed integer overflow.
  - Calling a function through the wrong function type. Action: leave as is. Uncommon; infeasible.
  - Invalid down-cast. Action: leave as is. Infeasible.
  - Invalid pointer arithmetic or comparison. Action: leave as is. Infeasible.
  - Invalid cast to enum. Action: unsure. This needs investigation. Perhaps the invalid value could be erroneously preserved. Unclear if this would be useful.
  - Various misuses of delete. Action: leave as is. Infeasible.
  - Type punning, union misuse, overlapping object access. Action: leave as is. Infeasible.
  - Null pointer dereference, null pointer-to-member dereference. Action: practically, leave as is. One could entertain a change to make a null pointer dereference erroneous, but the choice of behaviour is tricky. For scalars, the result could be some fixed value. Alternatively, the result could be termination. This would of course have a cost.
  - Division by zero. Action: could be changed to erroneous. Could erroneously result in some fixed value. The impact in the status quo is unclear; the change would have a cost.
  - Flowing off the end of a non-void function; returning from a [[noreturn]] function. Action: could be changed to erroneous. E.g. could erroneously call std::terminate. Mild additional cost. Unclear how valuable.
  - Recursively entering the initialization of a block-static variable. Action: unsure. Seems obscure.
  - Accessing an object outside its lifetime. Action: leave as is. Infeasible.
  - Calling a pure-virtual function in an abstract base {con,de}structor. Action: could be changed to erroneous. E.g. some particular pure-virtual handler could be called erroneously. This might already be the case on some implementations.
  - [class] doing things with members before construction has finished. Action: leave as is. Infeasible.
  - Library undefined behaviour. Action: case by case. A language-support facility such as “std::erroneous()” (which erroneously has no effect) could be used to allow for user-defined erroneous behaviour.
  - (speculative) Contract violation. Action: could be erroneous. Current work on contracts comes up against the question of what should happen in case of a contract violation. The notion of erroneous behaviour might provide a useful answer.
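
To illustrate the library-support idea from the last entries of the list above, here is a hedged sketch; std::erroneous() is hypothetical and does not exist in any implementation or current proposal text:

#include <cstddef>

// Hypothetical use of "std::erroneous()": a library function with a narrow
// contract could make precondition violations erroneous instead of undefined.
int get(const int* data, std::size_t size, std::size_t i) {
  if (i >= size) {        // precondition violation: a bug in the caller
    // std::erroneous();  // hypothetical: erroneously has no effect, so an
                          // implementation may diagnose or terminate here
    return 0;             // well-defined fallback value if not diagnosed
  }
  return data[i];
}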

Tooling

While we have been emphasising the importance of code readability and understandability, we must also consider the practicalities of actually compiling and running code. Whether code has meaning, and if so, which, impacts tools. There are two important, and sometimes opposed, use cases we would like to consider.

Production compilers

Getting code to run in production often comes with two important (and also opposed) expectations:

Undefined behaviour, and in particular its implications on the meaning of code, is increasingly exploited by compilers to optimise code generation. Because undefined behaviour can never have been intentional, compilers can derive transitive assumptions from its absence that allow for far-reaching optimisations. This is often desirable and beneficial for correct code (and demonstrates the value of unambiguously understandable code: even compilers can use this reasoning to determine how much work does and does not have to be done). However, for incorrect code this can expose vulnerabilities, and thus constitute a considerable lack of safety. P1093R0 discusses these performance implications of undefined behaviour.
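
As a small illustration (not from this paper) of such a derived assumption: because signed integer overflow is undefined, a compiler may treat it as impossible and simplify accordingly.

bool always_true(int i) {
  // Since signed overflow is undefined behaviour, a compiler may assume
  // i + 1 never overflows, and commonly folds this to `return true;`,
  // even though two's-complement wraparound would make it false at INT_MAX.
  return i + 1 > i;
}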

The proposed erroneous behaviour retains the same meaning of code as undefined behaviour for human readers, but the compiler has to accept that erroneous behaviour can happen. This constrains the compiler (as it has to ensure erroneous results are produced correctly), but in the event of incorrect code (which all erroneous behaviour indicates), the resulting behaviour is constrained by the Standard and does not create a safety hazard. In other words, erroneous behaviour has a potential performance cost compared to undefined behaviour, but is safer in the presence of incorrect code.
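
A sketch of that constraint in practice (function names are hypothetical): today a compiler may assume the uninitialized branch is never taken; under this proposal it must produce the fixed erroneous value instead.

extern void f(int);

void h(bool cond) {
  int x;    // under this proposal: holds the fixed, implementation-defined
            // erroneous value
  if (cond)
    x = 1;
  f(x);     // today this read is undefined when cond is false, so a compiler
            // may assume cond is true and pass 1 unconditionally; under this
            // proposal it must pass the erroneous value when cond is false
}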

Debug toolchains and sanitizers

The other major set of tools that software projects use are debugging tools. These include extra warnings on compilers, static analysers, and runtime sanitizers. The former two are good at catching some localised bugs early, but do not catch every bug. Indeed, one of the main limitations we seem to be discovering is that there is reasonable C++ code for which important analyses cannot be performed statically. (Note that P2687R0 proposes a safety strategy in which static analysis plays a major role.) Runtime sanitizers like ASAN, MSAN, TSAN, and UBSAN, on the other hand, have excellent abilities to detect undefined behaviour at runtime with virtually no false positives, but at a significant build and runtime cost.

Both runtime sanitizers and static analysis can use the code readability signal from both undefined and erroneous behaviour equally well. In both cases it is clear that the code is incorrect. For undefined behaviour, implementations are unconstrained anyway and tools may reject or diagnose at runtime. The goal of erroneous behaviour is to permit the exact same treatment, by allowing a conforming implementation to diagnose, terminate (and also reject) a program that contains erroneous behaviour.

In other words, erroneous behaviour retains the understandability and debuggability of undefined behaviour, but also constrains the implementation just like well-defined behaviour.

Usage profiles

The following toolchain deployment examples are based on real-world setups.

What is code?

I would like to discuss this position, and I hope to build consensus around it.

Code is communication. Primarily, code communicates an idea among humans. Humans work with code as an evolving and accumulating resource. Its role in software engineering projects is not too different from the role of traditional literature in the pursuit of science, technology, and engineering: literature is how individuals learn from and contribute to collective progress. The fact that code can also be interpreted and executed by computers is of course also important, but secondary. (There are many ways one can instruct a machine, but not all of them are suitable for building a long-term ecosystem.)

Code is written in programming languages, and its medium is source code, just as traditional literature is expressed in natural languages through media such as books, emails, or videos. Like all media, source code is imperfect and ambiguous. The purpose of a text is to communicate an idea, but the entire communication has to be funnelled through the medium, and understood by the audience. Without the author present to explain what they really meant, the text is the only clue to the original idea; any act of reading a text is always an act of forensic reconstruction of the original idea. If the text is written well and “clear”, then readers can perform this reconstruction with high confidence that they “got it right” and feel themselves understanding the idea; they are “on the same page” as the author. On the other hand, poor writing leads to ambiguous text, and reading requires interpretation and often guess-work. This is no different in natural languages than in computer code.

I would like to propose that we appreciate the value of code as communication with humans, and consider how well a programming language works for that purpose in the medium of source code. Source code is often shared among a large group of users, who are actively working with the code: code is only very rarely a complete black box that can be added to a project without further thought. At the very least, interfaces and vocabulary have to be understood. But commonly, too, code has to be modified in order to be integrated into a project, and to be evolved in response to new requirements. Last but not least, code often contains errors, which have to be found, understood, and fixed. All of the above efforts may be performed by a diverse group of users, none of whom need to have intimate familiarity with any one piece of code. There is value in having any competent user be able to read and understand any one piece of code, not necessarily in all its domain depth, but well enough to work with it in the context of a larger project. To extend the analogy with natural language above, this is similar to how a competent speaker of a language should be able to understand and integrate a well-made argument in a discussion, even if they are not themselves an expert in the domain of the argument.

How does all this connect to C++? As with code in any programming language, given a piece of code, a user should be able to understand the idea that the code is communicating. Absent a separate document that says “Here is what this code is meant to do:”, the main source of information available to the user is the behaviour of the code itself. Note how this has nothing to do with compiling and running code. At this point, the code and the idea it communicates exist only in the minds of the author and the reader; no compilation is involved.

How well the user understands the code depends on how ambiguous the code is, that is, how many different things it can mean. The user interprets the code by choosing a possible meaning from among the alternatives, under the assumption that the code is correct: in C++, that means correct in the sense of the Standard, being both well-formed and executing with well-defined behaviour. This is critical: the constraint of presumed correctness serves as a dramatic aid for interpretation. If we assume that code is correct, then we can dismiss any interpretation that would require incorrect behaviour, and we only have to decide among the few remaining valid interpretations. The more valid interpretations a construction has, the more ambiguity a user faces when interpreting the entire piece of code.

C++ defines only a very narrow set of behaviours, and everything else is left as the infamous undefined behaviour, which we could say is not C++ at all, in the sense that we assume that that is not what could possibly have been meant. Practically, of course, we would not dismiss undefined behaviour as “not C++”, but instead we would treat it as a definitive signal that the code is not communicating its idea correctly. (We could then either ask the author for clarification, or, if we are confident we have understood the correct idea anyway, we can fix the code to behave correctly. I claim that in this long-term perspective on code as a cultural good, buggy code with a clear intention is better than well-behaved, ambiguous code: if the intention is clear, then I can see whether the code is doing the right thing and fix it if not, but without knowing the intention, I have no idea whether the well-behaved code is doing what it is supposed to.)

Related work

Sean Parent’s presentation Reasoning About Software Correctness (and also his subsequent CppNorth 2022 keynote talk) gives a useful definition of “safety” adapted specifically to C++, one which explicitly concerns only incorrect code. He defines a function to be safe if it does not lead to undefined behaviour, even when its preconditions are violated.

JF Bastien's paper P2723R1 proposes addressing the safety concerns around automatic variable initialization by simply defining variables to be initialized to zero. The previous revision of that paper was what motivated the current proposal: the resulting behaviour is desirable, but the cost to code understandability is unacceptable to the present author.

The papers P2687R0 by Bjarne Stroustrup and Gabriel Dos Reis and P2410R0 by Bjarne Stroustrup take a more general look at how to arrive at a safe language. They recommend a combination of static analysis and restrictions on the use of the language so as to make static analysis very effective. However, on the subject of automatic variable initialization specifically they offer no new solution: P2687R0 only recommends either zero-initialization or annotated non-initialization (reading of which results in UB); in that regard it is similar to JF Bastien's proposal. P2410R0 states that “[s]tatic analysis easily prevents the creation of uninitialized objects”, but the intended result of this prevention, and in particular the impact on code understandability, is left open.

Tom Honerman proposed a system of “diagnosable events”, which is largely aligned with the values and goals of this proposal, and takes a quite similar approach: Diagnosable events have well-defined behaviour, but implementations are permitted to handle them in an implementation-defined way.

Davis Herring's paper P1492R2 proposes a checkpointing system that would stop undefined behaviour from having arbitrarily far-reaching effects. That is a somewhat different problem area from the present safety one, and in particular, it does not control the effects of the undefined behaviour itself, but merely prevents it from interfering with other, previous behaviour. (E.g. this would not prevent the leaking of secrets via uninitialized variables.)

The Ada programming language has a comparable notion of “bounded errors”, where the possible effects of an erroneous construct are constrained rather than fully undefined.

Paper P1093R0 by Bennieston, Coe, Gahir and Russel discusses the value of undefined behaviour in correct code and argues for the value of the compiler optimizations that undefined behaviour permits. This is essentially the tools’ perspective on the value of undefined behaviour for the interpretability of code, which we discussed above: both humans and compilers benefit from being able to understand code with fewer ambiguities. Compilers can use the absence of ambiguities to avoid generating unnecessary code. The paper argues that we should not break these optimizations lightly by making erstwhile undefined behaviour well-defined.

Questions and answers

Do you really mean that there can never be any UB in any correct code? There is of course always room for nuance and detail. If a particular construction is known to be UB, but still appropriate on some platform or under some additional assumptions, it is perfectly fine to use it. It should be documented/annotated sufficiently, and perhaps tools that detect UB need to be informed that the construction is intentional.

Why is static analysis not enough to solve the safety problem of UB? Why do we need sanitizers? Current C++ is not constrained enough to allow static analysis to accurately detect all cases of undefined behaviour. (For example, C++ allows initializing a variable via a call to a function in a separate translation unit or library.) Other languages like Rust manage to prevent unsafe behaviour statically, but they are more constrained (e.g. Rust does not allow passing an uninitialized value to a function). Better static analysis is frequently suggested as a way to address safety concerns in C++ (e.g. P2410R0, P2687R0), but this usually requires adopting a limited subset of C++ that is amenable to reliable static analysis. This does not help with the wealth of existing C++ code, neither with making it safe nor with making it correct. By contrast, runtime sanitizers can reliably point out when undefined behaviour is reached.
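
As a sketch of that limitation (with hypothetical function names), consider a variable whose initialization may or may not happen in another translation unit:

// In some other translation unit; whether it writes to `out` is not
// visible to a local static analysis of the caller below.
void maybe_init(int& out);

int use() {
  int x;
  maybe_init(x);  // initializes x, or not, depending on the definition
  return x;       // possibly an uninitialized read; not statically decidable
                  // from this translation unit alone
}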

Why is int x; any different from std::vector<int> v;? Several reasons. One is that vector is a class with internal invariants that needs to be destructible, so a well-defined initial state already suggests itself. The other is that a vector is a container of elements, and if the initializer does not provide any elements, then a vector with no elements is an unsurprising result. By contrast, if there is no initial value given for an int, there is no single number that is better or more obviously right than any other number. Zero is a common choice in other languages, but it does not seem helpful in the sense of making it easy to write unambiguous code if we allow a novel spelling of a zero-valued int. If you mean zero, just say int x = 0;.

Is this proposal better than defining int x; to be zero? It depends on whether you want to allow code to use int x; to mean, deliberately, that x is zero. The counter-position, shared by this author, is that zero should have no such special treatment, and all initialization should be explicit: int x = -1, y = 0, z = +1;. All numeric constants are worth seeing explicitly in code, and there is no reason to allow int x; as a valid alternative spelling for one particular value that already has a perfectly readable spelling. (An explicit marker for a deliberately uninitialized variable is still a good idea, and accessing such a variable would remain undefined behaviour, not becoming erroneous even under this proposal.)

Acknowledgements

Many thanks to Loïc Joly for extensive help on restructuring and refocussing this document, to Richard Smith for discussion and wording, and to Andrzej Krzemienski for wording suggestions and for pointing out a possible application to contracts. Thanks for valuable discussions and encouragement also to JF Bastien and to members of SG23.

References