[IR] LangRef: state explicitly that floats generally behave according to IEEE-754 #102140

Merged · 7 commits · Oct 11, 2024
70 changes: 53 additions & 17 deletions llvm/docs/LangRef.rst
@@ -2405,6 +2405,8 @@ example:
function which has an ``ssp`` or ``sspstrong`` attribute, the calling
function's attribute will be upgraded to ``sspreq``.

.. _strictfp:

``strictfp``
This attribute indicates that the function was called from a scope that
requires strict floating-point semantics. LLVM will not attempt any
@@ -3582,11 +3584,12 @@ status flags are not observable. Therefore, floating-point math operations do
not have side effects and may be speculated freely. Results assume the
Contributor:

Also the denormal exception

Contributor Author (@RalfJung, Aug 13, 2024):

What is the denormal exception? Is this about what happens when denormal-fp-math is set, but the default is to be IEEE-compatible?

Given that IEEE says that denormals are not flushed and LLVM assumes the same by default, I don't think this is an exception from "IR float ops behave according to IEEE".

round-to-nearest rounding mode, and subnormals are assumed to be preserved.

Running LLVM code in an environment where these assumptions are not met can lead
to undefined behavior. The ``strictfp`` and ``denormal-fp-math`` attributes as
well as :ref:`Constrained Floating-Point Intrinsics <constrainedfp>` can be used
to weaken LLVM's assumptions and ensure defined behavior in non-default
floating-point environments; see their respective documentation for details.
Running LLVM code in an environment where these assumptions are not met
typically leads to undefined behavior. The ``strictfp`` and ``denormal-fp-math``
attributes as well as :ref:`Constrained Floating-Point Intrinsics
<constrainedfp>` can be used to weaken LLVM's assumptions and ensure defined
behavior in non-default floating-point environments; see their respective
documentation for details.

.. _floatnan:
Contributor:

What do you mean by "floating-point instruction" here? Is sqrt included?

I understand that the main point here is to say that without further IR constructs an instruction like fdiv is assumed to be correctly rounded. IEEE-754 also assumes this of sqrt. I believe the latest version specifies that other math functions should also return correctly rounded results. That's why I think it needs to be explicit here which ones you mean.

Contributor Author (@RalfJung, Aug 22, 2024):

I meant all the operations that have an equivalent IEEE-754 operation. So yes that would include sqrt, though I was under the impression that it does not include transcendental functions.

I am not sure what the best way to say that is. Having a list seems awkward? Should each such operation have a comment, like "This corresponds to <op> in IEEE-754, so if the argument is an IEEE float format then the :ref:`floating-point semantics <floatsem>` guarantees apply."?

Contributor:

This is something that is hard to come up with a good term for. IEEE 754 has a core list of operations in section 5 which is a good starting point, but these omit the minimum/maximum operations (which are section 9.6). Section 9 is "recommended operations", and 9.2 is the main list of transcendental functions you're thinking of; IEEE 754 requires that they be correctly rounded, but C explicitly disclaims that requirement in Annex F. There's also a few functions in C that aren't in IEEE 754, notably ldexp and frexp.

(Note too that it was recently brought up in the Discourse forums that the libm intrinsics are meant to correspond to libm semantics, not IEEE 754 semantics.)
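
To make the distinction concrete, here is a minimal IR sketch (the function name is made up, and which calls land on which side is exactly the question being discussed): ``fdiv`` and ``llvm.sqrt`` have direct counterparts in IEEE-754 section 5, while ``llvm.sin`` is one of the libm-style intrinsics mentioned above.

```llvm
define double @example(double %a, double %b) {
  %q = fdiv double %a, %b                      ; IEEE-754 division (section 5)
  %r = call double @llvm.sqrt.f64(double %q)   ; IEEE-754 squareRoot (section 5)
  %s = call double @llvm.sin.f64(double %r)    ; libm-style intrinsic; precision follows libm,
                                               ; not the section 9.2 correctly-rounded recommendation
  ret double %s
}

declare double @llvm.sqrt.f64(double)
declare double @llvm.sin.f64(double)
```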

Contributor Author (@RalfJung, Aug 27, 2024):

minimum/maximum don't do any rounding, and the existing docs already seem to describe their semantics unambiguously, which makes this clarification much less relevant for them. So maybe we should just say that this is about the core operations listed in section 5?


Contributor:

Do we need to specify "all machines that support IEEE-754 arithmetic"? I don't know if we support any targets that don't support IEEE-754, but it seems like there should be some provision for that. The C standard, for instance, talks about some transformations that are legal on "IEC 60559 machines."

Or are we saying that architectures that don't support IEEE-754 should indicate the differences in the IR or use a different type?

Contributor Author (@RalfJung, Aug 22, 2024):

Right now, LLVM assumes that all backends implement IEEE-754 arithmetic, and will miscompile code if the backend doesn't do that. One example of a target that does not implement IEEE-754 arithmetic is x86 without SSE, and #89885 has examples of code that gets miscompiled due to that.

The point of this PR is to make that more explicit. If instead the goal is to make LLVM work with backends and targets that do not implement IEEE-754 arithmetic, that will require changes to optimization passes.

Contributor:

We're already at the point where we expect float et al to correspond to the IEEE 754 binary32 et al formats. (This is documented, although somewhat subtly, by the current LangRef). There is also agreement at this point that excess precision (à la x87) is not correct behavior for LLVM IR, although it's not (yet) explicitly documented in the LangRef.

The only hardware deviation from IEEE 754 that we're prepared to accept at this point is denormal handling. I'm reluctant to offer too many guarantees on denormal handling because I'm not up to speed on the diversity of common FP hardware with respect to denormals, but I'm pretty sure there is hardware in use that mandates denormal flushing (e.g., the AVX512-BF16 stuff is unconditionally default RM+DAZ+FTZ, with changing MXCSR having no effect).

In short, we already require that hardware supporting LLVM be IEEE 754-ish; this is tightening up the definition in the LangRef to cover what we already agree to be the case. In the putative future that we start talking about cases where float et al are truly non-IEEE 754 types (say, Alpha machines, or perhaps posits will make it big), then we can talk about how to add support for them in LLVM IR (which, given the history of LLVM, probably means "add new types", not "float means something different depending on target triple").

Contributor Author (@RalfJung, Aug 27, 2024):

> The only hardware deviation from IEEE 754 that we're prepared to accept at this point is denormal handling.

Even there, the pass that causes trouble in #89885 would lead to miscompilations. Analysis/ScalarEvolution will assume that float ops that don't return NaNs produce a given bit pattern (including denormals), and if codegen later generates code that produces a different bit pattern, the result is a miscompilation. If we don't accept "always return the same bit-identical result on all machines", then this pass (and possibly others) has to be changed.

So non-standard denormal handling is only supported with an explicit marker, which works much like the markers required for non-default FP exception handling.
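
As an illustration of such a marker, a function expected to run with denormal outputs flushed could carry the ``denormal-fp-math`` attribute explicitly (a minimal sketch; the function name and the chosen attribute value are just examples):

```llvm
; With this attribute, passes may not assume the IEEE-754 subnormal bit
; pattern for results of FP operations in this function.
define float @scale_down(float %x) #0 {
  %r = fmul float %x, 0x3810000000000000   ; multiply by 2^-126; may produce a subnormal
  ret float %r
}

attributes #0 = { "denormal-fp-math"="preserve-sign,ieee" }
```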

@@ -3608,10 +3611,11 @@ are not "floating-point math operations": ``fneg``, ``llvm.fabs``, and
``llvm.copysign``. These operations act directly on the underlying bit
Contributor Author:

This paragraph is basically an exact duplicate of the second paragraph in the floatenv section, so I am inclined to remove it... but your draft did include such a sentence.

The way I view it, the floatsem section is just about the IEEE float formats. This paragraph is true for all formats so it should be in the floatenv section.

representation and never change anything except possibly for the sign bit.
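
As a small illustration of such bit-level operations (sketch only, names made up):

```llvm
; fneg and llvm.copysign only manipulate the sign bit of the underlying
; representation; they are not "floating-point math operations" and, for
; example, never quiet a signaling NaN payload.
define float @flip_sign(float %x, float %y) {
  %n = fneg float %x
  %c = call float @llvm.copysign.f32(float %n, float %y)
  ret float %c
}

declare float @llvm.copysign.f32(float, float)
```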

For floating-point math operations, unless specified otherwise, the following
rules apply when a NaN value is returned: the result has a non-deterministic
sign; the quiet bit and payload are non-deterministically chosen from the
following set of options:
Floating-point math operations that return a NaN are an exception from the
general principle that LLVM implements IEEE-754 semantics. Unless specified
otherwise, the following rules apply whenever the IEEE-754 semantics say that a
NaN value is returned: the result has a non-deterministic sign; the quiet bit
and payload are non-deterministically chosen from the following set of options:

- The quiet bit is set and the payload is all-zero. ("Preferred NaN" case)
- The quiet bit is set and the payload is copied from any input operand that is
@@ -3657,6 +3661,40 @@ specification on some architectures:
LLVM does not correctly represent this. See `issue #60796
<https://github.com/llvm/llvm-project/issues/60796>`_.

.. _floatsem:

Floating-Point Semantics
------------------------

This section defines the semantics for core floating-point operations on types
that use a format specified by IEEE-754. These types are: ``half``, ``float``,
``double``, and ``fp128``, which correspond to the binary16, binary32, binary64,
and binary128 formats, respectively. The "core" operations are those defined in
section 5 of IEEE-754, which all have corresponding LLVM operations.

The value returned by those operations matches that of the corresponding
IEEE-754 operation executed in the :ref:`default LLVM floating-point environment
<floatenv>`, except that the behavior of NaN results is instead :ref:`as
specified here <floatnan>`. In particular, such a floating-point instruction
returning a non-NaN value is guaranteed to always return the same bit-identical
result on all machines and optimization levels.
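
For instance (an illustrative sketch with a made-up function name), this guarantee is what lets a constant folder commit to a specific bit pattern:

```llvm
; It is always sound to fold this division to the binary32 constant
; 0x3E800000 (0.25); a target or pass producing a different bit pattern
; here would be a miscompilation.
define float @quarter() {
  %r = fdiv float 1.0, 4.0
  ret float %r
}
```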

This means that optimizations and backends may not change the observed bitwise
result of these operations in any way (unless NaNs are returned), and frontends
can rely on these operations providing correctly rounded results as described in
Contributor:

Term is usually "correctly rounded" not "perfectly rounded"

Contributor Author:

Ah, fair. It should be clear from context who is in charge of defining "correct" here (namely, IEEE-754).

I am adding these edits as new commits so it's easy to see what changed; I can squash them later or now if you prefer.

the standard.

(Note that this is only about the value returned by these operations; see the
:ref:`floating-point environment section <floatenv>` regarding flags and
exceptions.)
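
A minimal sketch of the alternative for code that does need to observe flags and exceptions, using the constrained-intrinsic form referenced above (function name made up):

```llvm
; The constrained form, together with strictfp, keeps the rounding mode and
; exception behavior observable instead of assuming the default environment.
define float @observable_add(float %a, float %b) strictfp {
  %r = call float @llvm.experimental.constrained.fadd.f32(float %a, float %b, metadata !"round.dynamic", metadata !"fpexcept.strict")
  ret float %r
}

declare float @llvm.experimental.constrained.fadd.f32(float, float, metadata, metadata)
```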
Contributor:

and metadata (e.g. !fpmath)

Contributor Author:

Ah, I didn't know about that one, thanks. I added a mention, and also used the opportunity to add links for strictfp and denormal-fp-math.


Various flags, attributes, and metadata can alter the behavior of these
operations and thus make them not bit-identical across machines and optimization
levels any more: most notably, the :ref:`fast-math flags <fastmath>` as well as
the :ref:`strictfp <strictfp>` and :ref:`denormal-fp-math <denormal_fp_math>`
attributes and :ref:`!fpmath metadata <fpmath-metadata>`. See their
corresponding documentation for details.
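
A brief sketch of how two of these opt-outs look in IR (function name made up; the 2.5 ULP bound is just an example value):

```llvm
; Both operations below opt out of the bit-exact guarantee described above:
; the first via a fast-math flag, the second via !fpmath metadata permitting
; up to 2.5 ULPs of error.
define float @relaxed(float %a, float %b) {
  %x = fadd fast float %a, %b
  %y = fdiv float %a, %b, !fpmath !0
  %z = fadd float %x, %y
  ret float %z
}

!0 = !{ float 2.5 }
```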

.. _fastmath:

Fast-Math Flags
@@ -3943,7 +3981,7 @@ Floating-Point Types
- Description

* - ``half``
- 16-bit floating-point value
- 16-bit floating-point value (IEEE-754 binary16)

* - ``bfloat``
- 16-bit "brain" floating-point value (7-bit significand). Provides the
@@ -3952,24 +3990,20 @@
extensions and Arm's ARMv8.6-A extensions, among others.

* - ``float``
- 32-bit floating-point value
- 32-bit floating-point value (IEEE-754 binary32)

* - ``double``
- 64-bit floating-point value
- 64-bit floating-point value (IEEE-754 binary64)

* - ``fp128``
- 128-bit floating-point value (113-bit significand)
- 128-bit floating-point value (IEEE-754 binary128)

* - ``x86_fp80``
- 80-bit floating-point value (X87)

* - ``ppc_fp128``
- 128-bit floating-point value (two 64-bits)

The binary format of half, float, double, and fp128 correspond to the
IEEE-754-2008 specifications for binary16, binary32, binary64, and binary128
respectively.

X86_amx Type
""""""""""""

@@ -6925,6 +6959,8 @@ For example,
%2 = load float, ptr %c, align 4, !alias.scope !6
store float %0, ptr %arrayidx.i, align 4, !noalias !7

.. _fpmath-metadata:

'``fpmath``' Metadata
^^^^^^^^^^^^^^^^^^^^^
