
fix: operator precedence bug in pack_ue8m0_to_int assertion (mantissa check always passes) #309

@kuishou68

Description


In deep_gemm/utils/math.py, the pack_ue8m0_to_int function contains a validation assertion written in a style that invites the classic operator precedence mistake, so it is worth checking carefully whether the mantissa test actually does what it appears to.

Suspected bug

```python
def pack_ue8m0_to_int(x: torch.Tensor):
    assert x.dtype == torch.float and x.size(-1) % 4 == 0
    assert (x.view(torch.int) & ((1 << 23) - 1) == 0).all()  # suspected precedence issue
```

This is the pattern that triggers the well-known precedence bug in C, where `==` binds tighter than `&` and the equivalent expression would be grouped as:

```python
x.view(torch.int) & (((1 << 23) - 1) == 0)
```

Under that grouping, `((1 << 23) - 1) == 0` is False (numerically 0), so the mantissa mask is never applied and the check would not test the mantissa bits at all. (Note that even then the assertion would not silently pass: `x & 0` is all zeros, and `.all()` on an all-zero tensor is False, so it would fail unconditionally.)

Python, however, gives `&` higher precedence than `==`: bitwise operators bind tighter than comparisons, the reverse of C. The assertion therefore already parses as `(x.view(torch.int) & ((1 << 23) - 1)) == 0` and correctly rejects inputs whose mantissa bits are nonzero. There is no functional bug, only a readability hazard for anyone carrying over C's precedence rules.
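The grouping is easy to confirm with plain integers in a REPL; a minimal sketch (the sample values are arbitrary illustrations, not taken from deep_gemm):

```python
mask = (1 << 23) - 1  # 0x7fffff: the float32 mantissa bits

a = 8        # has bits set inside the mantissa range
b = 1 << 23  # mantissa-range bits all zero

# Exactly as written in the assertion, with no extra parentheses:
print(a & mask == 0)    # False -> parsed as (a & mask) == 0
print(b & mask == 0)    # True

# The C-style grouping would collapse the mask to 0 and never test the bits:
print(a & (mask == 0))  # 0, regardless of a
```

The first two lines show the unparenthesized form already distinguishes the two values, which it could not do if the mask had collapsed to 0.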

Suggested change

Adding parentheses makes the intended grouping explicit, so readers coming from C do not have to stop and recheck Python's precedence:

```python
assert ((x.view(torch.int) & ((1 << 23) - 1)) == 0).all()
```

This form unambiguously checks that all mantissa bits (the lower 23 bits of each float32 word) are zero, which is the intended invariant for UE8M0 format values. In Python it is behaviorally identical to the original expression.
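For a self-contained check of the invariant without torch, the bit test can be sketched with the standard struct module; this mirrors, rather than reproduces, the library's code (`float32_bits` and `mantissa_is_zero` are illustrative helpers, not part of deep_gemm):

```python
import struct

MANTISSA_MASK = (1 << 23) - 1  # lower 23 bits of an IEEE 754 float32

def float32_bits(v: float) -> int:
    # Reinterpret a float32 as its raw 32-bit pattern, like x.view(torch.int).
    return struct.unpack('<I', struct.pack('<f', v))[0]

def mantissa_is_zero(values) -> bool:
    # Mirrors ((x.view(torch.int) & MANTISSA_MASK) == 0).all() for a list of floats.
    return all((float32_bits(v) & MANTISSA_MASK) == 0 for v in values)

print(mantissa_is_zero([1.0, 2.0, 0.5, 4.0]))  # True: exact powers of two
print(mantissa_is_zero([1.0, 3.0]))            # False: 3.0 has mantissa bits set
```

Exact powers of two are precisely the float32 values whose mantissa field is zero, which is why they are the representable UE8M0 magnitudes.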
