From: Kevin Wheatley (hxpro_at_[hidden])
Date: 2004-04-13 06:17:50
Daryle Walker wrote:
> int
> lowest_bit( uintmax_t x )
> {
> return integer_log2( x - (x & ( x - 1 )) );
> }
[snip]
> Why does the "lowest_bit" function work? I can't see it.
It looks to be using the property of (x & ( x - 1 )) having a value of
0 when x is a power of two *or when x is 0*,
think binary...
when x is a power of 2:
[<leading zeros>] 1 [<trailing zeros>]
then x - 1 becomes:
[<leading zeros>] 0 [<trailing 1s>]
thus (x & ( x - 1 )) becomes 0, because no bit set in the original is
still set after subtracting 1.
This then means lowest_bit returns integer_log2( x - 0 ).
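For example, with x = 8 (binary 1000), x - 1 = 0111, so x & ( x - 1 )
is 0 and lowest_bit returns integer_log2( 8 - 0 ) = 3.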
In the case where more than a single bit is set in x, subtracting 1
still only affects the lowest set bit and the trailing zeros, so the
prefix of upper bits survives unchanged in (x & ( x - 1 )) (x & x == x).
The expression ( x - (x & ( x - 1 )) ) therefore subtracts off those
upper set bits, leaving you with a single bit (the lowest bit set) as
above.
So as long as you don't mind the results for x == 0 and x == 1 (for
x == 0 the argument passed to integer_log2 is itself 0), you're OK,
Kevin
--
| Kevin Wheatley                   | These are the opinions of |
| Senior Do-er of Technical Things | nobody and are not shared |
| Cinesite (Europe) Ltd            | by my employers           |