Boost-Commit :
From: daniel_james_at_[hidden]
Date: 2007-12-29 15:52:23
Author: danieljames
Date: 2007-12-29 15:52:22 EST (Sat, 29 Dec 2007)
New Revision: 42346
URL: http://svn.boost.org/trac/boost/changeset/42346
Log:
Move the table summarizing methods for controlling bucket size next to the discussion of these methods. The paragraphs about insert and invalidating iterators move to after the table.
Text files modified:
branches/unordered/dev/libs/unordered/doc/buckets.qbk | 40 ++++++++++++++++++++--------------------
1 files changed, 20 insertions(+), 20 deletions(-)
Modified: branches/unordered/dev/libs/unordered/doc/buckets.qbk
==============================================================================
--- branches/unordered/dev/libs/unordered/doc/buckets.qbk (original)
+++ branches/unordered/dev/libs/unordered/doc/buckets.qbk 2007-12-29 15:52:22 EST (Sat, 29 Dec 2007)
@@ -90,26 +90,6 @@
below the max load factor, and set the maximum load factor to be the same as
or close to the hint - unless your hint is unreasonably small or large.
-It is not specified how member functions other than `rehash` affect
-the bucket count, although `insert` is only allowed to invalidate iterators
-when the insertion causes the load factor to reach the maximum load factor.
-Which will typically mean that insert will only change the number of buckets
-when this happens.
-
-In a similar manner to using `reserve` for `vector`s, it can be a good idea
-to call `rehash` before inserting a large number of elements. This will get
-the expensive rehashing out of the way and let you store iterators, safe in
-the knowledge that they won't be invalidated. If you are inserting `n`
-elements into container `x`, you could first call:
-
- x.rehash((x.size() + n) / x.max_load_factor() + 1);
-
-[blurb Note: `rehash`'s argument is the minimum number of buckets, not the
-number of elements, which is why the new size is divided by the maximum load factor. The
-+ 1 guarantees there is no invalidation; without it, reallocation could occur
-if the number of bucket exactly divides the target size, since the container is
-allowed to rehash when the load factor is equal to the maximum load factor.]
-
[table Methods for Controlling Bucket Size
[[Method] [Description]]
@@ -133,4 +113,24 @@
]
+It is not specified how member functions other than `rehash` affect
+the bucket count, although `insert` is only allowed to invalidate iterators
+when the insertion causes the load factor to be greater than or equal to the
+maximum load factor. For most implementations this means that `insert` will only
+change the number of buckets when this happens.
+
+In a similar manner to using `reserve` for `vector`s, it can be a good idea
+to call `rehash` before inserting a large number of elements. This will get
+the expensive rehashing out of the way and let you store iterators, safe in
+the knowledge that they won't be invalidated. If you are inserting `n`
+elements into container `x`, you could first call:
+
+ x.rehash((x.size() + n) / x.max_load_factor() + 1);
+
+[blurb Note: `rehash`'s argument is the minimum number of buckets, not the
+number of elements, which is why the new size is divided by the maximum load factor. The
+`+ 1` guarantees there is no invalidation; without it, reallocation could occur
+if the number of buckets exactly divides the target size, since the container is
+allowed to rehash when the load factor is equal to the maximum load factor.]
+
[endsect]
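
For illustration, the rehash-before-insert pattern described in the new text might be used along these lines. This is a minimal sketch, assuming `boost::unordered_map` from <boost/unordered_map.hpp>; the element type, count, and values are arbitrary placeholders, not part of the documentation.

    #include <boost/unordered_map.hpp>
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <utility>

    int main()
    {
        boost::unordered_map<int, std::string> x;
        x[0] = "zero";

        // Number of elements about to be inserted.
        std::size_t n = 1000;

        // Reserve enough buckets up front so that the bulk insert below
        // cannot raise the load factor to max_load_factor(), which is the
        // only point at which insert may invalidate iterators.
        x.rehash(static_cast<std::size_t>(
            (x.size() + n) / x.max_load_factor()) + 1);

        // It should now be safe to hold an iterator across the insertions.
        boost::unordered_map<int, std::string>::iterator it = x.find(0);

        for (std::size_t i = 1; i <= n; ++i)
            x.insert(std::make_pair(static_cast<int>(i), "element"));

        // 'it' still refers to the original element; no rehash occurred.
        std::cout << it->second << ", buckets: " << x.bucket_count() << "\n";

        return 0;
    }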