c++ - Amortized analysis of std::vector insertion
How does the analysis of insertion at the back (push_back) in a std::vector work? It's amortized O(1) time per insertion. In particular, in a video on Channel9, Stephan T. Lavavej says (at 17:42 onwards) that for optimal performance, Microsoft's implementation of this method increases the capacity of the vector by around 1.5.
How is this constant determined?
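The growth factor is implementation-defined, so you can observe it directly. Here is a small sketch (assuming nothing beyond standard `std::vector`; the helper name `capacity_steps` is mine) that records every distinct capacity seen while pushing; the ratio between successive values hints at the factor your library uses:

```cpp
#include <cstddef>
#include <vector>

// Record every distinct capacity value seen while push_back-ing n ints.
// Ratios between consecutive entries approximate the growth factor.
std::vector<std::size_t> capacity_steps(int n) {
    std::vector<int> v;
    std::vector<std::size_t> steps;
    steps.push_back(v.capacity());
    for (int i = 0; i < n; ++i) {
        v.push_back(i);
        if (v.capacity() != steps.back()) steps.push_back(v.capacity());
    }
    return steps;
}
```

Printing `capacity_steps(1000)` should show ratios near 1.5 on Microsoft's implementation and near 2 on libstdc++ and libc++, though none of this is guaranteed by the standard.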
Assuming you mean push_back and not insertion, I believe the important part is to multiply by some constant (as opposed to grabbing N more elements each time), and as long as you do this you'll get amortized constant time. Changing the factor changes the average case and worst case performance.
Concretely: if your constant factor is too large, you'll have good average case performance, but bad worst case performance, especially as the arrays get big. For instance, imagine doubling (2x) a 10000-element vector just because you pushed the 10001th element. EDIT: As Michael Burr indirectly pointed out, the real cost here is that you'll grow your memory much larger than you need it to be. I would add to this that there are cache issues that affect speed if the factor is too large. Suffice it to say that there are real costs (memory and computation) if you grow much larger than you need.
However, if your constant factor is too small, say (1.1x), then you're going to have good worst case performance, but bad average performance, because you're going to incur the cost of reallocating too many times.
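To see why multiplying matters at all, here is a cost-model sketch (hypothetical helper names, not real vector code) that counts the element copies reallocations perform when capacity grows by a constant factor versus by a fixed chunk:

```cpp
#include <cstddef>

// Simulate pushing n elements into an array whose capacity is multiplied
// by `factor` whenever it overflows; return the total element copies
// performed by reallocations. (A cost model, not real std::vector code.)
std::size_t copies_multiplicative(std::size_t n, double factor) {
    std::size_t cap = 1, size = 0, copies = 0;
    while (size < n) {
        if (size == cap) {
            copies += size;  // reallocation moves all current elements
            cap = static_cast<std::size_t>(cap * factor) + 1;
        }
        ++size;
    }
    return copies;
}

// Same simulation, but capacity grows by a fixed chunk of `step` elements.
std::size_t copies_additive(std::size_t n, std::size_t step) {
    std::size_t cap = step, size = 0, copies = 0;
    while (size < n) {
        if (size == cap) {
            copies += size;
            cap += step;
        }
        ++size;
    }
    return copies;
}
```

With n = 1,000,000, the multiplicative version performs roughly n total copies, while the 1000-element-chunk version performs hundreds of millions: linear versus quadratic total work, which is exactly why grabbing "N more elements each time" loses the amortized constant.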
Also, see Jon Skeet's answer to a similar question previously. (Thanks @Bo Persson.)
A little more about the analysis: say you have n items you are pushing back and a multiplication factor of M. Then the number of reallocations will be roughly log base M of n (log_M(n)). And the i-th reallocation will cost proportional to M^i (M to the i-th power). Then the total time of all the pushbacks will be M^1 + M^2 + ... + M^(log_M(n)). The number of pushbacks is n, so you get this series (which is a geometric series, and reduces to roughly (nM)/(M-1) in the limit) divided by n. This is roughly a constant, M/(M-1).
For large values of M you will overshoot a lot and allocate much more than you reasonably need (as I mentioned above). For small values of M (close to 1) this constant M/(M-1) becomes large. This factor directly affects the average time.
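You can sanity-check that M/(M-1) constant numerically with a simple cost model (a sketch, not library code): count the element copies per push_back for a given factor. The copies alone approach 1/(M-1), and adding the one write each push performs anyway gives M/(M-1):

```cpp
#include <cstddef>

// Amortized element copies per push_back in a cost model where capacity
// is multiplied by m (rounded up) on each overflow.
double copies_per_push(std::size_t n, double m) {
    std::size_t cap = 1, size = 0, copies = 0;
    while (size < n) {
        if (size == cap) {
            copies += size;  // reallocation moves all current elements
            cap = static_cast<std::size_t>(cap * m) + 1;
        }
        ++size;
    }
    return static_cast<double>(copies) / static_cast<double>(n);
}
```

For m = 2 the result hovers near 1 (= 1/(m-1)); for m = 1.5 it hovers near 2. The exact value wobbles depending on where n lands between reallocations, but it stays bounded by a constant independent of n, which is the whole point.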