Take the following example, which is from an article by Bjarne Stroustrup that I learned about from Yakk a while back on these forums. Suppose I have a C++ vector and a C++ list. To each, I perform the following operation. First, I generate a bunch of random numbers; for each one, I find the place where it belongs in sorted order and insert it there. That find has to be a linear scan for the list, so to make it fair I do a linear scan of the vector as well rather than a binary search. Then, in the same order I generated the numbers, I remove them from the sequence, again doing a linear scan to find each one before removing it.
In both cases, the "find the right place" step is linear in the current size of the sequence. For the vector, the insert/delete operation is also linear in the size of the sequence. For the list, the insert/delete operation is constant.
So, the question is: at around what number of elements do you expect that the additional time spent for the linear insert/delete in the vector will cause the vector to fall behind in performance relative to the list? Answer in the spoiler.
Said another way: replacing the O(n) insert/delete operation with an O(1) insert/delete operation never pays off.
Why is this? How can it be? The answer is that looking at just the insert/delete operation is the wrong granularity, because that's not what was being timed. The entire operation was being timed, and that has the same complexity, O(n²), in both cases.
And when we replaced the vector with the list, the decrease in time spent doing insert/delete was more than offset by a linear increase in time spent elsewhere.
(I should also say that I carried out this test myself, using a few different configurations. My results agree with Stroustrup's, and seem to be pretty robust to changes in the exact process, including changes that cause all nodes in the list to be allocated in sorted order.)