I faced this problem recently: I needed to append a string with millions of characters, so I ended up writing my own class. It is simply a C array of characters, encapsulated in a class that keeps track of the array size and the number of allocated bytes. In the benchmark below it is about 10 times faster than SDS and std::string. Code is at
https://github.com/pedro-vicente/table-string
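The repository has the actual implementation; as a rough illustration of the idea described above (a raw char buffer plus a size and a capacity, with appends done via `memcpy`), a minimal sketch might look like this. The member names and the doubling growth policy here are assumptions, not the repository's exact code.

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Hypothetical sketch of a table_str_t-like class: a plain C array of
// characters plus the current length and the number of allocated bytes.
class table_str_t
{
public:
  explicit table_str_t(size_t capacity)
    : m_buf(static_cast<char*>(std::malloc(capacity))),
      m_size(0),
      m_capacity(capacity)
  {
  }

  ~table_str_t() { std::free(m_buf); }

  // append 'len' bytes; grow by doubling only when the buffer is full
  void add(const char* s, size_t len)
  {
    if (m_size + len > m_capacity)
    {
      if (m_capacity == 0) m_capacity = 1;
      while (m_size + len > m_capacity) m_capacity *= 2;
      m_buf = static_cast<char*>(std::realloc(m_buf, m_capacity));
    }
    std::memcpy(m_buf + m_size, s, len); // one memcpy, no per-byte work
    m_size += len;
  }

  const char* data() const { return m_buf; }
  size_t size() const { return m_size; }

private:
  char* m_buf;       // raw character buffer
  size_t m_size;     // bytes currently used
  size_t m_capacity; // bytes allocated
};
```

When the constructor capacity covers the whole workload, `add` reduces to a `memcpy` and a size increment, which is where the speedup comes from.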
Benchmarks
For Visual Studio 2015, x86 debug build:
| API                   | Seconds |
| --------------------- | ------- |
| SDS                   | 19      |
| std::string           | 11      |
| std::string (reserve) | 9       |
| table_str_t           | 1       |
```cpp
clock_gettime_t timer;
const size_t nbr = 1000 * 1000 * 10;
const char* s = "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb";
size_t len = strlen(s);
timer.start();
table_str_t table(nbr * len);
for (size_t idx = 0; idx < nbr; ++idx)
{
  table.add(s, len);
}
timer.now("end table");
timer.stop();
```
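`clock_gettime_t` is the benchmark's own timer helper; a minimal stand-in built on `std::chrono::steady_clock` could look like this (the `start`/`now`/`stop` names mirror the usage above, but the body is an assumption):

```cpp
#include <cassert>
#include <chrono>
#include <cstdio>

// Minimal stand-in for the clock_gettime_t helper used in the benchmark.
struct clock_gettime_t
{
  std::chrono::steady_clock::time_point t0;

  void start() { t0 = std::chrono::steady_clock::now(); }

  // print and return the elapsed seconds since start(), with a label
  double now(const char* label)
  {
    std::chrono::duration<double> d = std::chrono::steady_clock::now() - t0;
    std::printf("%s: %.1f seconds\n", label, d.count());
    return d.count();
  }

  void stop() {} // nothing to release for a chrono-based timer
};
```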
EDIT
Maximum performance is achieved by allocating the whole string up front (the constructor's size parameter). If only a fraction of the total size is pre-allocated, performance drops. Example with 100 incremental allocations:
```
std::string benchmark append string of size 33, 10000000 times
end str: 11.0 seconds 11.0 total

std::string reserve benchmark append string of size 33, 10000000 times
end str reserve: 10.0 seconds 10.0 total

table string benchmark with pre-allocation of 330000000 elements
end table: 1.0 seconds 1.0 total

table string benchmark with pre-allocation of ONLY 3300000 elements, allocation is MADE 100 times...patience...
end table: 9.0 seconds 9.0 total
```
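The same up-front-allocation idea explains the `std::string (reserve)` row: one `reserve` call before the loop replaces many incremental reallocations. A sketch of that variant (the function name and parameters here are illustrative, not from the repository):

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Sketch of the std::string reserve benchmark variant: a single
// allocation up front, then repeated appends into reserved storage.
std::string bench_reserve(const char* s, size_t nbr)
{
  size_t len = std::strlen(s);
  std::string str;
  str.reserve(nbr * len); // one allocation covering the whole workload
  for (size_t idx = 0; idx < nbr; ++idx)
  {
    str.append(s, len); // append never reallocates after the reserve
  }
  return str;
}
```

Even with `reserve`, `std::string::append` still does bookkeeping on every call, which is consistent with the reserve variant only closing part of the gap in the table above.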