It's better to write your own structure that has an interface similar to the std:: stuff. Done right, it's far more compact and efficient. It's good practice to make your own String class as a replacement for std::string; it saves memory too.
Eh, string seems like one of the worst types to do this for. Strings have a *ton* of functionality and they're used a *lot*.
I'd much rather have the extremely reliable STL for something like that, even if it's a little slower. It's better than having bugs.
You would think so, but you would be wrong. And after you got tons of bugs from using std::string, you would realize it's actually not able to do a lot of what you need it to. Unicode for example, which will be a !!HUGE!! issue for any international support. UTF-8 is probably the most common solution, but several parts of the std library just plain do not support variable-length character encodings.
This means things work in many cases (because UTF-8 was designed to survive naive handling), but some really filthy things can still happen that will screw you over. For example, incrementing a std::string iterator (or an index) by 1 will not always get you the next character. It will get you the next byte, which, with a variable-length encoding, may still be part of the same character. So if, for example, you send your strings in limited-size packets over some IO device like a network and print them out immediately on the far side, some of your characters get mangled into garbage because their trailing bytes are missing. (Source: been there, debugged that.) And believe me, an odd off-by-one in your custom string class is way easier to diagnose and fix than a bug that mysteriously appears only on a foreign-language VM running your software, one you otherwise wouldn't want to use for testing.
Additionally, by implementing a good string class you can use the structure of UTF-8 to help you catch bugs elsewhere in your system! See, UTF-8 has a specific pattern to the encoding bits such that certain byte sequences are invalid. By asserting on invalid UTF-8 in commonly used operations, you can automatically detect memory stomps and other bad data getting into a string's memory whenever that string is used, at essentially no extra cost. This has alerted us to numerous subtle bugs in our code. It even caught an otherwise silent multithreaded stack corruption bug caused by a callback going rogue.
I personally spent about a month debugging various string issues in a project that (thanks to the interfaces it had to support) used a mix of wide characters, UTF-8, and ordinary narrow strings, with a bunch more bugs of that nature in the pipeline. I then spent a month writing a new string class and replacing the old uses of strings, most of which was refactoring thousands of call sites in the code base. After the rewrite there were a grand total of 2 non-trivial bugs in the implementation, both fixed in short order; the backlog of string-related bugs magically disappeared, and the new class alerted us to a memory stomp bug by the end of the month.
In conclusion: I'm not saying std stuff is always bad. What I am saying is that it is often missing some detail of the context in which you want to use it, and that missing detail makes your life awful unless you write your own. That could be international string support, serializing data for a network, or simply the ability to easily modify the internal algorithm to optimize for your particular use case. If you do it right, congratulations: you never need to do it again, because you now have your string class or your array class, which has been tested by your projects and whose internals you are familiar with and free to modify. The convenience and flexibility this affords you is far too often underestimated.