I don’t want to derail the thread too much so this shall be my last post about templates.
When used correctly, I find them to be more elegant than other approaches.
For example, consider:
SomeStruct a;
SomeStruct b;
std::memcpy(&a, &b, sizeof(SomeStruct));
And compare it with:
SomeStruct a;
SomeStruct b;
memoryCopy(a, b);
Made possible thanks to the template function:
#include <cstring> // for std::memcpy

template< typename T >
T & memoryCopy(T & destination, const T & source)
{
    // std::memcpy returns void *, so cast back to T * before dereferencing
    return *static_cast<T *>(std::memcpy(&destination, &source, sizeof(T)));
}
The latter is less typing, it’s type safe (both arguments must be the same type), and the caller never has to think about null pointers. With <type_traits> it’s even possible to make it a compile error if the programmer tries to invoke undefined behaviour.
To quote cppreference:
If either dest or src is a null pointer, the behavior is undefined, even if count is zero.
If the objects are not TriviallyCopyable, the behavior of memcpy is not specified and may be undefined.
The following definition would prevent both those situations:
#include <cstring>     // std::memcpy
#include <type_traits> // std::is_trivially_copyable

template< typename T, bool isTriviallyCopyable = std::is_trivially_copyable<T>::value >
T & memoryCopy(T & destination, const T & source)
{
    static_assert(isTriviallyCopyable, "Type is not trivially copyable, memcpy is undefined. Use memoryCopy<T, true> to override");
    return *static_cast<T *>(std::memcpy(&destination, &source, sizeof(T)));
}
That may not be very elegant to write, but it only has to be written once.
To the end user the API isn’t difficult - it’s called just like a normal function, with no sizeof or pointers to wrangle - and if the compiler is doing its job (which it is most of the time) the template function will be inlined anyway, so the end result is the same machine code.
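For instance, assuming the memoryCopy definition above is in scope (SomeStruct and NonTrivial here are just illustrative types):

#include <string>

struct SomeStruct { int x; float y; }; // trivially copyable
struct NonTrivial { std::string s; };  // not trivially copyable

int main()
{
    SomeStruct a{}, b{1, 2.0f};
    memoryCopy(a, b); // compiles, copies b into a

    NonTrivial c, d;
    // memoryCopy(c, d);                   // compile error: the static_assert fires
    // memoryCopy<NonTrivial, true>(c, d); // the explicit override - compiles, but still undefined behaviour
}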
There’s nothing to stop people doing that with non-template classes or functions, too. The reason templates have to be that way is the archaic compiler design that C++ inherited from C.
.cpp files are compiled as separate translation units - they never see each other’s contents. That wouldn’t work for templates, because templates aren’t classes or functions; they’re just patterns used for generating classes and functions, so the whole pattern needs to be visible to the compiler wherever it’s instantiated.
Macros can’t have their definitions separated into a .cpp file for a similar reason - macros are just patterns describing ways to manipulate text. It’s still possible to put the implementation in a separate file from the declaration, but it can’t be a .cpp file - it would have to be another header (or, as some have taken to doing, a .tpp file), and that file would have to be included at the end of the header that declares the template.
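A minimal sketch of that layout, with hypothetical file names:

// memory_copy.h
#ifndef MEMORY_COPY_H
#define MEMORY_COPY_H

template< typename T >
T & memoryCopy(T & destination, const T & source);

// pull the implementation back into every translation unit that includes this header
#include "memory_copy.tpp"

#endif

// memory_copy.tpp
#include <cstring>

template< typename T >
T & memoryCopy(T & destination, const T & source)
{
    return *static_cast<T *>(std::memcpy(&destination, &source, sizeof(T)));
}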
I’ve used a similar technique to this before when writing particularly large templates.
Isocpp has a very good explanation here.
I find C’s restrictions cause more work: being forced to write func(a, b) instead of a.func(b), worrying about passing the wrong kind of a, having to worry about null pointers - those sorts of things.
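Roughly the contrast I mean, using a hypothetical Buffer type:

#include <cstddef>

// C style: a free function taking a pointer - the caller has to pass the
// right kind of object, and the function has to guard against null
struct Buffer { std::size_t used; };

void buffer_clear(Buffer * buffer)
{
    if (buffer == nullptr) return; // defensive null check
    buffer->used = 0;
}

// C++ style: a member function - no pointer, no null, and it can only
// ever be called on an actual Buffer
class SafeBuffer
{
public:
    void clear() { used = 0; }
private:
    std::size_t used = 0;
};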
Unfortunately Haskell isn’t particularly suitable for embedded systems. A lot of its constructs are memory hungry and it relies on a garbage collector.
To quote something I found on ycombinator:
The problem with Haskell is that while it is very effective at expressing ideas and logic, it is just as ineffective at expressing runtime behavior. In a world where everything is lazily evaluated by default and garbage collected, it is very difficult to reason about how the code will actually execute. Are you trashing the cache? Fragmenting your heap? Are you misusing the instruction cache?
Which is quite close to my sentiment.
I know C++ intimately and I can take a very good guess at what the actual behaviour of the program is in terms of the stack, the heap and sometimes even the assembly generated, and I can usually do that even if I’m using several layers of templates and abstraction.
I cannot say the same for Haskell.
If you miss type inference though, C++ has the auto keyword:
const auto a = 5;
const auto b = 10;
const auto c = a + b;
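It pulls its weight on uglier types too - a quick sketch:

#include <map>
#include <string>
#include <vector>

int main()
{
    std::map<std::string, std::vector<int>> table;

    // deduced as std::map<std::string, std::vector<int>>::iterator
    for (auto it = table.begin(); it != table.end(); ++it)
    {
        auto & values = it->second; // deduced as std::vector<int> &
        values.push_back(42);
    }
}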